Camera doesn't capture anything in AirSim - unreal-engine4

In my code I have the following line:
imgs = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)], vehicle_name=name)
imgs doesn't contain anything. Do I need to enable a camera in AirSim? I am using Unreal Engine 4. Please help me out, as very little documentation about AirSim is available on the web.

Did you modify the settings file (settings.json) to give a specific description of the camera?
And did you press the Play button to run the simulation?
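If both of those are done, a minimal sanity check from Python looks roughly like this (just a sketch; the MultirotorClient type and the camera name "0" are assumptions and must match the SimMode and camera names in your settings.json):

import airsim

# Connect to the running simulation (press Play in Unreal first).
client = airsim.MultirotorClient()  # or airsim.CarClient(), depending on your SimMode
client.confirmConnection()

# Request one uncompressed Scene image from camera "0" of the default vehicle.
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
])

if not responses or len(responses[0].image_data_uint8) == 0:
    print("No image returned - check the camera/vehicle names in settings.json")
else:
    r = responses[0]
    print("Got image: %dx%d, %d bytes" % (r.width, r.height, len(r.image_data_uint8)))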

Related

ARAnchorManager.HostCloudAnchor(anchor) returns null — ARCore Extensions for AR Foundation

Calling ARCloudAnchor cloudAnchor = manager.HostCloudAnchor(anchor) gives null for cloudAnchor (where manager is of type ARAnchorManager and anchor is of type ARAnchor). I have the API key set up for ARCore Extensions with the GCP server. Help is much appreciated.
Maybe your feature map quality is not good enough. Try calling manager.EstimateFeatureMapQualityForHosting(GetPoseCamera()) to check the quality:
INSUFFICIENT: not good enough to resolve the cloud anchor later (hosting can fail at this quality and you may receive null for the cloud anchor) -> try moving the device around the object.
SUFFICIENT: it is okay.
GOOD: it is good.
Note: you have to define the GetPoseCamera() function yourself (it is easy; it just contains the position and rotation of the camera).
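A rough sketch of what that check and GetPoseCamera() can look like (this assumes an ARAnchorManager reference called manager assigned in the Inspector, and that Camera.main is the AR camera):

using Google.XR.ARCoreExtensions;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class HostingQualityCheck : MonoBehaviour
{
    public ARAnchorManager manager; // assigned in the Inspector

    // Build a Pose from the current AR camera transform.
    Pose GetPoseCamera()
    {
        Transform cam = Camera.main.transform;
        return new Pose(cam.position, cam.rotation);
    }

    void Update()
    {
        // Returns Insufficient, Sufficient or Good.
        FeatureMapQuality quality = manager.EstimateFeatureMapQualityForHosting(GetPoseCamera());
        Debug.Log("Feature map quality: " + quality);
    }
}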
Took me a while, but I fixed this.
Even though the ARCore Extensions sample gets cloudAnchor.cloudAnchorState == CloudAnchorState.Success immediately after calling manager.HostCloudAnchor, I got cloudAnchor.cloudAnchorState == CloudAnchorState.TaskInProgress, which is why my cloudAnchor == null check was returning true. I needed to loop until the state was Success (which took about 5 seconds each time). After the wait, the anchors are hosted without a hitch.
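For reference, the wait can be done in a coroutine along these lines (a sketch; manager and anchor are assumed to be set up as in the question):

using System.Collections;
using Google.XR.ARCoreExtensions;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class CloudAnchorHoster : MonoBehaviour
{
    public IEnumerator HostAndWait(ARAnchorManager manager, ARAnchor anchor)
    {
        ARCloudAnchor cloudAnchor = manager.HostCloudAnchor(anchor);
        if (cloudAnchor == null)
        {
            Debug.LogError("Hosting could not be started.");
            yield break;
        }

        // Poll until the asynchronous hosting task finishes (took about 5 seconds for me).
        while (cloudAnchor.cloudAnchorState == CloudAnchorState.TaskInProgress)
        {
            yield return null;
        }

        if (cloudAnchor.cloudAnchorState == CloudAnchorState.Success)
        {
            Debug.Log("Hosted, cloud anchor ID: " + cloudAnchor.cloudAnchorId);
        }
        else
        {
            Debug.LogError("Hosting failed: " + cloudAnchor.cloudAnchorState);
        }
    }
}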

What is the proper way to change the properties/generated code of a custom HID in STM32CubeIDE?

I'm trying to create a custom HID device with an STM32F103C8. The IDE I chose is STM32CubeIDE, and the tutorial I was following is on ST's official YouTube channel.
ST offers a great tool, the Device Configuration Tool, in which I can configure the microcontroller, and a lot of code is then generated based on my configuration. The generated code has "user code" sections where the user writes his own logic; if he needs to reconfigure the microcontroller, the Device Configuration Tool will not remove those parts of the code.
Problem:
To configure a custom USB HID I need to change code generated by the Device Configuration Tool in places where there is no user code section, and those changes will be removed if I run the tool again.
The only fields I can set in the Device Configuration Tool are these:
But that is not enough. I also need to change the CUSTOM_HID_EPIN_SIZE and CUSTOM_HID_EPOUT_SIZE defines, which set how many bytes the device and host send to each other at once. And if I change the size of the data packet, I will also need to change the default generated callback function that receives and handles that data. For example, the tool generates code like this:
{
  USBD_CUSTOM_HID_HandleTypeDef *hhid = (USBD_CUSTOM_HID_HandleTypeDef *)pdev->pClassData;

  if (hhid->IsReportAvailable == 1U)
  {
    ((USBD_CUSTOM_HID_ItfTypeDef *)pdev->pUserData)->OutEvent(hhid->Report_buf[0],
                                                              hhid->Report_buf[1]);
    hhid->IsReportAvailable = 0U;
  }

  return USBD_OK;
}
But I need a pointer to Report_buf, not a copy of its first two elements; the default generated code passes only a copy of the first two bytes, and I can't change this in the Device Configuration Tool.
My current solution:
I actually solved this issue, and it works, but I don't think I solved it the right way. I changed the template files located at "STM32CubeIDE_1.3.0\STM32CubeIDE\plugins\com.st.stm32cube.common.mx_5.6.0.202002181639\db\templates"
and also changed the files at "STM32CubeIDE_1.3.0\en.stm32cubef1.zip_expanded\STM32Cube_FW_F1_V1.8.0\Middlewares\ST\STM32_USB_Device_Library\Class\HID".
I don't think this is the right way to do it. Does anyone know the right way?
I also found the same question on the ST forum here, but it was not resolved.
What you want to achieve is exactly what the ST trainer explains in this video:
https://www.youtube.com/watch?v=3JGRt3BFYrM
The trainer explains step by step how to change the code to pass a pointer to the buffer:
if (hhid->IsReportAvailable == 1U)
{
  ((USBD_CUSTOM_HID_ItfTypeDef *)pdev->pUserData)->OutEvent(hhid->Report_buf);
  hhid->IsReportAvailable = 0U;
}

Kinect V2 - Loading XEF files recorded in Kinect Studio, accessing the Color and Depth frames

I need to get the Color and Depth frames from an XEF file recorded using Kinect Studio.
My code for accessing the Color and Depth frames when using the Kinect directly looks like this:
_sensor = KinectSensor.GetDefault();

if (_sensor != null)
{
    _sensor.Open();
    _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color | FrameSourceTypes.Depth |
                                                 FrameSourceTypes.Infrared | FrameSourceTypes.Body);
    _reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
    _coordinateMapper = _sensor.CoordinateMapper;
}
In private void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e) I do my magic, which works.
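For context, the handler body follows the standard multi-source pattern, roughly like this (a simplified sketch, not the exact code from my project):

private void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    MultiSourceFrame multiFrame = e.FrameReference.AcquireFrame();
    if (multiFrame == null) return;

    using (ColorFrame colorFrame = multiFrame.ColorFrameReference.AcquireFrame())
    using (DepthFrame depthFrame = multiFrame.DepthFrameReference.AcquireFrame())
    {
        if (colorFrame != null && depthFrame != null)
        {
            // ... copy the pixel data, map coordinates, etc.
        }
    }
}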
Now how do I go about that using a pre-recorded XEF file?
I got that I can load an XEF file like this:
var kStudioClient = KStudio.CreateClient();
var eventFile = kStudioClient.OpenEventFile(@"D:\Kinect Studio Recordings\20170922_083134_00.xef");
But how can I get a MultiSourceFrame from that?
Any help is greatly appreciated! Thanks!
You are on the right track with the KStudioClient API. If you haven't implemented it yourself already, there is also a KStudioPlayback class you should use to play back XEF clips asynchronously. I won't walk through full playback code at this stage; the API is very easy to understand. Correct usage of this class will raise the MultiSourceFrameArrived events automatically, so you do not need to change the way you handle them.
Here is everything you need to know to get up to speed with the KStudioPlayback class: the KStudioPlayback class API. If you need code samples, post a comment and I will get back to you.
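That said, the basic shape of the playback loop is short enough to sketch here (from memory; check the KStudioPlayback documentation for the exact member names, and keep the KinectSensor open with your MultiSourceFrameReader attached so the events fire):

using Microsoft.Kinect.Tools;

// ...

using (KStudioClient client = KStudio.CreateClient())
{
    client.ConnectToService();

    // The sensor pipeline must already be running (as in your existing setup code),
    // so the played-back frames raise MultiSourceFrameArrived as usual.
    using (KStudioPlayback playback =
        client.CreatePlayback(@"D:\Kinect Studio Recordings\20170922_083134_00.xef"))
    {
        playback.Start();

        while (playback.State == KStudioPlaybackState.Playing)
        {
            System.Threading.Thread.Sleep(150);
        }
    }

    client.DisconnectFromService();
}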

How to persist markers using ADF in TangoARPoseController

I am trying to persist markers in an augmented reality game. Here is the gist of what I am doing:
I have my users record and save an area to an ADF. Then they drop markers into the scene and save out their position data in Unity world coordinates to a text file. I then restart the app, load and localize to the ADF, and load the markers.
In order to get this working, I've modified the ARPoseController.cs file in the Unity demo package to use the Area Description as its base frame. In the _UpdateTransformation method I've swapped out the frame pairs
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
for
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
I've also added some code confirming that I'm successfully localizing to the ADF, but I'm noticing that my markers' positions in Unity world space are not placed properly relative to the real environment.
I can confirm that my markers save and load properly relative to the START_OF_SERVICE origin, so I assume they are serializing and deserializing correctly. What could be causing this? Am I wrong in assuming this should just work by switching the base frame of the pair to AREA_DESCRIPTION instead of START_OF_SERVICE?
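For reference, the marker save/load itself is just a straightforward serialization of the world-space poses, roughly like this sketch (the type and file names are illustrative, not the actual project code):

using System.IO;
using UnityEngine;

[System.Serializable]
public class MarkerData
{
    public Vector3 position;
    public Quaternion rotation;
}

[System.Serializable]
public class MarkerList
{
    public MarkerData[] markers;
}

public static class MarkerStorage
{
    static string FilePath
    {
        get { return Path.Combine(Application.persistentDataPath, "markers.json"); }
    }

    public static void Save(MarkerList list)
    {
        File.WriteAllText(FilePath, JsonUtility.ToJson(list));
    }

    public static MarkerList Load()
    {
        if (!File.Exists(FilePath))
        {
            return null;
        }
        return JsonUtility.FromJson<MarkerList>(File.ReadAllText(FilePath));
    }
}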
I had a similar problem getting AR and the ADF integrated; I had to modify TangoPointCloud to check whether you're using an Area Description in OnTangoDepthAvailable() and adjust the baseFrame as required.
i.e.:
if (m_tangoDeltaPoseController.m_useAreaDescriptionPose)
{
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
}
else
{
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
}
That way, the geometry of the point cloud adjusts itself based on the ADF offset instead of from device start.
After that change, when I use the AR sample code to drop markers, it registers the surface properly, so I'm placing the markers in the correct spots and with the correct orientation. I'm still encountering some flakiness with the markers not adjusting when relocalized, though; I have to look into the AreaLearningInGameController for loop-closure events.
Hope that helps!

SWIFT - Is it possible to save audio from AVAudioEngine, or from AudioPlayerNode? If yes, how?

I've been looking through the Swift documentation for a way to save the audio output of an AVAudioEngine, but I couldn't find any useful tips.
Any suggestion?
Solution
I found a way around this thanks to matt's answer.
Here is some sample code for how to save audio after passing it through an AVAudioEngine (I think that technically it's before):
newAudio = AVAudioFile(forWriting: newAudio.url, settings: nil, error: NSErrorPointer())
// Your new file, on which you want to save the changed audio, prepared to be buffered with new data...

var audioPlayerNode = AVAudioPlayerNode() // or your time-pitch unit, if the pitch was changed

// Now install a tap on the output bus to "record" the transformed audio into our newAudio file.
audioPlayerNode.installTapOnBus(0, bufferSize: AVAudioFrameCount(audioPlayer.duration), format: opffb) {
    (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) in

    if self.newAudio.length < self.audioFile.length {
        // Lets us know when to stop saving the file; otherwise it would keep saving infinitely.
        self.newAudio.writeFromBuffer(buffer, error: NSErrorPointer()) // write the buffer result into our file
    } else {
        audioPlayerNode.removeTapOnBus(0) // if we don't remove the tap, it will keep tapping infinitely
        println("Finished writing the transformed audio")
    }
}
Hope this helps !
One issue to solve:
Sometimes your output is shorter than the input: if you double the playback rate, your audio will be half as long. This is the issue I'm facing for now, since my condition for stopping the save is the if check in the tap closure:
if self.newAudio.length < self.audioFile.length // audioFile being the original (long) audio and newAudio being the new, changed (shorter) audio
Any help here?
Yes, it's quite easy. You simply put a tap on a node and save the buffer into a file.
Unfortunately this means you have to play through the node. I was hoping that AVAudioEngine would let me process one sound file into another directly, but apparently that's impossible - you have to play and process in real time.
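In current Swift syntax, the tap-and-save idea looks roughly like this (a minimal sketch; it assumes the engine's mainMixerNode carries the processed signal and that you remove the tap yourself when playback ends):

import AVFoundation

// Assumes `engine` is a running AVAudioEngine whose mainMixerNode carries
// the processed signal you want to capture.
func startSaving(from engine: AVAudioEngine, to outputURL: URL) throws -> AVAudioFile {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let outputFile = try AVAudioFile(forWriting: outputURL, settings: format.settings)

    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        do {
            try outputFile.write(from: buffer) // append each rendered buffer to the file
        } catch {
            print("Write failed: \(error)")
        }
    }
    return outputFile
}

// When playback is finished:
// engine.mainMixerNode.removeTap(onBus: 0)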
Offline rendering worked for me using the GenericOutput AudioUnit. Please check this link; I have mixed two or three audio files offline and combined them into a single file. It's not the same scenario, but it may give you some ideas: core audio offline rendering GenericOutput.