I have a little project and I need to store Wi-Fi config on an STM32, but I don't want to use an EEPROM, so I decided to use the STM32F103C8's flash memory.
https://www.youtube.com/watch?v=Qt3T7ij9qYY In this video the author uses an STM32F4xx series part. At the blue arrow it calls FLASH_Erase_Sector, a function defined in stm32f4xx_hal_flash_ex.h. So I wonder: can I integrate this function into the STM32F103C8?
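For what it's worth: the F1 series has no FLASH_Erase_Sector, because its flash is organised in 1 KB pages rather than sectors, so that function can't be dropped in as-is. The stm32f1xx HAL exposes an equivalent page-based API instead. A minimal sketch, assuming the last page of a 64 KB F103C8 is free to hold the config (check stm32f1xx_hal_flash_ex.h for the exact API in your HAL version):

```c
/* Sketch: erase one 1 KB page and program config data on an STM32F103C8.
 * CONFIG_PAGE_ADDR is an assumption (last page of a 64 KB part). */
#include "stm32f1xx_hal.h"

#define CONFIG_PAGE_ADDR  0x0800FC00U  /* 0x08000000 + 64 KB - 1 KB */

HAL_StatusTypeDef config_save(const uint16_t *data, uint32_t halfwords)
{
    FLASH_EraseInitTypeDef erase = {0};
    uint32_t page_error = 0;
    HAL_StatusTypeDef status;

    HAL_FLASH_Unlock();

    erase.TypeErase   = FLASH_TYPEERASE_PAGES;   /* pages, not sectors */
    erase.PageAddress = CONFIG_PAGE_ADDR;
    erase.NbPages     = 1;
    status = HAL_FLASHEx_Erase(&erase, &page_error);

    /* the F1 programs flash by 16-bit half-words */
    for (uint32_t i = 0; status == HAL_OK && i < halfwords; i++) {
        status = HAL_FLASH_Program(FLASH_TYPEPROGRAM_HALFWORD,
                                   CONFIG_PAGE_ADDR + 2U * i, data[i]);
    }

    HAL_FLASH_Lock();
    return status;
}
```

Reading the config back is just a pointer dereference, since the flash is memory-mapped.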
I am new to the MATLAB platform and am trying to understand how to display the 3D visualisation with graphs shown at 11:40 and at 18:10 in the video below. I downloaded and ran the example code that he provided, but only the two spectrograms are opening.
If anyone can point me in the right direction, it would be very much appreciated.
Thanks.
https://uk.mathworks.com/videos/radar-system-modeling-and-simulation-for-automotive-advanced-driver-assistance-systems-107121.html?elqsid=1551737420797&potential_use=Student
As per the voice-over at about 11:32, they have run the first simulation model, which is set up to write data to the MATLAB Workspace; then, to generate the virtual reality simulation, they run a second model.
From the video, at about 11:36, the second model appears to be called VR_Simulation. This second model almost certainly requires the Simulink 3D Animation toolbox.
I want to create a hologram, captured with a Kinect and displayed on a HoloLens. But the pipeline is very slow.
I use this tutorial to collect point-cloud data, and this library to export my data as a 3D object in .obj format. The library that exports the .obj doesn't accept points, so I had to draw little triangles. I save the .obj, .png and .mtl files on my local XAMPP server.
Next, I download the files with a Unity script and the WWW class. I also use the Runtime OBJ Importer from Unity's Asset Store to create a 3D object at runtime.
The last part is to deploy the Unity app to a HoloLens (I will do that next).
But before that:
The process is working but is very slow, and I want the hologram to be fluid. A lot of time is wasted on:
taking depth and RGB data from the Kinect
exporting the data to .obj, .png and .mtl files
downloading the files in Unity as frequently as possible
rendering the files
I'm thinking of streaming, but does Unity need a complete .obj file to render? If I compress the .png to .jpg, will I gain some time?
Do you have any pointers to help me?
Currently the way your question is phrased is confusing: it's unclear whether you want to record point clouds that you later load and render in Unity, or whether you want to somehow stream the point cloud with the aligned RGB texture to Unity in close to real time.
Your initial attempts use Processing.
In terms of recording data, I recommend using the SimpleOpenNI library which can record both depth and RGB data to an .oni file (see the RecorderPlay example).
Once you have a recording, you can loop through each frame and for each frame store the vertices to a file.
In terms of saving to .obj you'll need to convert the point cloud to a mesh (triangulate the vertices in 3D).
Another option would be to store the point cloud in a format like .ply.
You can find more info on writing a .ply file from Processing in this answer.
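If it helps, the .ply layout is simple enough to write by hand; here is a minimal ASCII writer sketched in C (the function and argument names are made up for illustration, and the Processing version in the linked answer produces the same layout):

```c
/* Minimal ASCII .ply point-cloud writer (illustrative names). */
#include <stdio.h>

int write_ply(const char *path, const float *xyz, int count)
{
    FILE *f = fopen(path, "w");
    if (!f) return -1;

    /* header: one 'vertex' element with x/y/z float properties */
    fprintf(f, "ply\nformat ascii 1.0\n");
    fprintf(f, "element vertex %d\n", count);
    fprintf(f, "property float x\nproperty float y\nproperty float z\n");
    fprintf(f, "end_header\n");

    /* body: one "x y z" line per point */
    for (int i = 0; i < count; i++)
        fprintf(f, "%f %f %f\n", xyz[3 * i], xyz[3 * i + 1], xyz[3 * i + 2]);

    fclose(f);
    return 0;
}
```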
In terms of streaming the data, this will be complicated:
if you stream all the vertices, that's a lot of data: up to 921,600 floats per frame ((640 × 480 = 307,200 points) × 3 coordinates)
if you stream both the depth (11-bit 640×480) and RGB (8-bit 640×480) images, that will be even more data.
One option might be to send only the vertices that have a valid depth, and to skip points overall (e.g. send every 3rd point); there's a rough sketch of that filtering below. In terms of sending the data, you can try OSC.
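Roughly, in C (the buffer names and the millimetre-to-metre depth conversion are assumptions for illustration, not Kinect SDK specifics):

```c
/* Pack only valid, decimated points into a flat float array for sending.
 * 'out' must have room for W * H floats (worst case). */
#include <stdint.h>

#define W 640
#define H 480

int pack_points(const uint16_t *depth, float *out)
{
    int kept = 0, seen = 0;
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            uint16_t d = depth[y * W + x];
            if (d == 0) continue;           /* no depth reading at this pixel */
            if (seen++ % 3 != 0) continue;  /* keep every 3rd valid point */
            out[kept++] = (float)x;
            out[kept++] = (float)y;
            out[kept++] = d * 0.001f;       /* assume mm -> metres */
        }
    }
    return kept;  /* number of floats written */
}
```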
Once you get the points into Unity, you should be able to render them as a point cloud.
What would be ideal in terms of network performance is a codec (compressor/decompressor) for the depth data. I haven't used one so far, but from a quick search I see there are options like this one (very dated).
You'll need to do a bit of research to see which Kinect v1 depth-streaming libraries are already out there, then test which works best for your scenario.
Ideally, if the library is written in C#, there's a chance you'll be able to use it to decode the received data in Unity.
I am using Dirac LE. I have tried to change the pitch of a sound but got the following error in the console.
Can Dirac LE change the pitch of audio? I know I can only use one channel.
Yes, you can change the pitch.
Please be aware that you need to create and destroy the Dirac object whenever you change a setting.
So:
create the Dirac instance with DiracCreate (or the interleaved version)
set the pitch/tempo and other settings
do the processing with DiracProcess
destroy the Dirac instance with DiracDestroy
If you want to change a setting, you need to create/destroy the Dirac instance in the LE version (calling DiracReset is not enough).
Oh, and you can process multiple channels by having multiple Dirac instances; however, these are not linked/synced, and this can cause small variations between the left and right channels, resulting in an unstable stereo image.
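In rough C terms, that lifecycle looks like the sketch below. The signatures and constants are paraphrased from the DIRAC3 docs from memory, so double-check them against the Dirac.h that ships with your LE package:

```c
/* Sketch of the create -> set -> process -> destroy cycle (verify
 * all names against your Dirac.h; they may differ per SDK version). */
#include "Dirac.h"

static float gInput[44100];   /* one second of mono source audio */
static long  gPos = 0;

/* DiracProcess pulls its input through a callback like this */
static long readCallback(float **chdata, long numFrames, void *userData)
{
    long n = 0;
    while (n < numFrames && gPos < 44100)
        chdata[0][n++] = gInput[gPos++];
    return n;  /* returning 0 signals "no more input" */
}

int main(void)
{
    void *dirac = DiracCreate(kDiracLambdaPreview, kDiracQualityPreview,
                              1 /* LE: one channel only */, 44100.f,
                              readCallback, NULL);

    /* set the pitch before processing; to change it later in LE,
     * destroy and re-create the instance (DiracReset is not enough) */
    DiracSetProperty(kDiracPropertyPitchFactor, 1.5, dirac);

    float buf[4096], *out[1] = { buf };
    while (DiracProcess(out, 4096, dirac) > 0) {
        /* write the returned frames to your output here */
    }

    DiracDestroy(dirac);
    return 0;
}
```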
I plan to get x-y coordinate readings from an optical mouse. Basically, I want the readout to be something like this, but instead of using a Wiimote I'll be using a standard USB mouse.
I have found quite a good example: http://www.synbio.org.uk/component/content/article/46-instrumentation-news/1234-interfacing-an-optical-mouse-sensor-to-your-arduino.html
Sadly, however, it is written in C++ and needs bridging hardware (an Arduino), and it prints out the coordinates rather than putting them into a 'real-time' graphical plot.
I would love it if MATLAB could read the coordinates from the mouse and plot a 'real-time' graph of them.
Thanks :)
If you have a DLL or DLLs for communicating with your mouse, you should be able to persuade MATLAB to use it or them. Look at the documentation for loadlibrary.
(Of course, if you are using Linux or Mac OS X replace 'DLL' with 'shared library'.)
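To make that concrete, here is a sketch of what the C side of such a library might look like. Everything below (library name, header, function) is hypothetical; only loadlibrary and calllib are real MATLAB entry points:

```c
/* mousehook.c -- hypothetical shared library for MATLAB to load.
 * Build e.g.: gcc -shared -fPIC mousehook.c -o mousehook.so
 * A matching mousehook.h would declare get_mouse_xy(). */

static int pos_x = 0, pos_y = 0;

/* A real implementation would read the device here (Raw Input on
 * Windows, /dev/input on Linux); this stub just reports the stored
 * position so the MATLAB side has something to poll. */
void get_mouse_xy(int *x, int *y)
{
    *x = pos_x;
    *y = pos_y;
}
```

On the MATLAB side you would then call loadlibrary('mousehook', 'mousehook.h') once, and poll [x, y] = calllib('mousehook', 'get_mouse_xy', 0, 0) from a timer callback that appends the values to an animated plot.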
Is there a way to create a heat map in Google Earth, so that areas with higher values (of some specified parameter, such as population) appear as hotspots?
This seems possible.
For instance, take a look at these few links (disclaimer: I've tried none of them):
HeatMapAPI.com, and an example of it in use: Density Mapping in Google Maps with HeatMapAPI. I'm not sure how you'd do it, though; it seems to involve .NET and a DLL in some way, so it might not be as nice as it seems.
Heat Maps for Google Maps (a.k.a. the GeoIQ mashup)
Using Google Maps to Produce Heat Maps
You've got a couple of links in those articles too; some might be interesting as well.
My colleague developed an open-source Java program that generates 3D heat-map (KML) files for Google Earth from simply formatted XML data files. It may be of use. The entire project code is up at https://github.com/Noblis/OSAT. You can ignore the bulk of what's there and focus on GUIMain and the supporting files. There are sample files and documentation. I'd call it about a 0.5 version: it works, and we used it in our studies, but there are some rough edges. It was done for transportation accessibility studies, but you can change the parameters you're graphing to anything you want, run it from the command line, whatever.
You can use the vertical axis to either view the same parameter as is used for the color OR use it to map an entirely different variable.
Here are two screenshots so you can see what it does: one of the tool interface, and one of example 3D output.
You can create polygons in a KML file and set their color. You can also make the polygons 3D, with the height perhaps representing temperature.
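For example, a minimal placemark along those lines (the coordinates and the 500 m height are made-up values; the height is what encodes your parameter):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <Style>
      <PolyStyle>
        <color>7f0000ff</color> <!-- aabbggrr: semi-transparent red -->
      </PolyStyle>
    </Style>
    <Polygon>
      <extrude>1</extrude>  <!-- drop walls to the ground -->
      <altitudeMode>relativeToGround</altitudeMode>
      <outerBoundaryIs>
        <LinearRing>
          <!-- lon,lat,alt: the altitude (500 m) represents the value -->
          <coordinates>
            -122.00,37.00,500 -122.00,37.01,500
            -121.99,37.01,500 -121.99,37.00,500
            -122.00,37.00,500
          </coordinates>
        </LinearRing>
      </outerBoundaryIs>
    </Polygon>
  </Placemark>
</kml>
```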
There is also http://www.openheatmap.com, which offers free heatmaps on top of OpenStreetMap from a CSV upload.
Try free heat-map APIs. A really interesting implementation: http://en.tixik.com/tools/heatmaps
HeatmapTool.com can take a CSV file of coordinates and intensity values to generate heat map tiles for Google Maps.