Can QGIS use the forthcoming CA - QL1 Lidar for the Carr, Hirz, and Delta Fire impacted watersheds?

I don't use QGIS yet. It looks like a huge investment of time and energy to learn, so before making that investment I would like to know whether it will meet my needs.
I want to browse the terrain and intensity data from the forthcoming USGS dataset: "CA - QL1 Lidar for the Carr, Hirz, and Delta Fire Impacted Watersheds".
Would you recommend using QGIS for this?
What basic workflow would you recommend?
Any other recommendations?
Thanks


Fast image annotation for a large dataset

I have a dataset related to plant disease. The dataset has 70k images across 38 classes, and I want to annotate the images with bounding boxes. I have tried LabelImg and other annotation tools, but all of them are very time-consuming. Is there any other technique for image annotation that is fast, simple, and automatic, or can I do it with programming?
Please help me solve this problem.
I used hasty.ai on a recent project. Their tool has an AI which observes you while you label. After learning from only a few images, it suggests annotations which you can accept or correct (and thereby retrain the model). Once the AI makes good annotations, you can batch-process all your remaining images. It allowed me to annotate a huge dataset within just a few days.
You can use it for free in the beginning! I switched to Pro later, as it was worth the money and the price is reasonable.

Multiple object tracking using radar data and an extended Kalman filter

Thanks in advance.
I am new to the multiple object tracking field, and I have been working on this for a couple of days. I have developed a first version of a single-object tracker using an extended Kalman filter, estimating position and velocity under a constant-acceleration model. My question is: how can I extend the existing model to multiple-object tracking? The main problem is that I am using radar data, so I have not been able to find references for developing the tracker. One good example, or the steps to achieve this, would help me understand the concept.
The answer to this question depends on a lot of things. For example, how much control over, and knowledge of, the whole system do you have? If you know how many targets you need to track, you can add all of them to the Kalman filter state, and for every measurement perform data association to find out which object that measurement belongs to. An easy association metric is nearest neighbor.
If you don't know how many targets there will be, you will want to implement track management, where each target you are tracking is represented by a track, and you can model birth and death probabilities of targets.
Multi-target tracking is a vast field, and if you want an in-depth mathematical introduction I would recommend the 2015 survey paper "Multitarget Tracking" by Ba-Ngu Vo et al. You should be able to find a preprint PDF online.
If you are looking more for a lightweight tutorial, it should be possible to find a tutorial or example code online to start from. As mentioned in the first paragraph, nearest-neighbor association for a fixed number of objects might be a good first step.
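To make that first step concrete, here is a minimal sketch of greedy nearest-neighbor association for a fixed number of tracks. The function name, the 2-D position state, and the gate value are illustrative assumptions, not part of any standard library:

    import numpy as np

    # Greedy nearest-neighbor data association: each radar measurement is
    # assigned to the closest unclaimed track prediction within a gate.
    def associate_nearest_neighbor(predicted_positions, measurements, gate=10.0):
        # predicted_positions: (n_tracks, 2) predicted x/y positions
        # measurements:        (n_meas, 2) measured x/y positions
        # Returns a list of (track_index, measurement_index) pairs.
        assignments = []
        used_tracks = set()
        for m_idx, m in enumerate(measurements):
            dists = np.linalg.norm(predicted_positions - m, axis=1)
            for t_idx in np.argsort(dists):
                if dists[t_idx] > gate:
                    break  # everything further is outside the gate too
                if t_idx not in used_tracks:
                    assignments.append((int(t_idx), m_idx))
                    used_tracks.add(int(t_idx))
                    break
        return assignments

    # Two tracks, two measurements slightly offset from the predictions:
    preds = np.array([[0.0, 0.0], [50.0, 50.0]])
    meas = np.array([[51.0, 49.5], [0.5, -0.3]])
    print(associate_nearest_neighbor(preds, meas))  # [(1, 0), (0, 1)]

Each matched pair would then feed the normal EKF update for that track; unmatched measurements are where track management (track births) comes in.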

Can one use SITL (Software In The Loop) for a custom designed drone?

http://python.dronekit.io/develop/sitl_setup.html
There's a note there mentioning that SITL exists only for a few pre-built vehicles. Suppose I am designing a completely new model of flying vehicle; can I still simulate it?
Thanks a lot :) My drone design just uses a different set of mathematical expressions for its roll, pitch, and yaw control. So technically I only need changes in the AP_motors.cpp file (where the commands are converted into motor PWM values), right? Or is it more than that? Please guide me.
You actually can, but at this time there is no detailed documentation on how to do so. At the moment your best bet is to use RealFlight 8 to design the vehicle (and also to simulate the physics) and then use SITL to control it. You can check how to connect RealFlight to SITL here: http://ardupilot.org/dev/docs/sitl-with-realflight.html. There is also a discussion about building a custom model for RealFlight 8 with SITL here: https://discuss.ardupilot.org/t/building-realflight8-models/23106/34.

Creating a music visualizer [closed]

So how does someone create a music visualizer? I've looked on Google but I haven't really found anything that talks about the actual programming; mostly just links to plug-ins or visualizing applications.
I use iTunes, but I realize I'd need Xcode to program for it (I'm currently deployed in Iraq and can't download a file that large). So right now I'm just interested in learning "the theory" behind it: processing the frequencies and whatever else is required.
As a visualizer plays a song file, it reads the audio data in very short time slices (usually less than 20 milliseconds). The visualizer does a Fourier transform on each slice, extracting the frequency components, and updates the visual display using the frequency information.
How the visual display is updated in response to the frequency info is up to the programmer. Generally, the graphics methods have to be extremely fast and lightweight in order to update the visuals in time with the music (and not bog down the PC). In the early days (and still), visualizers often modified the color palette in Windows directly to achieve some pretty cool effects.
One characteristic of frequency-component-based visualizers is that they don't often seem to respond to the "beats" of music (like percussion hits, for example) very well. More interesting and responsive visualizers can be written that combine the frequency-domain information with an awareness of "spikes" in the audio that often correspond to percussion hits.
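As a rough illustration of that slice-and-transform loop, here is a minimal numpy sketch. It assumes the song has already been decoded to a mono float array; the 20 ms slice length matches the description above, while the band boundaries and function name are illustrative choices:

    import numpy as np

    # Slice the signal into ~20 ms frames, Fourier-transform each one, and
    # reduce the spectrum to a few numbers a drawing routine can consume.
    def spectrum_frames(samples, sample_rate, frame_ms=20):
        frame_len = int(sample_rate * frame_ms / 1000)
        window = np.hanning(frame_len)
        for start in range(0, len(samples) - frame_len + 1, frame_len):
            frame = samples[start:start + frame_len]
            mags = np.abs(np.fft.rfft(frame * window))
            freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
            bass = mags[freqs < 250].sum()
            mid = mags[(freqs >= 250) & (freqs < 4000)].sum()
            treble = mags[freqs >= 4000].sum()
            yield bass, mid, treble

    # A 110 Hz test tone shows up almost entirely in the "bass" value:
    sr = 44100
    t = np.arange(sr) / sr
    bass, mid, treble = next(spectrum_frames(np.sin(2 * np.pi * 110 * t), sr))

Each (bass, mid, treble) tuple would then drive the drawing code once per frame.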
For creating BeatHarness ( http://www.beatharness.com ) I 'simply' used an FFT to get the audio spectrum, then applied some filtering and edge/onset detectors.
About the Fast Fourier Transform:
http://en.wikipedia.org/wiki/Fast_Fourier_transform
If you're comfortable with math, you might want to read Paul Bourke's page:
http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/dft/
(Paul Bourke is a name you want to google anyway; he has a lot of information on topics you either want to know right now or probably will in the next two years ;))
If you want to read about beat/tempo detection, google Masataka Goto; he has written some interesting papers on it.
Edit:
His homepage: http://staff.aist.go.jp/m.goto/
Interesting read: http://staff.aist.go.jp/m.goto/PROJ/bts.html
Once you have values for, say, bass, midtones, treble, and volume (left and right), it's all up to your imagination what to do with them.
Display a picture and multiply its size by the bass, for example: you'll get a picture that zooms in on the beat, etc.
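To illustrate the edge/onset-detection idea mentioned at the top of this answer, here is a toy energy-spike detector; the frame size, history length, and threshold are illustrative guesses, not BeatHarness's actual values:

    import numpy as np

    # Flag frames whose energy jumps well above the recent average; such
    # spikes often line up with percussion hits.
    def detect_onsets(samples, sample_rate, frame_ms=20, threshold=1.5):
        frame_len = int(sample_rate * frame_ms / 1000)
        n_frames = len(samples) // frame_len
        energies = np.array([
            np.sum(samples[i * frame_len:(i + 1) * frame_len] ** 2)
            for i in range(n_frames)
        ])
        history = 43  # roughly the last 0.86 s of 20 ms frames
        onsets = []
        for i in range(history, n_frames):
            local_avg = energies[i - history:i].mean()
            if energies[i] > threshold * local_avg:
                onsets.append(i * frame_len / sample_rate)  # time in seconds
        return onsets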
Typically, you take a certain amount of the audio data, run a frequency analysis over it, and use that data to modify some graphic that's being displayed over and over. The obvious way to do the frequency analysis is with an FFT, but simple tone detection can work just as well, with lower computational overhead.
So, for example, you write a routine that continually draws a series of shapes arranged in a circle. You then use the dominant frequencies to determine the color of the circles, and use the volume to set the size.
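A bare-bones sketch of that mapping, assuming a numpy array holding one frame of samples (the function name is hypothetical):

    import numpy as np

    # Extract the two numbers the example above maps to graphics:
    # the dominant frequency (colour) and the volume (size).
    def frame_features(frame, sample_rate):
        mags = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
        dominant_hz = float(freqs[np.argmax(mags)])   # pick the colour from this
        volume = float(np.sqrt(np.mean(frame ** 2)))  # RMS level sets the size
        return dominant_hz, volume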
There are a variety of ways of processing the audio data, the simplest of which is just to display it as a rapidly changing waveform, and then apply some graphical effect to that. Similarly, things like the volume can be calculated (and passed as a parameter to some graphics routine) without doing a Fast Fourier Transform to get frequencies: just calculate the average amplitude of the signal.
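That no-FFT route, as a sketch (assuming a numpy frame of samples in the -1..1 range; both helpers are illustrative):

    import numpy as np

    # A usable "volume" knob with no frequency analysis at all.
    def average_amplitude(frame):
        return float(np.mean(np.abs(frame)))

    # Downsample the slice to `width` screen columns, centred vertically,
    # ready to hand to any line-drawing routine.
    def waveform_points(frame, width, height):
        idx = np.linspace(0, len(frame) - 1, width).astype(int)
        return [(x, int(height / 2 * (1 - frame[i]))) for x, i in enumerate(idx)]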
Converting the data to the frequency domain using an FFT or otherwise allows more sophisticated effects, including things like spectrograms. It's deceptively tricky, though, to detect even quite 'obvious' things like the timing of drum beats or the pitch of notes directly from the FFT output.
Reliable beat-detection and tone-detection are hard problems, especially in real time. I'm no expert, but this page runs through some simple example algorithms and their results.
1. Devise an algorithm to draw something interesting on the screen given a set of variables.
2. Devise a way to convert an audio stream into a set of variables, analysing things such as beats per minute, different frequency ranges, tone, etc.
3. Plug the variables into your algorithm and watch it draw.
A simple visualization would be one that changes the colour of the screen every time the music goes over a certain frequency threshold, or just writes the BPM onto the screen, or simply displays an oscilloscope.
Check out this Wikipedia article.
As suggested by @Pragmaticyankee, Processing is indeed an interesting way to visualize your music. You could load your music into Ableton Live and use an EQ to filter out the high, middle, and low frequencies. You could then use a VST envelope-follower plugin to convert audio envelopes into MIDI CC messages, such as Gatefish by Mokafix Audio (works on Windows) or PizMidi's midiAudioToCC plugin (works on Mac). You can then send these MIDI CC messages to a light-emitting hardware tool that supports MIDI, for instance Percussa AudioCubes. You could use a cube for every frequency you want to display, and assign a color to each cube. Have a look at this post:
http://www.percussa.com/2012/08/18/how-do-i-generate-rgb-light-effects-using-audio-signals-featured-question/
We have recently added DirectSound-based audio input routines to the LightningChart data visualization library. The LightningChart SDK is a set of components for Visual Studio .NET (WPF and WinForms); you may find it useful.
With the AudioInput component, you can get real-time waveform data samples from the sound device. You can play the sound from any source, like Spotify, WinAmp, or a CD/DVD player, or use the mic-in connector.
With the SpectrumCalculator component, you can get the power spectrum (FFT conversion), which is handy in many visualizations.
With the LightningChartUltimate component, you can visualize data in many different forms, like waveform graphs, bar graphs, heatmaps, spectrograms, 3D spectrograms, 3D lines, etc., and they can be combined. All rendering takes place through Direct3D acceleration.
Our own examples in the SDK take a scientific approach and don't really have much of an entertainment aspect, but the library can definitely be used for awesome entertainment visualizations too.
We also have a configurable SignalGenerator (sweeps, multi-channel configurations, sine, square, triangle, and noise waveforms, and real-time WAV streaming) and DirectX audio output components for sending wave data out through speakers or line-out.
[I'm CTO of LightningChart components, doing this stuff just because I like it :-) ]

What is the best way to visualize abstract concepts (algorithm/data structure)?

What's the best way to "see what is happening" in an algorithm/data structure? If it's something like a binary search I just imagine a bunch of boxes in a row, and throwing half of them out each time. Is there something more powerful that will let us grok something as abstract as an algorithm/data structure?
Clarification: I'm looking for something a little more general. Example: to visualize time, some people use a clock in their head, but that's slow; a more natural feel would be a globe. Similarly, if you are trying to get a 'feel' for how an algorithm works, you can imagine two objects moving in different directions on that globe.
In general, animations are excellent for visualizing processes that occur over time, such as the execution of algorithms.
For example, check out these animations: Animated Sort Algorithms
Here's an animation that shows how the data structures work in MergeSort.
Now, whether you want to spend time hooking up your algorithm to some kind of animated visualization is a different question!
Algorithm animation was a big research area in the 1990s. Marc H. Brown, who was then at the Digital Systems Research Center, did a large amount of interesting work. The source code to his Zeus animation system is still available, and it would not be hard to get it set up. I played with Zeus years ago and if I remember correctly it ships with dozens of animations.
They had a couple of 'festivals' where non-experts got together to animate new algorithms. You can see one of the reports (with still images) on the 1993 festival. YouTube has one of their videos Visualizing Combinatorial Structures.
Describing something in terms of another thing is called analogy. You just did it with the binary search being a bunch of boxes. Just build on the student's prior knowledge.
For instance, trees can be thought of as linked lists with multiple "next" nodes, or they could be explained to the uninitiated as something similar to a hierarchy.
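To make the linked-list analogy concrete, here is a minimal sketch (the class names are illustrative):

    # A linked-list node has exactly one "next"; a tree node simply has
    # several of them.
    class ListNode:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next                # one successor

    class TreeNode:
        def __init__(self, value, children=None):
            self.value = value
            self.children = children or []  # multiple "next" nodes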
As for concrete methods of explanation, graphs and state machines can be easily visualized with Graphviz. A very basic directed graph can be expressed very simply:
    digraph G {
        A -> B;
        B -> C;
        C -> D;
        D -> B;
    }
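Assuming Graphviz is installed, saving that snippet as example.dot and running dot -Tpng example.dot -o example.png renders it to an image.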