I have an unclassified LAS file and I want to classify it.
I can already classify LAS files as ground vs. non-ground, but I want a finer classification with further classes such as buildings, power lines, vegetation, vehicles, and water. My use case requires classifying the given point cloud into these classes.
Are there any open-source tools for Linux for this kind of classification? Any other ideas or help would be highly appreciated.
I also tried CANUPO, but its command-line version is quite old and there is no support for it.
I think a good place to find open-source software for classification is
http://www.opentopography.org/community/contribute
You have to register, and then you can also contribute to the source code.
I know MATLAB has the function trainAutoencoder(input, settings) to create and train an autoencoder. The resulting object can run the two functions encode and decode.
But this only applies to normal autoencoders. What if you want a denoising autoencoder? I searched and found some sample code where they used the network function to convert the autoencoder into a normal network and then called train(network, noisyInput, smoothOutput), like a denoising autoencoder.
But there are several missing parts:
How do I use this new network object to "encode" new data points? It doesn't support encode().
How do I get the "latent" variables, i.e. the features, out of this network?
I would appreciate it if anyone could help me resolve this issue.
Thanks,
-Moein
At present (R2019a), MATLAB does not let users add layers manually to an autoencoder. If you want to build your own, you will have to start from scratch using the layers MATLAB provides.
In order to use trainNetwork(...) to train your model, you will have to find a way to get your data into an imageDatastore object. The difficulty with autoencoder data is that there are NO labels, which imageDatastore requires, so you will have to find a smart way around it; essentially you are dealing with a so-called OCC (one-class classification) problem.
https://www.mathworks.com/help/matlab/ref/matlab.io.datastore.imagedatastore.html
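For what it's worth, here is a minimal sketch of that from-scratch route which sidesteps imageDatastore entirely by keeping the images in plain 4-D arrays; the layer sizes, the noise level, and the layer name 'latent' are placeholders I made up, not anything MATLAB prescribes:

% Minimal denoising autoencoder built from standard layers.
% XClean: 28x28x1xN clean images in [0,1]; XNoisy: corrupted copies.
XClean = rand(28, 28, 1, 1000);                          % stand-in for your real data
XNoisy = min(max(XClean + 0.1*randn(size(XClean)), 0), 1);
layers = [
    imageInputLayer([28 28 1], 'Normalization', 'none')
    convolution2dLayer(3, 16, 'Padding', 'same')         % encoder
    reluLayer('Name', 'latent')                          % treat this as the latent code
    convolution2dLayer(3, 1, 'Padding', 'same')          % decoder
    regressionLayer];
opts = trainingOptions('adam', 'MaxEpochs', 20, 'Verbose', false);
% Train noisy -> clean, i.e. a denoising autoencoder.
net = trainNetwork(XNoisy, XClean, layers, opts);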
Use activations(...) to dump outputs from intermediate (hidden) layers
https://www.mathworks.com/help/deeplearning/ref/activations.html?searchHighlight=activations&s_tid=doc_srchtitle
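Concretely, once such a network is trained, the "encode new data / get the latent variables" part could look like this (again assuming a bottleneck layer that you named 'latent' yourself):

% Latent features for new data = activations of the bottleneck layer.
XNew     = rand(28, 28, 1, 10);                       % new points to "encode"
features = activations(net, XNew, 'latent');          % h-by-w-by-c-by-10 array
featVec  = reshape(features, [], size(XNew, 4))';     % one row per observation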
I swung between using MATLAB and Python (Keras) for deep learning for a couple of weeks; eventually I chose the latter, even though I am a long-time, loyal MATLAB user and a rookie in Python. My two cents are that there are too many restrictions in the former regarding deep learning.
Good luck. :-)
If by 'simulation' you mean prediction/inference, simply use activations(...) to dump outputs from any intermediate (hidden) layer, as I mentioned earlier, so that you can check them.
Another way is to construct an identical network with only the encoding part, copy your trained parameters into it, and feed it your simulated signals.
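A rough sketch of that second route, assuming your release has assembleNetwork and that (as in the toy example above) the first three layers of the trained net form the encoder:

% Take the trained encoder layers (they already carry the learned weights),
% cap them with an output layer, and assemble without retraining.
encLayers = [net.Layers(1:3); regressionLayer];
encNet    = assembleNetwork(encLayers);
latent    = predict(encNet, XNew);   % same features as activations(net, XNew, 'latent')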
I have some questions about making tiff/box files for tesseract 4.
In the TrainingTesseract 4.00 documentation it is written:
Making Box Files As with base Tesseract, there is a choice between
rendering synthetic training data from fonts, or labeling some
pre-existing images (like ancient manuscripts for example).
But it did not explain how to train with pre-existing images.
I want to train for the Persian language in Tesseract 4 (LSTM). I have some images of ancient manuscripts and want to train with images and texts instead of a font, so I can't use the text2image command. I know that the old-format box files will not work for LSTM training.
How can I make tif/box pairs for Tesseract 4 LSTM, how do I label them, and
how do the tesseract commands change?
Should I use other tools for generating the box files (given that Persian
is written right to left)?
Should I use fine-tuning or train from scratch?
I was struggling just like you, until I found this github repository:
https://github.com/OCR-D/ocrd-train
It will make your life super easy. All you need to do is put your images in TIF format, and each ground-truth text file should have the same name as its image, with the extension .gt.txt. It will take care of all the rest for you. (You might need to update the Makefile according to your local machine.)
Whether to train from scratch or fine-tune depends on your language, data, and the problem you are trying to solve. For me, fine-tuning is what I need because I am happy with the current performance but need to build on it.
All the useful details you might need can be found in this answer
1) Use the command below to make an lstmbox:
tesseract test.tif test-lstmbox -l eng --psm 6 lstmbox
It will make an lstmbox for you, but you have to correct the characters in the box file.
2) Training from scratch requires a lot of data, so I suggest fine-tuning as the better option.
We are quite new to caffe, but what we have seen so far, looks really promising.
After reading a few papers (1, 2), we wanted to reproduce the results of 1, specifically for the segmentation challenge 4.
We downloaded the modified caffe from 3 and were able to execute it, only to see that the trained network didn't work with the dataset from 4.
At first we thought that the network needed to be trained for the specific problem, which led to the question of how to do 'image-to-image' (aka end-to-end) learning (4, training data).
This led us to 'holistically nested edge detection' (HED, 2), where image-to-image learning seems to be used.
With HED we were able to retrain the network on our own, but it doesn't work (it leads to all-0 or all-0.5 images, i.e. black images :-( ) if we try to train the network for the dataset of 4. For initialization we wrote a script to calculate the mean map which we use for the dataset of 4.
Our questions are:
How can we reproduce the results mentioned in 1 by running
image-to-image training?
or
How do you train networks where image-to-image learning is involved?
Since we only have 30 image-to-image pairs, should we implement
deformation (as mentioned in 1/3) via MATLAB/Python, or is there
functionality within caffe already?
Are we missing something simple from 1 or 2?
Kind regards,
Klaus and Bernhard
PS: We asked the same question at the caffe-user group and intend to post solutions at both locations.
After some time, and after trying several different things out, I stumbled upon:
https://github.com/naibaf7
Using that caffe fork, together with caffe_neural_models and caffe_neural_tool, training image (raw) to image (labels) can be done quite simply.
Just check out 'caffe_neural_models/net*' for different configurations.
I have seen different articles about OCR form recognition (data extraction), and they said they used neural networks to do the form recognition. So what is the relation between an artificial neural network (ANN) and form recognition? If I want to extract fields from a business card, is it required to use an ANN, or is it optional? In other words, when do I need to use an ANN and when don't I?
It's a little different. An ANN is just one "expert" within OCR, but OCR engines contain many experts. When you study ANNs you will build a simple OCR engine using just an ANN, but this does not compare to modern engines that use it in conjunction with tri-grams, morphology, data types (very important for BCR and forms), dictionaries, connected-components algorithms, etc. So look at it as just one of the tools in the bag of tricks used to extract quality results. A good engine will incorporate an ANN and all the others. In BCR there are additional considerations, and the engine should lean heavily on connected components and dictionaries first, then use an ANN and pattern matching for the actual recognition.
An ANN is one way to perform OCR. There are others. Hence, if you want to extract fields from a business card, using an ANN is only optional.
Good question. I recently spent some time playing with OCRopus, a Google project that does OCR - you can get it for free and play with it yourself. I'm pretty sure that it has an ANN as one of the modules behind it. However, the whole process of Optical Character Recognition can have many steps (lots of different little modules that each do something and pass the results to the next module).
So, here are some of the things I remember as being done by modules in that project:
There was a module that turned the image into black and white - this makes it easier for later modules to deal with.
Getting rid of speckles.
Straightening out the lines of text.
Breaking lines of text into individual words (it's been a few weeks, not sure about this one)
Basically, you can do the above using little bits of code that don't involve a neural net. So it's simpler doing it with these little bits of code.
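Just to illustrate the kind of non-neural preprocessing meant here (OCRopus itself is Python; this is only a rough MATLAB / Image Processing Toolbox sketch of the same ideas, with 'page.png' being a hypothetical scanned page):

% Rough sketch of the classic, non-neural OCR preprocessing steps.
img = imread('page.png');
if size(img, 3) == 3, img = rgb2gray(img); end
bw = ~imbinarize(img);             % 1) black & white, dark text becomes foreground
bw = bwareaopen(bw, 20);           % 2) drop small speckles
% 3) straighten the lines: pick the rotation that maximises the variance of
%    the horizontal projection profile (a simple, common deskew heuristic)
angles = -5:0.5:5;
score  = arrayfun(@(a) var(sum(imrotate(bw, a, 'crop'), 2)), angles);
[~, k] = max(score);
bw = imrotate(bw, angles(k), 'crop');
% 4) break the page into text lines via the projection profile
proj   = sum(bw, 2);
isText = proj > 0.05 * max(proj);
lines  = regionprops(isText, 'BoundingBox');   % one entry per text line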
The neural net, I think, is used just to recognize the individual characters: which character out of a group of possible characters it is.
There's a training command in OCRopus that I had running for over a week on end, and it kept sending line samples to the map, slowly changing the map as it went. I think it was training the ANN part.
I'm new to MATLAB and need to simulate a scenario in which a mobile communicates with two base stations at the same time. I need to compare this with the performance of communicating with just one base station. Can someone please tell me how to do this using MATLAB?
Thank you
You can use the spatial channel model (SCM) implementation:
[1] “Spatial channel model for multiple input multiple output (MIMO) simulations”, 3GPP TR 25.996 V6.1.0, Sep. 2003. [Online]. Available: http://www.3gpp.org/ftp/Specs/html-info/25996.htm
Most of the code is in MATLAB, but some computationally intensive parts are also written in C to speed up the simulations.
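The SCM code gives you realistic channels, but if you first just want to see the qualitative effect, here is a deliberately simplified sketch that does not use the SCM package at all: flat Rayleigh fading, equal average SNR from both base stations (an assumption), and the mobile simply keeps the stronger of the two links:

% Toy comparison: mobile served by one BS vs. picking the better of two BSs.
rng(1);
N        = 1e5;                                 % number of fading realisations
snrAvgdB = 10;                                  % average SNR per link in dB
h1 = (randn(N,1) + 1i*randn(N,1)) / sqrt(2);    % Rayleigh channel to BS 1
h2 = (randn(N,1) + 1i*randn(N,1)) / sqrt(2);    % Rayleigh channel to BS 2
g1 = abs(h1).^2;  g2 = abs(h2).^2;              % channel power gains
snr1    = 10^(snrAvgdB/10) * g1;                % single base station
snrBest = 10^(snrAvgdB/10) * max(g1, g2);       % two base stations, keep the stronger
% Compare outage probability (SNR below a 5 dB threshold) and mean capacity.
thr = 10^(5/10);
fprintf('Outage, 1 BS : %.3f\n', mean(snr1    < thr));
fprintf('Outage, 2 BS : %.3f\n', mean(snrBest < thr));
fprintf('Capacity, 1 BS : %.2f bit/s/Hz\n', mean(log2(1 + snr1)));
fprintf('Capacity, 2 BS : %.2f bit/s/Hz\n', mean(log2(1 + snrBest)));

With two independent links the outage probability is roughly the square of the single-link value, so this gives you a sanity-check baseline before moving to the full SCM-based simulation.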