I'm looking for a database of handwritten mathematical operators, similar to MNIST (the handwritten digit database), to develop a new application using a neural network.
For the moment, I have found only one satisfactory source on Kaggle, with 45x45 JPEG images extracted from CROHME.
Are there any other (better) known sources of mathematical symbols?
You can take a look at the HAMEX dataset, though it was also one of the datasets partially merged to form the CROHME dataset.
You can find more information and other datasets here.
I have a dilemma. If you have only one type of invoice/document, with a specific field that you want to extract and use somewhere else (that field happens to be a handwritten digit, sometimes written with dashes or slashes), would you use some OCR software or build your own CNN for recognizing the digits? What accuracy would you expect from OCR? Would your CNN be more accurate, since you are only interested in a specific style of digit writing, with specific image dimensions, etc.? Which would be better in the given situation?
Keep in mind that you would not use it in any other way or place for handwritten digit recognition, and you already have 100k or more documents that have been transcribed to a computer by a human, which you can use for training and testing.
Thank you.
I would definitely go for a CNN-based solution. Since the structure of your document is consistent:
Extract the desired portion of the document with a standard computer vision approach
Train a CNN on an annotated set of a few thousand documents. You should even be able to fine-tune an existing CNN trained on MNIST, which would require fewer training images (as sketched below).
This approach should give you >99% accuracy without much effort. The accuracy of an OCR solution really depends on which library you use and the preprocessing you implement.
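For example, step 2 might look like the following minimal Keras sketch: pretrain a small CNN on MNIST, then freeze the convolutional layers and fine-tune the classifier head on your own digits. The invoice arrays below are random placeholders; substitute your cropped 28x28 fields.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A small CNN pretrained on MNIST (stand-in for any existing MNIST model).
(x_mnist, y_mnist), _ = keras.datasets.mnist.load_data()
x_mnist = x_mnist[..., None].astype("float32") / 255.0

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_mnist, y_mnist, epochs=1, batch_size=128)

# Fine-tune on your annotated invoice digits. These arrays are random
# placeholders; replace them with your cropped, normalized fields.
x_inv = np.random.rand(2000, 28, 28, 1).astype("float32")
y_inv = np.random.randint(0, 10, size=2000)

for layer in model.layers[:-2]:
    layer.trainable = False  # keep the convolutional features fixed
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_inv, y_inv, validation_split=0.1, epochs=5, batch_size=64)
```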
I'm not looking for a chunk of code as a solution, just the name of the model I'd need to implement; some links would also be nice.
My problem is that I have a dataset I've made of a few hundred 128x128 images (abstract paintings). I'd like to generate more images similar to these using a neural network (preferably one that needs no input, except maybe random values), but it's unclear how I'd go about this.
One solution I've thought about but haven't tried yet is an LSTM network: turn the paintings into 1D arrays of pixel values and feed the arrays to the network (LSTM networks are really good at learning sequences). But if I wanted to work with larger images, this might not be very practical.
Any info is greatly appreciated. Thanks!
GANs (generative adversarial networks) would be appropriate in this case. GANs rely on two separate neural networks and, when properly trained, can be used to generate new images (a process known as hallucinating) that are similar to a collection of known images.
There are many examples of using GANs to generate new images of digits from the canonical MNIST dataset. Naturally, you can replace MNIST with your abstract paintings.
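As a rough illustration, a minimal GAN in Keras could look like the sketch below. Layer sizes and the training schedule are illustrative, not tuned, and the painting array is a random placeholder for your own data scaled to [-1, 1].

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

LATENT = 100  # size of the random input vector

# Generator: random vector -> 128x128x3 image in [-1, 1].
generator = keras.Sequential([
    layers.Dense(16 * 16 * 128, input_dim=LATENT),
    layers.LeakyReLU(0.2),
    layers.Reshape((16, 16, 128)),
    layers.Conv2DTranspose(128, 4, strides=2, padding="same"),  # 32x32
    layers.LeakyReLU(0.2),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same"),   # 64x64
    layers.LeakyReLU(0.2),
    layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                           activation="tanh"),                  # 128x128
])

# Discriminator: image -> probability that it is real.
discriminator = keras.Sequential([
    layers.Conv2D(64, 4, strides=2, padding="same",
                  input_shape=(128, 128, 3)),
    layers.LeakyReLU(0.2),
    layers.Conv2D(128, 4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: generator plus frozen discriminator, trained to fool it.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

X = np.random.rand(200, 128, 128, 3).astype("float32") * 2 - 1  # placeholder

for step in range(1000):
    # Train the discriminator on a half-real, half-fake batch.
    idx = np.random.randint(0, len(X), 32)
    fake = generator.predict(np.random.normal(size=(32, LATENT)), verbose=0)
    discriminator.train_on_batch(X[idx], np.ones((32, 1)))
    discriminator.train_on_batch(fake, np.zeros((32, 1)))
    # Train the generator to make the discriminator output "real".
    noise = np.random.normal(size=(32, LATENT))
    gan.train_on_batch(noise, np.ones((32, 1)))
```

After training, `generator.predict` on fresh random vectors yields new images; in practice a few hundred training images is on the small side for a GAN, so expect to experiment with augmentation and hyperparameters.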
I want to construct a neural network which will be trained on data I create. My question is: what form should these data have? In other words, does Keras allow neural networks that take strings/characters as input? If not, and it only accepts numbers, in what range should the input/output be?
The only condition on your input data, i.e. the features, is that they must be numerical. There isn't really any constraint on range, but it's always a good idea to apply feature scaling, normalization, etc., to make sure the model won't get confused. Neural networks and other machine learning methods cannot accept strings (characters, words) directly, so you first need to convert strings to numbers. There are many ways to do that; the most common techniques include bag of words, tf-idf features, word embeddings, etc.
The following tutorials (using scikit-learn) might be a good starting point:
http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words
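As a small illustration of the bag-of-words route, here is a sketch that uses scikit-learn's CountVectorizer to turn a toy corpus into numbers and feeds them to a tiny Keras network. The corpus and labels are made up for the example.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from tensorflow import keras

texts = ["the cat sat", "the dog barked", "a cat and a dog"]  # toy corpus
labels = np.array([0, 1, 1])

# Bag of words: each string becomes a vector of word counts.
# (Swap in TfidfVectorizer for tf-idf features.)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts).toarray().astype("float32")

# A tiny Keras classifier on top of the numeric features.
model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(X.shape[1],)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=10, verbose=0)
```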
I am going to do a neural network project on handwritten digit recognition, but this area is well studied. I have found some papers online, but most of them are from before 2012. Could anyone tell me what the state-of-the-art technology and current issues in this area are?
So you can refer to the MNIST dataset, because it is made up of a huge number of corner cases. It contains 28x28-pixel images; the Kaggle training CSV has 42,000 rows and 785 columns (one label column plus 784 pixel columns), so you can easily decide how much data you want for training and for predictions without needing train_test_split.
That way you can get a proper idea of the dataset and visualize it (see the sketch below).
Dataset link: MNIST DATASET
Video reference: HANDWRITTEN DIGIT RECOGNITION
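For instance, a minimal sketch for loading and visualizing MNIST via Keras (the Kaggle CSV works the same way once its 784 pixel columns are reshaped to 28x28):

```python
import matplotlib.pyplot as plt
from tensorflow import keras

# Load the canonical MNIST split bundled with Keras.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)

# Show the first few digits with their labels.
for i in range(5):
    plt.subplot(1, 5, i + 1)
    plt.imshow(x_train[i], cmap="gray")
    plt.title(y_train[i])
    plt.axis("off")
plt.show()
```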
The state of the art is almost always determined by performance against a particular dataset. For handwritten digits, the MNIST dataset is the one I have seen referenced most commonly (though I'm not an expert in the area). For any dataset, you should be able to find the state-of-the-art performance on it very near where you download it.
I am doing a project on health care. I am going to train my autoencoders on symptoms and diseases, i.e. my input is in textual form. Will that work? (I am using RStudio.) Please, can anyone help me with this?
You have to convert the text to vectors/numbers. Traditional approaches like bag of words and tf-idf will help, but recent neural word embeddings such as word2vec or an RNN language model are among the best techniques for obtaining a numeric representation of text.
Use any neural word embedding technique to convert the text into numbers/vectors (word level: word2vec; document level: doc2vec).
These vectors come with some dimension, and to compress this representation to an even smaller dimension you can use an autoencoder (see the sketch below).
Feel free to ask if you need any other information.
Try using Python for these tasks, as it has the latest packages.
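As a small illustration of that pipeline in Python, the sketch below stands in tf-idf for the embedding step and compresses the resulting vectors with a Keras autoencoder. The symptom strings and the code dimension are made up for the example.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from tensorflow import keras

docs = ["fever headache fatigue", "cough fever", "rash itching"]  # toy data
X = TfidfVectorizer().fit_transform(docs).toarray().astype("float32")

input_dim, code_dim = X.shape[1], 2  # compress to 2 dimensions (illustrative)
inputs = keras.Input(shape=(input_dim,))
code = keras.layers.Dense(code_dim, activation="relu")(inputs)
recon = keras.layers.Dense(input_dim, activation="sigmoid")(code)

autoencoder = keras.Model(inputs, recon)   # trained to reproduce its input
encoder = keras.Model(inputs, code)        # yields the compressed vectors

autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=50, verbose=0)

compressed = encoder.predict(X)  # low-dimensional vector per document
```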
You can use an autoencoder on textual data, as explained here.
Autoencoders have usually worked better on image data, but recent approaches have adapted them so that they also perform well on text.
Have a look at this; the code is also available on GitHub.