What would be the best way to detect the difference between two images? One image is taken at the beginning of a process and the other at the end; the goal is to detect whether there is any difference between them.
Based on my research, neural networks seem well suited to this type of problem, but I don't have experience using them, and I am not sure whether this problem should be treated as classification or anomaly detection. I would also be thankful for any useful literature, GitHub projects, or papers you could share.
I will take my graduation project next semester. I have decided to continue on to graduate studies because I come from a low-income family, so I want to dive into anything that helps me produce a paper or piece of research that supports my application for a scholarship.
My supervisor suggests that an intrusion detection system using a neural network would be suitable for me, and he will help me, but I need to know the fundamentals of this field.
Resources on this topic are limited: the theses, papers, and research I have found give only an overview of IDSs using neural networks.
Can anyone provide me with some resources and references that introduce the fundamentals and basics of intrusion detection systems using neural networks?
First, some background: neural nets are by design black boxes. It is less important to understand the problem you are solving when designing a neural network than it is when writing a deterministic algorithm to solve it directly. With that in mind, you probably don't need to learn about "intrusion detection systems using neural networks", but would probably benefit more from learning about neural networks and intrusion detection systems separately.
I will leave it to you to find texts on intrusion detection systems, but would recommend reading the following to get started on what neural networks are, and how they work:
Neural Networks - A Systematic Introduction
If you think you have understood the basics of neural networks conceptually, you will want to learn a programming language. Your options diverge somewhat at this point, but I would suggest that if you want to learn neural nets from an academic perspective and want to have more control over the design and guts of the program, you would probably benefit most from learning C++. There is a wealth of knowledge on the topic of learning C++ online. In fact, probably the most popular page on this website is dedicated to that topic:
The Definitive C++ Book Guide and List
Once you understand neural network fundamentals and C++, the world is your oyster! If you're feeling adventurous, have a look at Kenneth Stanley's NEAT algorithm. The source code will teach you a lot about neural net algorithms.
From here, creating a learning machine that understands intrusion attempts is almost trivial from a programming perspective. You really just need the data, which may be really easy or really hard to get, but your supervisor should be able to help you find data sources on which to train the network once you reach this point.
Good luck!
I'm looking for a good library for camera calibration. I'm aware of the Camera Calibration Toolbox for Matlab and of OpenCV. The problem with the toolbox is that it is in Matlab and not very friendly for modification; OpenCV, on the other hand, seems to be less precise (see Suriansky).
So are there any alternatives?
The paper you cite is rubbish: whoever wrote it did not bother to actually read the code.
The Matlab toolbox uses exactly the same calibration algorithms as the OpenCV code: Zhang's for the initial estimation, followed by a round of bundle adjustment. The reason they are very similar is that the author of the original implementation of the Matlab toolbox worked for a while with the Intel team that produced the calibration code in the very first release of OpenCV.
Any differences among the results they produce are most likely due to different configurations of the control parameters.
I don't understand what you mean by "not very friendly for modification". If you have Matlab, and your application can use it (it's slow), J.Y. Bouguet's code is quite easy to read and modify. On the other hand, I always found the OpenCV codebase somewhat annoyingly low-level (but understandably so, given the stress on performance).
One alternative is the camera calibration functionality in the Computer Vision System Toolbox for MATLAB. Specifically, check out the Camera Calibrator and the Stereo Camera Calibrator apps.
As mentioned by Francesco, both Matlab and OpenCV use Zhang's method. As of 2022, OpenCV offers a larger range of distortion parameters, thus offering more precision in the pose estimation results. However, not all parameters need be used at once. More details on these parameters are provided in the documentation:
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga3207604e4b1a1758aa66acb6ed5aa65d
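As a rough illustration of what those extra distortion coefficients do, here is a minimal numpy sketch of a rational radial-plus-tangential model in the same spirit as OpenCV's (the function name, parameter packing, and default values are illustrative assumptions, not OpenCV's exact API):

```python
import numpy as np

def distort(points, k=(0.1, 0.01, 0.0, 0.0, 0.0, 0.0), p=(0.001, 0.001)):
    """Apply a rational radial + tangential distortion model to
    normalized image coordinates (x, y). k = radial, p = tangential."""
    k1, k2, k3, k4, k5, k6 = k
    p1, p2 = p
    x, y = points[:, 0], points[:, 1]
    r2 = x**2 + y**2
    # Rational radial term: extra denominator coefficients (k4..k6)
    # let the model fit stronger distortion than the plain polynomial.
    radial = (1 + k1*r2 + k2*r2**2 + k3*r2**3) / (1 + k4*r2 + k5*r2**2 + k6*r2**3)
    xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x**2)
    yd = y*radial + p1*(r2 + 2*y**2) + 2*p2*x*y
    return np.stack([xd, yd], axis=1)

# A point at the optical center is unaffected; for positive radial
# coefficients, off-center points are pushed outward.
pts = np.array([[0.0, 0.0], [0.5, 0.5]])
print(distort(pts))
```

With only k1, k2, p1, p2 enabled this reduces to the classic four-parameter model; enabling more coefficients adds flexibility but also risks overfitting a poor calibration dataset.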
As an alternative solution to both OpenCV and Matlab, I would definitely recommend CalibPro, a web platform that allows you to upload your data and get your calibration parameters in a matter of minutes without a single line of code. CalibPro is fully compatible with OpenCV camera parameters. The platform is rapidly developing and will provide more than the pinhole camera model in the near future.
[Disclaimer]: I am the founder of CalibPro. I am happy to take any feedback on our platform or help people with their calibration.
Hi, I have been searching through research papers for features that would be good to use in my handwritten-OCR classifying neural network. I am a beginner, so I have just been taking the image of the handwritten character, putting a bounding box around it, and then resizing it into a 15x20 binary image. That means I have an input layer of 300 features. In the papers I have found on Google (most of which are quite old) the methods really vary. My accuracy is not bad with just a binary grid of the image, but I was wondering if anyone had other features I could use to boost my accuracy, or could even just point me in the right direction. I would really appreciate it!
Thanks,
Zach
I haven't read any actual papers on this topic, but my advice would be to get creative. Use anything you could think of that might help the classifier identify numbers.
My first thought would be to try to identify "lines" in the image, maybe via a modified "sliding window" algorithm (a sliding/rotating line?), or to try to fit a "line of best fit" to the image (to help the classifier respond to changes in slant or writing style). Really, though, if you're using a neural network it should be picking up on these sorts of things without your manual help (that's the whole point of them!).
I would focus first on the structure and topology of your net to try to improve performance, and worry about additional features only if you cannot get satisfactory performance some other way. You could also try improving the features you already have: make sure the character is centered in the image, and maybe try an algorithm to deskew italicised characters so they are vertical.
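The centering step suggested above can be sketched in a few lines of numpy (the 20x15 grid matches the question's 15x20 binary image, read as rows x columns; everything else here is an assumption, e.g. that the glyph already fits inside the output grid):

```python
import numpy as np

def center_glyph(img, out_h=20, out_w=15):
    """Crop a binary glyph to its bounding box and paste it into the
    center of a fixed-size output grid (recentering only, no resizing)."""
    ys, xs = np.nonzero(img)
    if len(ys) == 0:                      # blank input: return a blank grid
        return np.zeros((out_h, out_w), dtype=img.dtype)
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    out = np.zeros((out_h, out_w), dtype=img.dtype)
    top = (out_h - crop.shape[0]) // 2
    left = (out_w - crop.shape[1]) // 2
    out[top:top + crop.shape[0], left:left + crop.shape[1]] = crop
    return out
```

This way two copies of the same character drawn in different corners of the page produce identical input vectors, which is exactly the kind of invariance that saves the network from having to learn it.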
In my experience these sorts of things don't often help, but you could get lucky and run into one that improves your net :)
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 8 years ago.
I'm really interested in Artificial Neural Networks, but I'm looking for a place to start.
What resources are out there and what is a good starting project?
First of all, give up any notions that artificial neural networks have anything to do with the brain but for a passing similarity to networks of biological neurons. Learning biology won't help you effectively apply neural networks; learning linear algebra, calculus, and probability theory will. You should at the very least make yourself familiar with the idea of basic differentiation of functions, the chain rule, partial derivatives (the gradient, the Jacobian and the Hessian), and understanding matrix multiplication and diagonalization.
Really what you are doing when you train a network is optimizing a large, multidimensional function (minimizing your error measure with respect to each of the weights in the network), and so an investigation of techniques for nonlinear numerical optimization may prove instructive. This is a widely studied problem with a large base of literature outside of neural networks, and there are plenty of lecture notes in numerical optimization available on the web. To start, most people use simple gradient descent, but this can be much slower and less effective than more nuanced methods like conjugate gradient or quasi-Newton approaches (e.g. L-BFGS).
Once you've got the basic ideas down you can start to experiment with different "squashing" functions in your hidden layer, adding various kinds of regularization, and various tweaks to make learning go faster. See this paper for a comprehensive list of "best practices".
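For reference, the two most common squashing functions, and the derivative identities that make backpropagation cheap, can be sketched as follows (standard formulas; the function names are mine):

```python
import math

def sigmoid(x):
    """Logistic squashing function, output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    """Derivative via the identity s'(x) = s(x) * (1 - s(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_prime(x):
    """Derivative via the identity tanh'(x) = 1 - tanh(x)^2."""
    return 1.0 - math.tanh(x) ** 2

print(sigmoid(0.0), sigmoid_prime(0.0))  # 0.5 0.25
print(math.tanh(0.0), tanh_prime(0.0))   # 0.0 1.0
```

Both identities let backprop reuse the forward-pass activation instead of re-evaluating an exponential, which is one of the small tweaks that makes learning go faster.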
One of the best books on the subject is Chris Bishop's Neural Networks for Pattern Recognition. It's fairly old by this stage but is still an excellent resource, and you can often find used copies online for about $30. The neural network chapter in his newer book, Pattern Recognition and Machine Learning, is also quite comprehensive. For a particularly good implementation-centric tutorial, see this one on CodeProject.com which implements a clever sort of network called a convolutional network, which constrains connectivity in such a way as to make it very good at learning to classify visual patterns.
Support vector machines and other kernel methods have become quite popular because you can apply them without knowing what the hell you're doing and often get acceptable results. Neural networks, on the other hand, are huge optimization problems which require careful tuning, although they're still preferable for lots of problems, particularly large scale problems in domains like computer vision.
I'd highly recommend this excellent series by Anoop Madhusudanan on Code Project.
He takes you from the fundamentals through to understanding how they work in an easy-to-understand way, and shows you how to use his brainnet library to create your own.
Here is an example of neural-net programming:
http://www.codeproject.com/KB/recipes/neural_dot_net.aspx
You can start reading here:
http://web.archive.org/web/20071025010456/http://www.geocities.com/CapeCanaveral/Lab/3765/neural.html
For my part, I took a course on the subject and worked through some literature.
Neural networks are kind of déclassé these days. Support vector machines and kernel methods are better for more classes of problems than backpropagation. Neural networks and genetic algorithms capture the imagination of people who don't know much about modern machine learning, but they are not state of the art.
If you want to learn more about AI and machine learning, I recommend reading Peter Norvig's Artificial Intelligence: A Modern Approach. It's a broad survey of AI and lots of modern techniques. It goes over the history and older techniques too, and will give you a more complete grounding in the basics of AI and machine learning.
Neural networks are pretty easy, though, especially if you use a genetic algorithm to determine the weights rather than proper backpropagation.
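To illustrate the idea of searching for weights without backpropagation, here is a toy sketch that learns logical AND with a single sigmoid neuron by random-mutation hill climbing (a simplified stand-in for a full genetic algorithm: one individual instead of a population; the mutation scale, iteration count, and seed are arbitrary choices):

```python
import math
import random

def predict(w, x):
    """Single sigmoid neuron: w = (w1, w2, bias)."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))

def fitness(w, data):
    """Negative sum of squared errors: higher is better."""
    return -sum((predict(w, x) - t) ** 2 for x, t in data)

# Truth table for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

rng = random.Random(42)
best = [rng.uniform(-1, 1) for _ in range(3)]
for _ in range(5000):
    # Mutate every weight and keep the candidate only if it improves fitness.
    cand = [wi + rng.gauss(0, 0.5) for wi in best]
    if fitness(cand, data) > fitness(best, data):
        best = cand

preds = [round(predict(best, x)) for x, _ in data]
print(preds)  # should converge to [0, 0, 0, 1]
```

A real genetic algorithm would keep a population and add crossover and selection, but the core idea is the same: treat the weight vector as a genome and search, with no gradients involved.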
I second dwf's recommendation of Neural Networks for Pattern Recognition by Chris Bishop, although it's perhaps not a starter text. Norvig or an online tutorial (with code in Matlab!) would probably be a gentler introduction.
A good starter project would be OCR (Optical Character Recognition). You can scan in pages of text and feed each character through the network in order to perform classification. (You would have to train the network first of course!).
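To make "feeding each character through the network" concrete, here is a minimal untrained forward pass for a 20x15 binary glyph (the architecture, sizes, and random weights are illustrative assumptions; a real classifier would be trained first):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Turn raw scores into a probability distribution."""
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

# Untrained toy network: 20x15 binary glyph -> 32 hidden units -> 10 classes.
W1 = rng.normal(0, 0.1, (32, 300))
b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (10, 32))
b2 = np.zeros(10)

def classify(glyph):
    x = glyph.reshape(300)        # flatten the image into a feature vector
    h = np.tanh(W1 @ x + b1)      # hidden "squashing" layer
    return softmax(W2 @ h + b2)   # class probabilities

probs = classify(rng.integers(0, 2, (20, 15)))
print(probs.argmax())             # predicted class (meaningless until trained)
```

Training then consists of adjusting W1, b1, W2, b2 so that the probability of the correct class goes up on your scanned examples.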
Raul Rojas' book is a very good start (it's also free). Also, the 3rd edition of Haykin's book, although a large volume, is very well explained.
I can recommend where not to start. I bought An Introduction to Neural Networks by Kevin Gurney which has good reviews on Amazon and claims to be a "highly accessible introduction to one of the most important topics in cognitive and computer science". Personally, I would not recommend this book as a start. I can comprehend only about 10% of it, but maybe it's just me (English is not my native language). I'm going to look into other options from this thread.
http://www.ai-junkie.com/ann/evolved/nnt1.html is a clear introduction to multi-layer perceptrons, although it does not describe the backpropagation algorithm
You can also have a look at generation5.org, which provides a lot of articles about AI in general and has some great texts about neural networks.
If you don't mind spending money, The Handbook of Brain Theory and Neural Networks is very good. It contains 287 articles covering research in many disciplines. It starts with an introduction and theory and then highlights paths through the articles to best cover your interests.
As for a first project, Kohonen maps are interesting for categorization: find hidden relationships in your music collection, build a smart robot, or solve the Netflix prize.
I think a good starting point would always be Wikipedia. There you'll find some useful links to documentation and to projects which use neural nets, too.
Two books that were used during my studies:
Introductory course: An Introduction to Neural Computing by Igor Aleksander and Helen Morton.
Advanced course: Neurocomputing by Robert Hecht-Nielsen
I found Fausett's Fundamentals of Neural Networks a straightforward and easy-to-get-into introductory textbook.
I found the textbook "Computational Intelligence" to be incredibly helpful.
Programming Collective Intelligence discusses this in the context of Search and Ranking algorithms. Also, in the code available here (in ch.4), the concepts discussed in the book are illustrated in a Python example.
I agree with the other people who said that studying biology is not a good starting point, because there's a lot of irrelevant info in biology. You do not need to understand how a neuron works to recreate its functionality; you only need to simulate its actions. I recommend How To Create A Mind by Ray Kurzweil. It goes into the aspects of biology that are relevant for computational models (creating a simulated neuron by combining several inputs and firing once a threshold is reached) but ignores the irrelevant stuff, like how the neuron actually adds those inputs together. (You will just use + and an inequality to compare to a threshold, for example.)
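The "just use + and an inequality" neuron described above can be sketched in a few lines (a classic threshold unit; the AND example is my own illustration):

```python
def threshold_neuron(inputs, weights, threshold):
    """Fire (1) when the weighted sum of inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With equal weights and a threshold of 2, the neuron computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron((a, b), (1, 1), 2))
```

Lower the threshold to 1 and the same unit computes OR, which is exactly the sense in which the simulation ignores the biological details and keeps only the input-combining behavior.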
I should also point out that the book isn't really about 'creating a mind'; it only focuses on hierarchical pattern recognition / the neocortex. The general theme has been talked about since the 1980s, I believe, so there are plenty of older books that probably contain slightly dated forms of the same information. I have read older documents stating that the vision system, for example, is a multi-layered pattern recognizer. He contends that this applies to the entire neocortex. Also, take his 'predictions' with a grain of salt: his hardware estimates are probably pretty accurate, but I think he underestimates how complicated simple tasks can be (e.g. driving a car). Granted, he has seen a lot of progress (and been part of some of it), but I still think he is over-optimistic. There is a big difference between an AI car being able to drive a mile successfully 90% of the time and the 99.9+% that a human can manage. I don't expect any AI to be truly out-driving me for at least 20 years... (I don't count BMW's track cars that need to be 'trained' on the actual course, as they aren't really playing the same game.)
If you already have a basic idea of what AI is and how it can be modeled, you may be better off skipping to something more technical.
If you want to quickly learn about applications of some neural network concepts in a real simulator, there is a great online book (now a wiki) called 'Computational Cognitive Neuroscience' at http://grey.colorado.edu/CompCogNeuro/index.php/CCNBook/Main
The book is used at schools as a textbook, and takes you through lots of different brain areas, from individual neurons all the way to higher-order executive functioning.
In addition, each section is augmented with homework 'projects' that are already done for you. Just download, follow the steps, and simulate everything the chapter talked about. The software they use, Emergent, is a little finicky but incredibly robust: it's the product of more than 10 years of work, I believe.
I went through it in an undergrad class this past semester, and it was great; it walks you through everything step by step.