Neural networks in Lisp - advice

Can anybody suggest a good tutorial or book for neural networks in Lisp, or a blog, or share some code sample?
I have experience with neural networks in the imperative languages C++, Java, and C#, but I want to try it in Lisp.

The seminal book AI: A Modern Approach includes Lisp source code on its website: link
Specifically, check out the Learning chapter (perceptrons, etc.).
In the same vein there is Paradigms of Artificial Intelligence Programming (PAIP), but it doesn't really touch neural networks, if I remember correctly.

While the question is old and my answer is late, I still think it's valuable.
Recently I was looking for some resources on machine learning in Common Lisp (which is how I found this question). After doing some more research, I found this codebase. It contains many interesting things, such as Boltzmann machines and feed-forward and recurrent backprop neural networks. The author also has other libraries, such as one for evolutionary algorithms. This code is certainly a good way to start.

Yann LeCun, my advisor at NYU, wrote an object-oriented dialect of lisp called Lush while he worked at Bell Labs. It feels like a lispy MATLAB, and is geared towards quick prototyping of numerical experiments and machine learning research. It installs easily if you're using Linux or Mac OS. During the late 90's a good fraction of all checks in the US were being read by the LeNet-5 net that he wrote in Lush.
We use it for most of our research, since it has so much support for convolutional neural networks, linear algebra, and has an easy C/C++ FFI for everything else. It also comes with demo code for implementing neural nets and convolutional networks for image and character classification, which is probably where you'd want to start.
It's in the Ubuntu repositories, but you probably want the latest version from here:
http://lush.sourceforge.net/

Searching on Google, I found these:
book: "Common LISP Modules Artificial Intelligence" (at Amazon)
the same at Google Books
the Fast Artificial Neural Network (FANN) library
And this blog has some posts about ANNs

Related

Are all neural networks today simulated?

Are all modern neural networks simulated or are there physical hardware versions of them in use yet? For example: memristor technology.
Actually, a Google search revealed these:
http://phys.org/news/2016-01-scientists-neural-network-plastic-memristors.html
http://arstechnica.com/science/2015/05/neural-network-chip-built-using-memristors/
So the answer to your question is that physical neural networks using memristors have been built, but I doubt that these examples count as "in use". There is also a good chance that various organizations (e.g. military, commercial) are working on this quietly, and may even have something "in use" behind closed doors.
Historically: this Wikipedia page mentions ADALINE which was a physical implementation of neural networks implemented in the 1960s. It was built using electro-chemical memistors (sic), but the technology did not scale and was abandoned.
The word for what you're looking for is probably "neuromorphic". The Wikipedia article https://en.wikipedia.org/wiki/Neuromorphic_engineering has a pretty good overview of R&D in the field. See also: https://en.wikipedia.org/wiki/Physical_neural_network
IBM TrueNorth is likely the closest to a commercial system using this today.

basic and fundamentals on intrusion detection system using neural network

I will start my graduation project next semester, and I have decided to continue on to a higher degree. Because I come from a low-income background, I want to dive into anything that helps me produce a paper or research that supports my case for a scholarship.
My supervisor suggests that an intrusion detection system using a neural network is suitable for me, and he will help me, but I need to learn the fundamentals of this field.
There are limited resources on this topic; the theses and papers I have found give only an overview of IDSs using neural networks.
Can anyone provide me with some resources and references that introduce intrusion detection systems using neural networks, so I can learn the basics and fundamentals?
First, some background: neural nets are by design black boxes. It is less important to understand the problem you are solving when designing a neural network than it is when writing a deterministic algorithm to solve it directly. With that in mind, you probably don't need to learn about "intrusion detection systems using neural networks", but would probably benefit more from learning about neural networks and intrusion detection systems separately.
I will leave it to you to find texts on intrusion detection systems, but would recommend reading the following to get started on what neural networks are, and how they work:
Neural Networks - A Systematic Introduction
If you think you have understood the basis of neural networks conceptually, you will want to learn a programming language. Your options diverge somewhat at this point, but I would suggest that if you want to learn neural nets from an academic perspective and want to have more control over the design and guts of the program, you would probably benefit most from learning C++. There is a wealth of knowledge on the topic of learning C++ online. In fact, probably the most popular page on this website is dedicated to that topic:
The Definitive C++ Book Guide and List
Once you understand neural network fundamentals and C++, the world is your oyster! If you're feeling adventurous, have a look at Kenneth Stanley's NEAT algorithm. The source code will teach you a lot about neural net algorithms.
From here, creating a learning machine that recognizes intrusion attempts is almost trivial from a programming perspective. You really just need to get the data, which may be really easy or really hard, but your supervisor should be able to help you find data sources on which to train the network once you reach this point.
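For concreteness, here is a rough sketch of that last step (illustrative only, in Python with scikit-learn rather than the C++ suggested above; the feature layout is a made-up placeholder for whatever your data source actually provides):

    # Illustrative sketch: once you have labelled connection records,
    # training a classifier on them is a few lines in most ML toolkits.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Each row: [duration, bytes_sent, bytes_received, failed_logins] (made up)
    X = np.random.rand(1000, 4)           # stand-in for real connection features
    y = np.random.randint(0, 2, 1000)     # 0 = normal traffic, 1 = intrusion

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    net.fit(X_train, y_train)
    print("held-out accuracy:", net.score(X_test, y_test))

The hard part, as noted above, is getting real labelled data; the training code itself stays this small.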
Good luck!

consultation about ANN libraries

Firstly, I am a beginner with artificial neural networks and I need a library for training them, but I am very confused about which library to choose, and since I don't have the experience, I wanted to consult you.
I have read about three libraries:
FANN, Flood, and Neuro Fusion libraries.
So, what do you think is the easiest and least problematic library to use with VC++ 6?
I just started using FANN, and it seems to be very well documented, fast, and full of great examples.
It operates with floats/doubles/integers and implements the Cascade2 training method, which is really great if you are unsure about the architecture of your NN.
It is not as rich as Encog (which I haven't used), but if FANN implements all the functionality you need, I think you should go with it.
Edit: I just realized that Encog is only available for .NET C# (besides Java)

Neural Networks for Pattern Recognition

Hey guys, I am wondering if anybody can help me with a starting point for the design of a neural network system that can recognize visual patterns, e.g. checks and stripes. I have knowledge of the theory, but little practical knowledge, and net searches are giving me information overload. Can anybody recommend a good book or tutorial that is more focused on the practical side?
Thank you!
Are you only trying to recognize patterns such as checkerboards and stripes? Do you have to use a neural network system?
Basically, you want to define a bunch of simple features on the board and use them as input to the learning system. It can often be easier to define a lot of binary features and feed them into a single-layer network (which essentially becomes linear regression).
Look at how neural networks were used for learning to play backgammon (http://www.research.ibm.com/massive/tdl.html), as this will help give you a sense of the types of features that make learning with a neural network work well.
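To make the "hand-crafted features into a single-layer network" idea concrete, here is a rough sketch (the particular features are invented purely for illustration) of a feature extractor for a small grayscale image plus a perceptron-rule trainer:

    import numpy as np

    def features(img):
        """img: 2-D array of pixel intensities. Returns a tiny feature vector.
        The particular features are made up for illustration only."""
        row_var = img.var(axis=1).mean()   # large when columns alternate (vertical stripes)
        col_var = img.var(axis=0).mean()   # large when rows alternate (horizontal stripes)
        # rough measure of sign alternation, larger for checked patterns
        checker = np.abs(np.diff(np.sign(np.diff(img, axis=0)), axis=1)).mean()
        return np.array([row_var, col_var, checker, 1.0])   # last entry is a bias

    def train_perceptron(images, labels, epochs=20, lr=0.1):
        w = np.zeros(4)
        for _ in range(epochs):
            for img, target in zip(images, labels):
                x = features(img)
                pred = 1.0 if w @ x > 0 else 0.0
                w += lr * (target - pred) * x   # perceptron learning rule
        return w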
As suggested above, you probably want to reduce your image to a set of features. A corner detector (perhaps the Harris method) could be used to determine features in the checkerboard pattern. Likewise, an edge detector (perhaps Canny) could be used in the stripes case. As mentioned above, the Hough transform is a good line detection method.
MATLAB's image processing toolbox contains these methods, so you might try those for rapid prototyping. OpenCV is an open-source computer vision library that also provides these tools (and many others).
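If you go the OpenCV route, the stripes case might start out something like this (illustrative only; the filename and thresholds are placeholders you would tune for your own images):

    import numpy as np
    import cv2

    img = cv2.imread("fabric.png", cv2.IMREAD_GRAYSCALE)   # placeholder filename
    edges = cv2.Canny(img, 100, 200)                        # Canny edge detector
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)      # Hough line transform
    print(0 if lines is None else len(lines), "strong lines detected")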

What are some good resources for learning about Artificial Neural Networks? [closed]

I'm really interested in Artificial Neural Networks, but I'm looking for a place to start.
What resources are out there and what is a good starting project?
First of all, give up any notions that artificial neural networks have anything to do with the brain beyond a passing similarity to networks of biological neurons. Learning biology won't help you effectively apply neural networks; learning linear algebra, calculus, and probability theory will. You should at the very least make yourself familiar with basic differentiation of functions, the chain rule, partial derivatives (the gradient, the Jacobian and the Hessian), and with matrix multiplication and diagonalization.
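As a tiny worked example of why the chain rule matters here (a standard textbook identity, just to make the point concrete): for a single output neuron with pre-activation a = w^T x, output y = sigma(a), and squared error E = (1/2)(y - t)^2, the gradient with respect to each weight is

    \frac{\partial E}{\partial w_i}
      = \frac{\partial E}{\partial y}\,\frac{\partial y}{\partial a}\,\frac{\partial a}{\partial w_i}
      = (y - t)\,\sigma'(a)\,x_i

and backpropagation is essentially this bookkeeping repeated layer by layer.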
Really, what you are doing when you train a network is optimizing a large, multidimensional function (minimizing your error measure with respect to each of the weights in the network), so an investigation of techniques for nonlinear numerical optimization may prove instructive. This is a widely studied problem with a large base of literature outside of neural networks, and there are plenty of lecture notes on numerical optimization available on the web. To start, most people use simple gradient descent, but this can be much slower and less effective than more nuanced methods.
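To make the "optimize a big function of the weights" picture concrete, here is a minimal, self-contained sketch (a toy of my own, not from any of the references below) of batch gradient descent on a one-hidden-layer network learning XOR:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)     # hidden layer
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)     # output layer
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

    lr = 1.0
    for step in range(5000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        # backward pass: the chain rule gives the gradient of the squared error
        dy = (y - t) * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        # gradient descent step on every weight
        W2 -= lr * h.T @ dy;  b2 -= lr * dy.sum(axis=0)
        W1 -= lr * X.T @ dh;  b1 -= lr * dh.sum(axis=0)

    print(np.round(y.ravel(), 2))   # should approach [0, 1, 1, 0]

Everything fancier (momentum, second-order methods, better initialization) is an attempt to make this loop converge faster and more reliably.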
Once you've got the basic ideas down you can start to experiment with different "squashing" functions in your hidden layer, adding various kinds of regularization, and various tweaks to make learning go faster. See this paper for a comprehensive list of "best practices".
One of the best books on the subject is Chris Bishop's Neural Networks for Pattern Recognition. It's fairly old by this stage but is still an excellent resource, and you can often find used copies online for about $30. The neural network chapter in his newer book, Pattern Recognition and Machine Learning, is also quite comprehensive. For a particularly good implementation-centric tutorial, see this one on CodeProject.com which implements a clever sort of network called a convolutional network, which constrains connectivity in such a way as to make it very good at learning to classify visual patterns.
Support vector machines and other kernel methods have become quite popular because you can apply them without knowing what the hell you're doing and often get acceptable results. Neural networks, on the other hand, are huge optimization problems which require careful tuning, although they're still preferable for lots of problems, particularly large scale problems in domains like computer vision.
I'd highly recommend this excellent series by Anoop Madhusudanan on Code Project.
He takes you through the fundamentals of how they work in an easy-to-understand way and shows you how to use his BrainNet library to create your own.
Here is an example of neural net programming:
http://www.codeproject.com/KB/recipes/neural_dot_net.aspx
you can start reading here:
http://web.archive.org/web/20071025010456/http://www.geocities.com/CapeCanaveral/Lab/3765/neural.html
For my part, I have taken a course on the subject and worked through some literature.
Neural networks are kind of declasse these days. Support vector machines and kernel methods are better for more classes of problems than backpropagation. Neural networks and genetic algorithms capture the imagination of people who don't know much about modern machine learning, but they are not state of the art.
If you want to learn more about AI and machine learning, I recommend reading Peter Norvig's Artificial Intelligence: A Modern Approach. It's a broad survey of AI and lots of modern technology. It goes over the history and older techniques too, and will give you a more complete grounding in the basics of AI and machine learning.
Neural networks are pretty easy, though, especially if you use a genetic algorithm to determine the weights rather than proper backpropagation.
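For what it's worth, here is a rough sketch of the "evolve the weights instead of backpropagating" idea (a toy version of my own on XOR, with mutation only, so it is really more a simple evolutionary strategy than a full genetic algorithm):

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([0, 1, 1, 0], dtype=float)

    def forward(w, x):
        # unpack a flat vector of 17 weights into a fixed 2-4-1 topology
        W1 = w[:8].reshape(2, 4); b1 = w[8:12]
        W2 = w[12:16];            b2 = w[16]
        h = np.tanh(x @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def fitness(w):
        return -np.mean((forward(w, X) - t) ** 2)   # higher is better

    pop = rng.normal(size=(50, 17))
    for gen in range(200):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]                # keep the best 10
        children = parents[rng.integers(0, 10, size=40)]       # clone...
        children = children + rng.normal(scale=0.3, size=children.shape)  # ...and mutate
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(w) for w in pop])]
    print(np.round(forward(best, X), 2))   # should approach [0, 1, 1, 0]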
I second dwf's recommendation of Neural Networks for Pattern Recognition by Chris Bishop, although it's perhaps not a starter text. Norvig or an online tutorial (with code in Matlab!) would probably be a gentler introduction.
A good starter project would be OCR (Optical Character Recognition). You can scan in pages of text and feed each character through the network in order to perform classification. (You would have to train the network first of course!).
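If you want to skip the scanning step while you get started, scikit-learn ships a tiny 8x8 digits dataset that works as a stand-in (this sketch is my own simplification of the project, not a full OCR pipeline):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()                       # small 8x8 grayscale digit images
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    net.fit(X_train, y_train)                    # train the network first, of course!
    print("test accuracy:", net.score(X_test, y_test))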
Raul Rojas' book is a very good start (it's also free). Also, Haykin's book (3rd edition), although of large volume, is very well explained.
I can recommend where not to start. I bought An Introduction to Neural Networks by Kevin Gurney which has good reviews on Amazon and claims to be a "highly accessible introduction to one of the most important topics in cognitive and computer science". Personally, I would not recommend this book as a start. I can comprehend only about 10% of it, but maybe it's just me (English is not my native language). I'm going to look into other options from this thread.
http://www.ai-junkie.com/ann/evolved/nnt1.html is a clear introduction to multi-layer perceptrons, although it does not describe the backpropagation algorithm
You can also have a look at generation5.org, which provides a lot of articles about AI in general and has some great texts about neural networks
If you don't mind spending money, The Handbook of Brain Theory and Neural Networks is very good. It contains 287 articles covering research in many disciplines. It starts with an introduction and theory and then highlights paths through the articles to best cover your interests.
As for a first project, Kohonen maps are interesting for categorization: find hidden relationships in your music collection, build a smart robot, or solve the Netflix prize.
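To give a rough idea of what a Kohonen (self-organizing) map involves, here is a simplified sketch of mine (real libraries do more); the core loop just pulls the best-matching unit and its grid neighbours toward each input vector:

    import numpy as np

    def train_som(data, grid=(10, 10), epochs=50, lr0=0.5, radius0=3.0):
        rng = np.random.default_rng(0)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)                    # decay learning rate
            radius = max(radius0 * (1 - epoch / epochs), 0.5)  # shrink neighbourhood
            for x in rng.permutation(data):
                # find the best-matching unit (BMU) for this input
                dists = np.linalg.norm(weights - x, axis=2)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)
                # Gaussian neighbourhood around the BMU on the map grid
                grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
                influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
                weights += lr * influence[..., None] * (x - weights)
        return weights

    # e.g. categorize tracks described by (tempo, energy, loudness, ...) features:
    tracks = np.random.rand(200, 5)          # stand-in for real audio features
    som = train_som(tracks)

After training, nearby cells on the grid hold similar items, which is what makes the map useful for exploring hidden relationships in a collection.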
I think a good starting point would always be Wikipedia. There you'll find some useful links to documentation and to projects which use neural nets, too.
Two books that were used during my studies:
Introductory course: An Introduction to Neural Computing by Igor Aleksander and Helen Morton.
Advanced course: Neurocomputing by Robert Hecht-Nielsen
I found Fausett's Fundamentals of Neural Networks a straightforward and easy-to-get-into introductory textbook.
I found the textbook "Computational Intelligence" to be incredibly helpful.
Programming Collective Intelligence discusses this in the context of Search and Ranking algorithms. Also, in the code available here (in ch.4), the concepts discussed in the book are illustrated in a Python example.
I agree with the other people who said that studying biology is not a good starting point... because there's a lot of irrelevant info in biology. You do not need to understand how a neuron works to recreate its functionality - you only need to simulate its actions. I recommend "How To Create A Mind" by Ray Kurzweil - it goes into the aspects of biology that are relevant for computational models (creating a simulated neuron by combining several inputs and firing once a threshold is reached) but ignores the irrelevant stuff, like how the neuron actually adds those inputs together. (You will just use + and an inequality to compare to a threshold, for example.)
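That "+ and an inequality" really is all there is to a basic threshold unit; for example (illustration only):

    # A McCulloch-Pitts-style threshold neuron: sum the weighted inputs,
    # fire (output 1) once the threshold is reached.
    def neuron(inputs, weights, threshold):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    print(neuron([1, 0, 1], weights=[0.5, 0.5, 0.5], threshold=1.0))  # fires: 1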
I should also point out that the book isn't really about 'creating a mind' - it only focuses on hierarchical pattern recognition / the neocortex. The general theme has been talked about since the 1980s, I believe, so there are plenty of older books that probably contain slightly dated forms of the same information. I have read older documents stating that the vision system, for example, is a multi-layered pattern recognizer. He contends that this applies to the entire neocortex. Also, take his 'predictions' with a grain of salt - his hardware estimates are probably pretty accurate, but I think he underestimates how complicated simple tasks can be (e.g. driving a car). Granted, he has seen a lot of progress (and been part of some of it), but I still think he is over-optimistic. There is a big difference between an AI car being able to drive a mile successfully 90% of the time and the 99.9+% that a human can do. I don't expect any AI to truly out-drive me for at least 20 years... (I don't count BMW's track cars that need to be 'trained' on the actual course, as they aren't really playing the same game.)
If you already have a basic idea of what AI is and how it can be modeled, you may be better off skipping to something more technical.
If you want to quickly learn about applications of some neural network concepts on a real simulator, there is a great online book (now a wiki) called 'Computational Cognitive Neuroscience' at http://grey.colorado.edu/CompCogNeuro/index.php/CCNBook/Main
The book is used at schools as a textbook, and takes you through lots of different brain areas, from individual neurons all the way to higher-order executive functioning.
In addition, each section is augmented with homework 'projects' that are already done for you. Just download, follow the steps, and simulate everything that the chapter talked about. The software they use, Emergent, is a little finicky but incredibly robust: it's the product of more than 10 years of work, I believe.
I went through it in an undergrad class this past semester, and it was great. It walks you through everything step by step.