Google Coral Dev Board vs Asus Tinker Edge T - TPU

Does this:
https://www.asus.com/gr/AIoT-Industrial-Solutions/Tinker-Edge-T/
differ in any way from this:
https://coral.ai/products/dev-board/
For example, can I train a custom CNN on it? Does anyone have experience and advice? I want to buy an SBC with a TPU in order to train a CNN faster, but the information I found on Google doesn't help...

The Asus Tinker Edge T actually has the same Edge TPU that is on the Coral Dev Board. I believe the SoM is exactly the same and only the baseboards are different.
The Edge TPU is targeted at inference only, not training, so you probably won't benefit from it for training a CNN. However, you can take advantage of the fast inference time to do last-layer backprop on the device, which allows some transfer learning:
https://coral.ai/docs/edgetpu/retrain-classification-ondevice-backprop/
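For plain inference, something along these lines should work as a minimal sketch, assuming the Edge TPU runtime (libedgetpu) is installed and that the model file name below stands in for a model already compiled with the Edge TPU compiler:

# Minimal sketch: classify one input on the Edge TPU via the TFLite runtime.
# "model_edgetpu.tflite" is a placeholder; on Linux/Mendel the delegate library
# is libedgetpu.so.1 (it differs on other platforms).
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy input with the right shape/dtype; replace with a real preprocessed image.
image = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("top class index:", int(np.argmax(scores)))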

Related

Evaluate the impact of one of the features on the target feature with a neural network

Using a neural network (an LSTM), I'm predicting a target feature from multiple input features. Good accuracy is observed in operation. But can I show the effect of any single feature on the target feature?
For example, I am predicting a user purchase on a site and want to estimate the impact of a discount on the purchase. That is, I want to show whether a given discount will or will not lead to a purchase. How can I do this using the existing neural network I use for prediction?
Neural networks are black-box models; they do not provide explanations for their decisions. Answering such a question is essentially a whole research area in itself.
There are, however, techniques to get insights like that from an already trained model.
You can start with:
Partial Dependence Plots (show you how a change in a feature's value will affect the target variable). A great explanation can be found here: https://www.kaggle.com/dansbecker/partial-dependence-plots
Permutation Importance (shows you how important a feature is for the model to make its prediction). A great explanation can be found here: https://www.kaggle.com/dansbecker/permutation-importance
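As a rough sketch of what both techniques look like in code, here shown with scikit-learn on synthetic data and a gradient-boosted model as stand-ins for your real data and LSTM (a recent scikit-learn is assumed):

# Partial dependence and permutation importance on a generic trained model.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence, permutation_importance

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence of the prediction on feature 0 (e.g. "discount").
pd_result = partial_dependence(model, X, features=[0])
print("average predictions along the grid:", pd_result["average"][0][:5])

# Permutation importance: how much the score drops when a feature is shuffled.
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(perm.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")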

Neural Networks for Pattern Recognition

Hey guys, I'm wondering if anybody can help me with a starting point for the design of a neural network system that can recognize visual patterns, e.g. checks and stripes. I have knowledge of the theory, but little practical knowledge, and web searches give me information overload. Can anybody recommend a good book or tutorial that is more focused on the practical side?
Thank you!
Are you only trying to recognize patterns such as checkerboards and stripes? Do you have to use a neural network system?
Basically, you want to define a bunch of simple features on the board and use them as input to the learning system. It can often be easier to define a lot of binary features and feed them into a single-layer network (which essentially becomes linear regression).
Look at how neural networks were used for learning to play backgammon (http://www.research.ibm.com/massive/tdl.html), as this will help give you a sense of the types of features that make learning with a neural network work well.
As suggested above, you probably want to reduce your image to a set of features. A corner detector (perhaps the Harris method) could be used to find features in the checkerboard pattern. Likewise, an edge detector (perhaps Canny) could be used in the stripes case. The Hough transform is also a good line-detection method.
MATLAB's image processing toolbox contains these methods, so you might try those for rapid prototyping. OpenCV is an open-source computer vision library that also provides these tools (and many others).
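Since OpenCV came up, here is a rough sketch of the feature-extraction idea; the file name is a placeholder and the thresholds would need tuning for real images:

# Turn an image into a few simple "pattern" features with OpenCV, which could
# then be fed to a small classifier or single-layer network.
import cv2
import numpy as np

img = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

# Corner response (Harris): checked patterns produce many strong corners.
corners = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)
n_corners = int((corners > 0.01 * corners.max()).sum())

# Edges (Canny) and straight lines (Hough): stripes produce long parallel lines.
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
n_lines = 0 if lines is None else len(lines)

# A tiny feature vector for a downstream classifier.
features = np.array([n_corners, n_lines, edges.mean()])
print(features)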

How to use neural networks to find "soft" solutions?

I'm considering using a neural network to power my enemies in a space shooter game I'm building, and I'm wondering: how do you train neural networks when there is no one definitive good set of outputs for the network?
I'm studying neural networks at the moment, and they seem quite useless without well-defined input and output encodings, and they don't scale at all with complexity (see http://en.wikipedia.org/wiki/VC_dimension). That's why neural network research has had so little application since the initial hype more than 20-30 years ago, while semantic/state-based AI took over everyone's interest because of its success in real-world applications.
So a good place to start might be to figure out how to numerically represent the state of the game as inputs for the neural net.
The next thing would be to figure out what kind of output would correspond to actions in the game.
Then think about the structure of the neural network to use. To get interesting, complex behavior from a neural network, the network almost has to be recurrent. You'll need a recurrent network because it has 'memory', but beyond that you don't have much else to go on. However, recurrent networks with any complex structure are really hard to train to behave well.
The areas where neural networks have been successful tend to be classification (image, audio, grammar, etc.) and, with limited success, statistical prediction (what word would we expect to come after this word, what will the stock price be tomorrow?).
In short, it's probably better for you to use neural nets for a small portion of the game rather than as the core enemy AI.
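To make the encoding idea concrete, here is a hypothetical sketch of game state -> feature vector -> tiny network -> action; the Ship fields, action list, and random (untrained) weights are all made up for illustration:

from dataclasses import dataclass
import numpy as np

@dataclass
class Ship:            # hypothetical game entity
    x: float
    y: float
    vx: float
    vy: float
    health: float

ACTIONS = ["thrust", "turn_left", "turn_right", "fire"]

def encode_state(enemy: Ship, player: Ship) -> np.ndarray:
    """Pack relative position, own velocity and health into a fixed-size vector."""
    return np.array([player.x - enemy.x, player.y - enemy.y,
                     enemy.vx, enemy.vy, enemy.health / 100.0])

rng = np.random.default_rng(0)
W = rng.normal(size=(len(ACTIONS), 5))  # a single linear layer; in a real game
                                        # these weights would have to be learned

def choose_action(enemy: Ship, player: Ship) -> str:
    scores = W @ encode_state(enemy, player)
    return ACTIONS[int(np.argmax(scores))]

print(choose_action(Ship(0, 0, 1, 0, 80), Ship(10, 5, 0, 0, 100)))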
You can check out AI Dynamic game difficulty balancing for various AI techniques and references.
(IMO you can implement enemy behaviors like "surround the enemy", which will be really cool, without delving into advanced AI concepts.)
Edit: since you're making a space shooter game and you want some kind of AI for your enemies, I believe you'll find this link interesting: Steering Behaviors For Autonomous Characters
Have you considered that it's easily possible to modify an FSM in response to stimulus? It is just a table of numbers, after all; you can hold it in memory somewhere and change the numbers as you go. I wrote about it a bit in one of my blog-fuelled deliriums, and it oddly got picked up by a game AI news site. Then the guy who built a Ms. Pac-Man AI that could beat humans, and got on the real news, left a comment on my blog with a link to even more useful information.
Here's my blog post with my incoherent ramblings about an idea I had for using Markov chains to continually adapt to a game environment, and perhaps overlay and combine something the computer has learned about how the player reacts to game situations:
http://bustingseams.blogspot.com/2008/03/funny-obsessive-ideas.html
And here's the link to the excellent resource on reinforcement learning that Mr. Smarty McPacman posted for me:
http://www.cs.ualberta.ca/%7Esutton/book/ebook/the-book.html
Here's another cool link:
http://aigamedev.com/open/architecture/online-adaptation-game-opponent/
These are not neural net approaches, but they do adapt and continually learn, and are probably better suited to games than neural networks.
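To illustrate the "FSM as a table of numbers" idea, here is a toy sketch of a Markov-style transition table that gets nudged by game feedback; the state names and the update rule are made up:

import random

STATES = ["patrol", "chase", "evade"]
# weights[a][b] = tendency to switch from behaviour a to behaviour b
weights = {a: {b: 1.0 for b in STATES} for a in STATES}

def next_state(current):
    """Sample the next behaviour in proportion to the current transition weights."""
    total = sum(weights[current].values())
    r, acc = random.uniform(0, total), 0.0
    for s, w in weights[current].items():
        acc += w
        if r <= acc:
            return s
    return current

def reinforce(prev, chosen, worked, lr=0.2):
    """Strengthen transitions that worked against the player, weaken the rest."""
    weights[prev][chosen] = max(0.1, weights[prev][chosen] + (lr if worked else -lr))

state = "patrol"
for _ in range(5):
    new = next_state(state)
    reinforce(state, new, worked=random.random() < 0.5)  # stand-in for game feedback
    state = new
print(weights)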
I'll refer you to two of Matthew Buckland's books.
Programming Game AI by example
AI Techniques for Game Programming
The second book goes into back-propagation ANNs, which is what most people mean when they talk about NNs anyway.
That said, I think the first book is more useful if you want to create meaningful game AI. There's a nice, meaty section on using FSMs successfully (and yes, it's easy to trip yourself up with an FSM).

Water simulation with a grid

For a while I've been attempting to simulate flowing water with algorithms I've scavenged from "Real-Time Fluid Dynamics for Games". The trouble is that I don't get water-like behavior out of those algorithms.
I guess I'm either doing something wrong or those algorithms aren't really suitable for water-like fluids.
What am I doing wrong with these algorithms? Are these algorithms correct at all?
I have the associated project in a Bitbucket repository (it requires gletools and the newest pyglet to run).
Voxel-based solutions are fine for simulating liquids, and are frequently used in film.
Ron Fedkiw's website gives some academic examples - all of the ones there are based on a grid. That code underpins many of the simulations used by Pixar and ILM.
A good source is also Robert Bridson's Fluid Simulation course notes from SIGGRAPH and his website. He has a book "Fluid Simulation for Computer Graphics" that goes through developing a liquid simulator in detail.
The most specific answer I can give to your question is that Stam's real-time fluids for games method is focused on smoke, i.e. cases where there isn't a boundary between the fluid (water) and an external air region. Basically, smoke and liquids use the same underlying mechanism, but for a liquid you also need to track the position of the liquid surface and apply appropriate boundary conditions on that surface.
Cem Yuksel presented a fantastic talk about his Wave Particles at SIGGRAPH 2007. They give a very realistic effect for quite a low cost. He was even able to simulate interaction with rigid bodies like boxes and boats. Another interesting aspect is that the boat motion isn't scripted, it's simulated via the propeller's interaction with the fluid.
At the conference he said he was planning to release the source code, but I haven't seen anything yet. His website contains the full paper and the videos he showed at the conference.
Edit: Just saw your comment about wanting to simulate flowing liquids rather than rippling pools. This wouldn't be suitable for that, but I'll leave it here in case someone else finds it useful.
What type of water are you trying to simulate? Pools of water that ripple, or flowing liquids?
I don't think I've ever seen flowing water, except in rendered movies. Rippling water is fairly easy to do; this site usually crops up for this type of question.
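For reference, here is a minimal sketch of the usual two-buffer height-field ripple trick (suitable for rippling pools, not flowing liquids; the grid size, damping factor and initial disturbance are arbitrary choices):

import numpy as np

N = 64
prev = np.zeros((N, N))          # height field at the previous step
curr = np.zeros((N, N))          # height field at the current step
curr[N // 2, N // 2] = 1.0       # poke the surface once
damping = 0.99

def step(curr, prev):
    # new height = average of the 4 neighbours * 2 - previous height, then damp
    neigh = (np.roll(curr, 1, 0) + np.roll(curr, -1, 0) +
             np.roll(curr, 1, 1) + np.roll(curr, -1, 1)) / 2.0
    return (neigh - prev) * damping

for _ in range(100):
    new = step(curr, prev)
    prev, curr = curr, new

print("height range:", curr.min(), curr.max())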
Yeah, this type of voxel-based solution only really works if your liquid is confined to very discrete and static boundaries.
For simulating flowing liquid, do some investigation into particles. Quite a lot of progress has been made recently in accelerating them on the GPU, and you can get some stunning results.
Take a look at http://nzone.com/object/nzone_cascades_home.html as a great example of what can be achieved.

Robot simulation environments

I would like to make a list of notable robot simulation environments, including their advantages and disadvantages. Some examples I know of are Webots and Player/Stage.
ROS will visualize your robot and any data you've recorded from it.
Packages to check out would be rviz and nav_view.
This made me remember the breve project.
breve is a free, open-source software package which makes it easy to build 3D simulations of multi-agent systems and artificial life.
There is also a wiki page listing robotics simulators.
Microsoft Robotics Studio/Microsoft Robotics Developer Studio 2008
Also read this article in MSDN Magazine.
It all depends on what you want to do with the simulation.
I do legged-robot simulation, so I'm coming from a perspective that is different from mobile robotics, but...
If you are interested in dynamics, then one of the oldest but most difficult packages to use is SD/FAST. The company that originally made it was acquired by a large CAD outfit.
You might try heading to http://www.sdfast.com/
It will cost you a bit of money, but I trust the accuracy of the simulation. There is no contact or collision model, so you have to roll your own. I have used it to simulate bipeds, swimming fish, etc. There is also no visualization, so it is for the hardcore programmer. However, it is well respected among us old folk.
The Open Dynamics Engine (http://www.ode.org/) is used by many people for "easier" simulation. It comes with an integrator and a primitive visualization package. There are Python bindings (hurray for Python!).
The built-in friction model is... well, not very well documented, and did not make sense to me. Also, the simulations can suddenly "fly apart" for no apparent reason. The simulations may or may not be accurate.
Now, MapleSoft (in beautiful Waterloo, Canada) has come out with MapleSim. It will set you back a bit of money, but here is what I like about it:
It goes beyond just robotics. You can simulate virtually anything. I am sure you can simulate the suspension system of a car, gears, engines... I think it even interfaces with electrical circuit simulation. So, if you are building a high-performance product, then MapleSim is a strong contender. Go to www.maplesoft.com and search for it.
They are pretty nice about giving you an eval copy for 30 days.
Of course, you can go home-brew. You can solve the Lagrange-Euler equations of motion for most simple robots using a symbolic computation program like Maple or Mathematica.
EDIT: I have not been able to do certain derivatives elegantly in Maple; I had to resort to a hack.
However, be aware of speed issues.
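As a small home-brew sketch of that approach, here the equation of motion of a simple pendulum is derived symbolically with SymPy instead of Maple or Mathematica:

import sympy as sp

t = sp.symbols("t")
m, l, g = sp.symbols("m l g", positive=True)
theta = sp.Function("theta")(t)

# Kinetic and potential energy of a point mass on a massless rod.
T = sp.Rational(1, 2) * m * (l * theta.diff(t)) ** 2
V = -m * g * l * sp.cos(theta)
L = T - V

# Euler-Lagrange equation: d/dt(dL/dq') - dL/dq = 0
eom = sp.diff(L.diff(theta.diff(t)), t) - L.diff(theta)
print(sp.simplify(eom))   # -> m*l**2*theta'' + m*g*l*sin(theta)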
Finally, for more biologically motivated work, you might want to look at OpenSim (not to be confused with OpenSimulator).
EDIT: OpenSim shares a team member with SD/FAST.
There are lots of other specialized simulators. But beware.
In sum, here are my evaluation criteria for a simulator for robot-oriented work:
(1) What kind of collision model does it have? If it is a very stiff elastic collision, you may have problems with numerical stability during collisions.
(2) Visualization: can you add different terrains, etc.?
(3) Handy graphical building tools, so you don't have to code-then-see-what-you-get. A complex system (say, a full-scale humanoid) is hard to think about in your head.
(4) What is the complexity of the underlying simulation algorithm? If it is O(N), that is great. But it could be O(N^4), as would be the case for a straight Lagrange-Euler derivation... then your system just will not scale, no matter how fast your machine is.
(5) How accurate is it, and do you care?
(6) Does it help you integrate sensors? For mobile robots you need a "robot's-eye view".
(7) If it does visualization, can it do things like automatically follow the object as it moves, or do you have to chase it around?
Hope that helps!
It's not as impressive looking as Webots, but RobotBasic is free, easy to learn, and useful for prototyping simple robot movement algorithms. You can also program a BasicStamp from the IDE.
I've been programming against SimSpark. It's the open-source simulation engine behind the RoboCup 3D Simulated Soccer League.
It's extensible for different simulations. You can plug in your own sensors, actuators and models using C++, Ruby and/or RSG (Ruby Scene Graph) files.
ABB has quite a solution called RobotStudio for simulating their huge industrial robots. I don't think it's free, and I doubt you'll get much fun out of it, but it's quite impressive. Here's a page about it.
I have been working with Carmen (http://carmen.sourceforge.net/) and find it useful.
One of the disadvantages of Carmen is the documentation; with all respect, I think the webpage is a bit outdated and insufficient. So I would like to hear from other people with experience working with Carmen, or about student reports/projects dealing with Carmen.
You can find a great list of simulation environments at http://www.intorobotics.com/robotics-simulation-softwares-with-3d-modeling-and-programming-support/
MRDS is one of the best, and it's free. LabVIEW is also good for use in robotics.
National Instruments' LabVIEW is a graphical programming environment for developing measurement, test, and control systems.
It can be used for 3D control simulation with SolidWorks.
MRDS is free and is one of the best simulation environments for robotics. Workspace can also be used; please check this link if you want a complete list of robotics simulation software.
TRIK Studio has a nice, clear 2D model simulator and also visual and textual programming environments for it. They will also soon support 3D modeling tools based on the Morse simulator. It is free and open source, and has a multi-language interface.