Co-Simulation: AnyLogic and Unreal Engine

I understand that AnyLogic has a 3D engine to visualize simulations, as described in the AnyLogic documentation. Since the Unreal Engine is one of the most powerful 3D engines, I was wondering whether it would be possible to couple these two tools, perhaps using FMI (the Functional Mock-up Interface). My question, therefore, is how one could iteratively 1) calculate a time step in AnyLogic and 2) visualize the results in Unreal.
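To make the question concrete, here is the kind of loop I imagine, sketched in Python with the FMPy library. This assumes the AnyLogic model can be exported as a co-simulation FMU and that something on the Unreal side listens for state updates; the variable name `agent_x`, the UDP port, and the JSON hand-off are all placeholders of mine:

```python
import json
import socket
from fmpy import read_model_description, extract
from fmpy.fmi2 import FMU2Slave

FMU_PATH = 'model.fmu'             # hypothetical FMU exported from AnyLogic
UNREAL_ADDR = ('127.0.0.1', 7777)  # hypothetical UDP listener inside Unreal

desc = read_model_description(FMU_PATH)
vrs = {v.name: v.valueReference for v in desc.modelVariables}

fmu = FMU2Slave(guid=desc.guid,
                unzipDirectory=extract(FMU_PATH),
                modelIdentifier=desc.coSimulation.modelIdentifier,
                instanceName='anylogic')
fmu.instantiate()
fmu.setupExperiment(startTime=0.0)
fmu.enterInitializationMode()
fmu.exitInitializationMode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
t, dt = 0.0, 0.1
while t < 60.0:
    # 1) calculate one time step in the AnyLogic-exported FMU
    fmu.doStep(currentCommunicationPoint=t, communicationStepSize=dt)
    # 2) read an output and push it to the Unreal-side listener
    x, = fmu.getReal([vrs['agent_x']])
    sock.sendto(json.dumps({'t': t, 'x': x}).encode(), UNREAL_ADDR)
    t += dt

fmu.terminate()
fmu.freeInstance()
```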
Thank you for your replies!

Related

Agent based modeling for Supply Chain Management

I am a final-year student of Computer Systems Engineering. My FYP (final-year project) is agent-based modeling for supply chain management. I don't know how to start, or which software I should use: Repast? NetLogo? AnyLogic?
Please guide me on how to proceed with my project.
Even though this question is opinion-based, I was part of a project whose objective was to compare exactly the software packages you mention: Repast, NetLogo, and AnyLogic. My part in the study was to create the AnyLogic models; someone else created the models in NetLogo and Repast.
There is no paper with the conclusions yet, but the person who ran the study told me that AnyLogic was better than the others in terms of ease of use and scalability, and that Repast had the steepest learning curve. For small-scale projects, though, AnyLogic and NetLogo are equally well suited, so it doesn't matter which one you choose.
Remember, though, that the free version of AnyLogic only allows you to use 10 agents. That goes a long way, but if your project is really big and you want everything for free, you may run into a problem there.
NetLogo is also a good fit because it is a single platform that lets you work with agents while combining code editing, the simulation view, and graph generation in one place. You can also link it with other programming languages such as Python and R.
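As an illustration of that Python link, the pyNetLogo package drives a NetLogo model from a script. A minimal sketch; "Wolf Sheep Predation" is a sample model shipped with NetLogo, and the reporter/command names depend on your own model:

```python
import pyNetLogo

link = pyNetLogo.NetLogoLink(gui=False)          # headless NetLogo instance
link.load_model('Wolf Sheep Predation.nlogo')    # path to your .nlogo file
link.command('setup')
link.repeat_command('go', 100)                   # run 100 ticks
print(link.report('count sheep'))                # pull a value back into Python
link.kill_workspace()
```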

Arduino CAD Simulation in real time

I have the following scenario: I am building an animatronic hand using some flex sensors, an Arduino board, and five servos. No problems on that side.
But I have the following idea: to build a 3D CAD model of the hand in CATIA, or any other CAD program, and have the virtual hand copy the movements of the real hand in real time.
I have done something similar in MATLAB, plotting sensor data in real time. Is it possible to do that in a CAD program, i.e. to get the data from the Arduino and, based on that data, drive the movements of the virtual hand in real time? Can you tell me whether it is possible, and in which program I could do the simulation?
Lucian
This is absolutely possible with CATIA as long as you can get your Arduino data into the computer, and likely with a lot of other CAD packages too, such as SolidWorks, AutoCAD, NX, etc. They offer APIs that would allow you to update the CAD model from a script in "real time" based on your animatronic hand. You could probably go both ways: CAD-drives-hand and hand-drives-CAD. The one issue I can see with CAD software is the real-time aspect: depending on how graphically intensive your CAD model is, there is computational overhead to redraw/update the model position. So if your hand is moving quickly with complex gestures and you have a complex 3D model, there might be some delay in the movements on screen.
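To make the data path concrete, here is a minimal Python sketch of the computer side: read finger angles from the Arduino over serial (pyserial) and forward them to whatever the CAD package's API expects. The serial port name, the message format, and the `set_joint_angle` helper standing in for the actual CAD API call are all my assumptions:

```python
import serial  # pyserial

def set_joint_angle(joint, angle):
    """Placeholder for the CAD API call (e.g. CATIA's COM automation)."""
    print(f'{joint} -> {angle:.1f} deg')

JOINTS = ['thumb', 'index', 'middle', 'ring', 'pinky']

# Assumes the Arduino sketch prints five comma-separated angles per line.
with serial.Serial('/dev/ttyACM0', 115200, timeout=1) as port:
    while True:
        line = port.readline().decode('ascii', errors='ignore').strip()
        if not line:
            continue
        try:
            angles = [float(v) for v in line.split(',')]
        except ValueError:
            continue  # skip malformed lines
        for joint, angle in zip(JOINTS, angles):
            set_joint_angle(joint, angle)
```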
Lastly, you might want to look at animation/CAD/rendering software with a more powerful native rendering engine, like Maya or Rhino, to accomplish this. (I would try Rhino first.)
If you choose CATIA, please ask a separate question (or update this one) specifically about how to control a 3D model via the API, and I can help answer that too.

Generate a real time 3D (mesh) model in Unity using Kinect

I'm currently developing an application whose initial goal is to obtain, in real time, a 3D model of the environment "seen" by a Kinect device. This information would later be used for projection mapping, but that's not an issue for the moment.
There are a couple of challenges to overcome, namely that the Kinect will be mounted on a mobile platform (a robot) and that the model generation has to happen in real time (or close to it).
After a lot of research on this topic, I came up with several possible (?) architectures:
1) Use the depth data obtained from the Kinect, convert it into a point cloud (using PCL for this step; see the sketch after this list), then into a mesh, and then export it into Unity for further work.
2) Use the depth data obtained from the Kinect, convert it into a point cloud (using PCL for this step), export it into Unity, and then convert it into a mesh there.
3) Use KinectFusion, which already has the option of creating a mesh model, and (somehow) automatically load the created mesh into Unity.
4) Use OpenNI + ZDK (+ a wrapper) to obtain the depth map and generate the mesh in Unity.
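For reference, the depth-to-point-cloud step in options 1 and 2 is, as far as I understand, back-projection through the camera intrinsics. A minimal NumPy sketch of what I mean (the focal lengths and principal point below are rough Kinect v1 defaults, not calibrated values):

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project an (H, W) depth image in metres into an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.dstack((x, y, depth_m)).reshape(-1, 3)
    return points[depth_m.reshape(-1) > 0]  # drop pixels with no reading
```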
Quite honestly, I'm kind of lost here. My main issue is that the real-time requirement, combined with having to integrate several software components, makes this a tricky problem. I don't know which of these solutions, if any, are viable, and information/tutorials on these issues aren't nearly as abundant as they are for, say, skeleton tracking.
Any sort of help would be greatly appreciated.
Regards,
Nuno
Sorry, I might not be providing a solution for real-time mesh creation within Unity, but the process discussion here was interesting enough for me to reply.
In the hard science fiction novel Memories with Maya there is a discussion of exactly such a scenario:
"“Point taken,” he said. “So… Satish showed me a demo of the Quad [Quad=Drone] acquiring real-time depth and texture maps.”
“Nothing new in that,” I said.
“Yeah, but look above us.”
I tilted my head up. The crude shape of the Quad came into view.
“The Quad is here, but you can't see it because the FishEye [Fisheye=Kinect 2] is on it aimed straight ahead.”
“So it's mapping video texture over live geometry? Cool,” I said.
“Yeah, the breakthrough is I can freeze a frame… freeze real life as it were, step out of the scene and study it.”
“All you do is block out the live world with the cross polarizers?”
“Yeah,” he said. “It's a big deal for AYREE to be able to use such data-sets.”
“The resolution has improved,” I said.
“Good observation,” he said. “So has the range sensing. The lens optics have also been upgraded.”
“I noticed that if I turn around I don't see the live feed, just the empty street,” I said.
“Yes, of course,” he replied. “The Quad is facing the other way around. It's why I'm standing in front of you. The whole street, however, is a 3D model done by a standard laser scan taken from the top of that high tower.”
Krish pointed to a building block at the far end of the street. I turned back to the live 3D view again. He walked in front of me.
“This is uber cool. Everyone looks so real.”
“Haha. You should see how cool it is when you're here in person with the Wizer on,” he said. “I'm here watching these real people pass by, only they have a mesh of themselves mapped onto them.”
“Ahhh! Yes.”
“Yeah, it's like they have living paint on them. I feel like reaching out and touching, just to feel the texture.”...
The work you're thinking of doing in this area, and this use of a live mesh, goes far beyond projection mapping for events, for sure!
Wishing you the best on the project, and I will be following your updates.
Some of the science behind the story is on www.dirrogate.com if the topic interests you.
Kind Regards.
I would use Kinect Fusion, as it has a sample with the ability to export to .obj, which Unity supports. You can save the mesh automatically and import it into Unity to generate the mesh asset automatically. If you have multiple Kinects, Microsoft even has a sample showing the basics of Kinect Fusion with multiple sensors. And since Fusion is already written for you, there is not much code you will have to write.
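One simple way to do the "automatically import into Unity" part is to watch Fusion's export folder and copy each new .obj into the Unity project's Assets folder, which triggers Unity's own re-import. A minimal polling sketch; both folder paths are assumptions:

```python
import shutil
import time
from pathlib import Path

EXPORT_DIR = Path('C:/KinectFusion/exports')    # where Fusion saves .obj files (assumed)
ASSETS_DIR = Path('C:/MyProject/Assets/Scans')  # inside the Unity project (assumed)

seen = set()
while True:
    for obj in EXPORT_DIR.glob('*.obj'):
        if obj.name not in seen:
            # Unity re-imports new assets when the editor regains focus
            shutil.copy(obj, ASSETS_DIR / obj.name)
            seen.add(obj.name)
    time.sleep(1.0)
```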
Here is an example of a mesh from Fusion with one camera:
I do want you to notice how many vertices there are, though... this could cause performance problems later on.
Good luck!

How to use neural networks to solve "soft" solutions?

I'm considering using a neural network to power the enemies in a space shooter game I'm building, and I'm wondering: how do you train a neural network when there is no single definitive set of good outputs for it?
I'm studying neural networks at the moment, and they seem quite useless without well-defined input and output encodings, and they don't scale at all with complexity (see http://en.wikipedia.org/wiki/VC_dimension). That's why neural network research found so little application after the initial hype 20-30 years ago, while semantic/state-based AI took over everyone's interest because of its success in real-world applications.
So a good place to start might be to figure out how to numerically represent the state of the game as inputs for the neural net.
The next thing would be to figure out what kind of output would correspond to actions in the game.
Then think about the structure of the neural network to use. To get interesting, complex behavior out of a neural network, it almost has to be recurrent; you'll need a recurrent network because recurrent networks have 'memory'. Beyond that you don't have much else to go on, and recurrent networks with any complex structure are really hard to train to behave.
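To make the first two steps concrete, here is a hypothetical sketch of what a state encoding and an action decoding might look like for a space shooter. Every name and the choice of features are mine, purely for illustration:

```python
import math

def encode_state(enemy_pos, enemy_vel, player_pos, arena_size):
    """Turn raw game state into a fixed-length input vector, roughly in [-1, 1]."""
    dx = (player_pos[0] - enemy_pos[0]) / arena_size
    dy = (player_pos[1] - enemy_pos[1]) / arena_size
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) / math.pi
    return [dx, dy, distance, bearing, enemy_vel[0], enemy_vel[1]]

def decode_action(outputs):
    """Map three network outputs onto game actions."""
    thrust, turn, fire = outputs
    return {'thrust': max(0.0, thrust),  # forward acceleration
            'turn': turn,                # signed turn rate
            'fire': fire > 0.5}          # threshold into a discrete decision

# Example: a dummy output vector becomes a concrete action
print(decode_action([0.8, -0.2, 0.9]))  # {'thrust': 0.8, 'turn': -0.2, 'fire': True}
```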
The areas where neural networks have been successful tend to be classification (image, audio, grammar, etc.), with limited success in statistical prediction (what word do we expect to come after this one? what will the stock price be tomorrow?).
In short, it's probably better to use neural nets for a small portion of the game rather than as the core enemy AI.
You can check out AI Dynamic game difficulty balancing for various AI techniques and references.
(IMO, you can implement enemy behaviors, like "surround the enemy", which will be really cool, without delving into advanced AI concepts)
Edit: since you're making a space shooter game and you want some kind of AI for your enemies, I believe you'll find this link interesting: Steering Behaviors For Autonomous Characters
Have you considered that it's quite possible to modify an FSM in response to stimulus? It is just a table of numbers, after all; you can hold it in memory somewhere and change the numbers as you go. I wrote about it a bit in one of my blog-fuelled deliriums, and it oddly got picked up by a game-AI news site. Then the guy who built a Ms. Pac-Man AI that could beat humans (and got on the real news) left a comment on my blog with a link to even more useful information.
Here's my blog post with my incoherent ramblings about an idea I had for using Markov chains to continually adapt to a game environment, and perhaps overlay and combine what the computer has learned about how the player reacts to game situations.
http://bustingseams.blogspot.com/2008/03/funny-obsessive-ideas.html
And here's the link to the excellent resource on reinforcement learning that Mr. Smarty McPacman posted for me.
http://www.cs.ualberta.ca/%7Esutton/book/ebook/the-book.html
Here's another cool link:
http://aigamedev.com/open/architecture/online-adaptation-game-opponent/
These are not neural-net approaches, but they do adapt and continually learn, and they are probably better suited to games than neural networks.
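In that adaptive spirit, a tabular Q-learning loop (the core technique in the Sutton & Barto book linked above) is only a few lines. The action names and the idea of hashing game situations into discrete states are my illustration, not anything from those links:

```python
import random
from collections import defaultdict

ACTIONS = ['approach', 'evade', 'strafe', 'fire']  # hypothetical enemy actions
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1             # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term reward

def choose_action(state):
    """Epsilon-greedy: mostly exploit the table, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state):
    """Standard Q-learning update, called once per game tick."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```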
I'll refer you to two of Matthew Buckland's books.
Programming Game AI by example
AI Techniques for Game Programming
The second book goes into back-propagation ANNs, which is what most people mean when they talk about NNs anyway.
That said, I think the first book is more useful if you want to create meaningful game AI. There's a nice, meaty section on using FSMs successfully (and yes, it's easy to trip yourself up with an FSM).

Robot simulation environments

I would like to put together a list of notable robot simulation environments, including their advantages and disadvantages. Some examples I know of are Webots and Player/Stage.
ROS will visualize your robot and any data you've recorded from it.
Packages to check out would be rviz and nav_view.
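For a taste of how little code rviz needs, here is a minimal rospy node that publishes a sphere marker on the conventional `visualization_marker` topic (this assumes a ROS 1 install and an existing `base_link` frame):

```python
import rospy
from visualization_msgs.msg import Marker

rospy.init_node('marker_demo')
pub = rospy.Publisher('visualization_marker', Marker, queue_size=10)
rate = rospy.Rate(10)

while not rospy.is_shutdown():
    m = Marker()
    m.header.frame_id = 'base_link'          # frame the marker is attached to
    m.header.stamp = rospy.Time.now()
    m.type = Marker.SPHERE
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = 0.2  # 20 cm sphere
    m.color.r = m.color.a = 1.0              # opaque red
    pub.publish(m)
    rate.sleep()
```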
This made me remember the breve project.
breve is a free, open-source software package which makes it easy to build 3D simulations of multi-agent systems and artificial life.
There is also a wiki page listing Robotics simulators.
Microsoft Robotics Studio/Microsoft Robotics Developer Studio 2008
Also read this article in MSDN Magazine.
It all depends on what you want to do with the simulation.
I do legged-robot simulation, so I come at this from a different perspective than mobile robotics, but...
If you are interested in dynamics, then one of the oldest but most difficult to use is SD/FAST; the company that originally made it was acquired by a large CAD outfit.
You might try heading to http://www.sdfast.com/
It will cost you a bit of money, but I trust the accuracy of the simulation. There is no contact or collision model, so you have to roll your own. I have used it to simulate bipeds, swimming fish, etc. There is also no visualization, so it is for the hardcore programmer. However, it is well respected among us old folk.
The Open Dynamics Engine (http://www.ode.org/) is used by people for "easier" simulation. It comes with an integrator and a primitive visualization package, and there are Python bindings (hurray for Python!).
The built-in friction model is... well, not very well documented, and it did not make sense to me. Also, the simulations can suddenly "fly apart" for no apparent reason, so they may or may not be accurate.
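A minimal example of those Python bindings (PyODE), in the style of the PyODE tutorials: drop a sphere under gravity and step the world. There is no collision geometry here, so the sphere just falls:

```python
import ode

world = ode.World()
world.setGravity((0.0, -9.81, 0.0))

body = ode.Body(world)
mass = ode.Mass()
mass.setSphere(2500.0, 0.05)   # density in kg/m^3, radius in m
body.setMass(mass)
body.setPosition((0.0, 2.0, 0.0))

dt = 0.01
for _ in range(100):
    world.step(dt)             # fixed-step integration
    x, y, z = body.getPosition()
    print(f'y = {y:.3f}')
```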
Now, MapleSoft (in beautiful Waterloo, Canada) has come out with MapleSim. It will set you back a bit of money, but here is what I like about it: it goes beyond just robotics. You can model virtually anything; I am sure you can simulate the suspension system of a car, gears, engines... I think it even interfaces with electrical circuit simulation. So if you are building a high-performance product, MapleSim is a strong contender. Go to www.maplesoft.com and search for it.
They are pretty nice about giving you a 30-day eval copy.
Of course, you can go home-brew. You can solve the Lagrange-Euler equations of motion for most simple robots using a symbolic computation program like Maple or Mathematica. (EDIT: I have not been able to do certain derivatives elegantly in Maple; I had to resort to a hack.)
However, be aware of speed issues.
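If you don't own Maple or Mathematica, SymPy can do the same symbolic derivation for free. A sketch for the simplest case, a single pendulum (my example, not from the original answer):

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)

# Lagrangian L = T - V for a point-mass pendulum of length l
T = sp.Rational(1, 2) * m * (l * theta.diff(t))**2
V = -m * g * l * sp.cos(theta)
L = T - V

# Euler-Lagrange equation: d/dt(dL/d(theta')) - dL/d(theta) = 0
eom = sp.diff(L.diff(theta.diff(t)), t) - L.diff(theta)
print(sp.simplify(eom))  # m*l**2*theta'' + g*l*m*sin(theta)
```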
Finally, for more biologically motivated work, you might want to look at OpenSim (not to be confused with OpenSimulator).
EDIT: OpenSim shares a team member with SD/FAST.
There are lots of other specialized simulators. But beware.
In sum, here are my evaluation criteria for a simulator for robot-oriented work:
(1) What kind of collision model does it have? If it is a very stiff elastic collision, you may have numerical stability problems during collisions.
(2) Visualization: can you add different terrains, etc.?
(3) Handy graphical building tools, so you don't have to code-then-see-what-you-get. Handling a complex system (say, a full-scale humanoid) is hard to think about in your head.
(4) What is the complexity of the underlying simulation algorithm? If it is O(N), that is great; but it could be O(N^4), as would be the case for a straight Lagrange-Euler derivation, and then your system just will not scale, no matter how fast your machine is.
(5) How accurate is it, and do you care?
(6) Does it help you integrate sensors? For mobile robots you need a "robot's-eye view".
(7) If it does visualization, can it automatically follow the object as it moves, or do you have to chase it around?
Hope that helps!
It's not as impressive looking as Webots, but RobotBasic is free, easy to learn, and useful for prototyping simple robot movement algorithms. You can also program a BasicStamp from the IDE.
I've been programming against SimSpark, the open-source simulation engine behind the RoboCup 3D Simulated Soccer League.
It's extensible for different simulations: you can plug in your own sensors, actuators, and models using C++, Ruby, and/or RSG (Ruby Scene Graph) files.
ABB has quite a nice solution called RobotStudio for simulating their huge industrial robots. I don't think it's free, and I doubt you'll get much fun out of it, but it's quite impressive. Here's a page about it.
I have been working with Carmen (http://carmen.sourceforge.net/) and find it useful.
One disadvantage of Carmen is the documentation: with all respect, I think the web page is a bit outdated and insufficient. So I would like to hear from other people with experience working with Carmen, or about student reports/projects dealing with it.
You can find a great list of simulation environments at http://www.intorobotics.com/robotics-simulation-softwares-with-3d-modeling-and-programming-support/
MRDS is one of the best, and it's free. LabVIEW is also good for robotics.
National Instruments' LabVIEW is a graphical programming environment for developing measurement, test, and control systems.
It can be used for 3D control simulation together with SolidWorks.
MRDS is free and is one of the best simulation environments for robotics. Workspace can also be used; please check this link if you want a complete list of robotics simulation software.
TRIK Studio has a nice and clear 2D model simulator, and also visual and textual programming environments for it. They will also soon support 3D modeling tools based on the MORSE simulator. It is free and open source and has a multi-language interface.