How can I modify driver "aggression" in the Road Traffic Library? (AnyLogic)

I'm doing a road traffic simulation and have noticed that the vehicles are far too passive. I know for a fact that in the area this simulation is meant to represent, drivers are far more "aggressive": they give less headway and don't yield as much.
I have scoured every piece of official and unofficial documentation but cannot find an answer.

You can't.
The library is a black box that only "opens" the settings you can see in the blocks. The only additional settings you can play with (beyond the block properties) are in the "Road Network descriptor".
But there is no aggressiveness setting. If driver behavior is the key purpose of your model, you would either have to create your own agent-based driving logic from scratch or check other tools that focus specifically on cars (though those are typically far more restrictive on dimensions where AnyLogic is flexible; it all depends on your model's purpose). A sketch of the build-it-yourself route follows.
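If you do go the build-it-yourself route, a standard starting point is a car-following model such as the Intelligent Driver Model (IDM), whose parameters map naturally onto "aggressiveness" (a shorter desired headway and harder acceleration make a driver more aggressive). Below is a minimal sketch of the IDM update in Python; the same arithmetic ports directly to the Java you would write inside an AnyLogic agent, and none of these names are AnyLogic API.

    import math

    def idm_acceleration(v, v_lead, gap,
                         v0=30.0,    # desired speed (m/s)
                         T=1.5,      # desired time headway (s); lower = more aggressive
                         a=1.0,      # max acceleration (m/s^2); higher = more aggressive
                         b=2.0,      # comfortable deceleration (m/s^2)
                         s0=2.0,     # minimum standstill gap (m)
                         delta=4.0): # acceleration exponent
        """Intelligent Driver Model: follower's acceleration, given its speed v,
        the leader's speed v_lead, and the bumper-to-bumper gap (m)."""
        dv = v - v_lead  # closing speed
        s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
        return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

    # An "aggressive" driver: short headway, hard acceleration
    print(idm_acceleration(v=25.0, v_lead=20.0, gap=30.0, T=0.8, a=2.5))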

Related

Making an overlay with Vulkan [duplicate]

I am a mathematician, not a programmer. I know the basics of programming and am a fairly advanced power user on both Linux and Windows.
I know some C and some Python, but not much.
I would like to make an overlay so that when I start a game it can show information about AMD and NVIDIA GPUs, such as frame time and FPS. I am fairly certain that the current system benchmarks use to compare two GPUs is flawed: small instances and scenes that bump up the FPS momentarily (but are totally irrelevant in terms of user experience) result in a higher average FPS number and mislead the market, whether intentionally or not. (For example, in one game, probably a COD title, there was a highly tessellated entity on the map that wasn't even visible to the player, which led AMD GPUs to seemingly underperform when roaming through that area and lowered their average FPS count.)
I have an idea of how to calculate GPU performance in theory, but I don't know how to harvest the data from the GPU. Could you refer me to API manuals or references to help me make such an overlay possible?
I would like to study as little as possible (by that I mean I would like to learn only what I absolutely have to learn in order to get the job done; I don't intend to become a coder).
I thank you in advance.
This is generally what the Vulkan layer system is for: it allows you to intercept API commands and inject your own. But it is nontrivial to code yourself. Here are some pre-existing open-source options for you:
To get at the timing info and draw your custom overlay, you can use (and modify) a tool like OCAT. It supports Direct3D 11, Direct3D 12, and Vulkan apps.
To just get the timing (and other interesting info) as CSV, you can use a command-line tool like PresentMon. It should work with D3D, and I have been using it with Vulkan apps too; it seems to accept them.
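Once you have a capture, the post-processing that addresses your concern about misleading averages is straightforward. Here is a hedged sketch in Python: it assumes a PresentMon-style CSV with a MsBetweenPresents column (check the header of your PresentMon version, since column names have changed across releases), and capture.csv is a placeholder path.

    import csv
    import statistics

    def frame_stats(csv_path):
        with open(csv_path, newline="") as f:
            frame_times = [float(row["MsBetweenPresents"])
                           for row in csv.DictReader(f)]
        frame_times.sort()
        avg_fps = 1000.0 / statistics.mean(frame_times)
        # "1% low": average FPS over the slowest 1% of frames. This is the
        # metric that punishes momentary stutter, which a plain average hides.
        worst = frame_times[int(len(frame_times) * 0.99):]
        low_1pct_fps = 1000.0 / statistics.mean(worst)
        return avg_fps, low_1pct_fps

    avg, low = frame_stats("capture.csv")
    print(f"average: {avg:.1f} FPS, 1% low: {low:.1f} FPS")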

AS/RS simulation in AnyLogic Simulation Software

I need to simulate a fully functional AS/RS in warehousing. However, I am a complete beginner in this field. Can someone please let me know if I could get a ready-made simulation file? If not, please let me know how to learn to do it.
I have checked out the AnyLogic website and its tutorials (they are too lengthy).
Fortunately for you, I have developed an AS/RS example that is a ready-made, downloadable model, available at https://cloud.anylogic.com/model/1f5c7d1f-8782-40ac-957d-d3ba97bf6bf0?mode=SETTINGS
In general, when you want a model example, the first thing you should do is check the AnyLogic Cloud; if you are lucky, the model is downloadable. Unfortunately, most people don't share their models.
It really depends on what type of AS/RS you are modeling (shuttle versus unit load) and the level of detail you need. Do you need specific slotting and inventory tracking, or simpler black-box delays with the assumption that inventory is always available? The level of detail depends on the questions you are asking and should be settled before starting development. If results from the model are critical and urgent, and you need anything more than simple black-box delays, you should consider outsourcing to an experienced professional until you can get your AnyLogic skills up to speed. For the simple end of that spectrum, see the sketch below.
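To make the "black box delay" option concrete: for a unit-load crane, a common textbook approximation is Chebyshev travel time, because the crane drives along the aisle and hoists vertically at the same time, so travel time is the maximum of the two axes. A minimal sketch in Python; all speeds and the handling time are made-up example values, not AnyLogic defaults.

    def crane_travel_s(x_m, y_m, vx=2.0, vy=0.5):
        """Seconds for the crane to reach a slot x_m along the aisle and
        y_m up the rack, moving on both axes simultaneously."""
        return max(x_m / vx, y_m / vy)

    def single_command_cycle_s(x_m, y_m, handling_s=6.0):
        """Single-command cycle: out to the slot and back, plus pickup/deposit."""
        return 2.0 * crane_travel_s(x_m, y_m) + handling_s

    print(single_command_cycle_s(30.0, 8.0))  # slot 30 m down the aisle, 8 m up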

HOW do Unity World Anchors work on the HoloLens?

I'm currently building a HoloLens application and have a feature in-mind that requires holograms to be dynamically created, placed, and to persist between sessions. Those holograms don't need to be shared between devices.
I've had a nightmare trying to find (working) implementations and documentation for Unity WorldAnchors, with Azure Spatial Anchors seeming to stomp out most traces of them. Thankfully I've gotten past that and have managed to implement WorldAnchors by using the older HoloToolkit, since documentation for WorldAnchors in the newer MRTK also seems to have disappeared.
MY QUESTION (because I am unable to find any docs for it) is: how do WorldAnchors work?
I'd hazard a guess that it's based on spatial mapping, which presents the limitation that if you have 2 identical rooms or objects that move in the original room, the anchor/s is/are going to be lost.
What I'd LIKE to hear is that it's some magical management of transforms, which means my app has an understanding of its change in real-world location between uses even if the app is launched from a different location each time.
Does anybody know the answer or where I might look (beyond the limited Unity and MS Docs for this matter) to find out implementation details?
Thank you.
I'd hazard a guess that it's based on spatial mapping, which presents the limitation that if you have 2 identical rooms or objects that move in the original room, the anchor/s is/are going to be lost.
We won't divulge the internal implementation details of the WorldAnchor, but we can state that it is not based on GPS on either HoloLens v1 or HoloLens v2. Currently, the WorldAnchor uses the data in the spatial map for placement. The key underlying piece is that anchors rely on spatial scanning, and the scanning can use Wi-Fi to improve its speed and accuracy; see these two references: 1 & 2
What I'd LIKE to hear is that it's some magical management of transforms, which means my app has an understanding of its change in real-world location between uses even if the app is launched from a different location each time.
It is certainly possible for two identical rooms with the exact same layout to trick the mapping into thinking they are the same room. We document that here:
https://learn.microsoft.com/en-us/windows/mixed-reality/coordinate-systems#headset-tracks-incorrectly-due-to-identical-spaces-in-an-environment

HoloLens SpatialMapping (Unity3D)

I'm currently doing a project with Microsoft's HoloLens. The problem is that the HoloLens has limited memory, so I can only make a spatial mapping of a room, not of a building, because it can't remember the whole building. I had an idea: maybe I can create several objects and assemble them? But no one talks about this... Do you think it's possible?
Thanks for reading.
Y.P
Since you don’t have a compass, you could establish some convention to help. For example, you could start the scanning by giving a voice command (and stop it by another one), and decide to only start scanning when you’re facing north, for example. Then it would be easy to know the orientation of each room. What may be harder is to get the angle exactly right. Your head might be off by a few degrees and you may have to work some “magic” (post processing) to correct it.
Or placing QR codes on a wall (printer paper + scotch tape) and using something like Vuforia can help you avoid this orientation problem altogether (you would get the QR code’s orientation which would match that of the wall).
You can also simplify the scanned mesh and convert it to planes. That way you can remember simpler objects instead of the raw spatial mapping mesh. (Search for the SurfaceToPlanes script in the Holographic Academy tutorials).
Scanning, the first layer (the HoloLens continuously trying to reason about its environment), is an unstoppable process. There is no API for starting or stopping it, and as far as I know it also slowly consumes more and more memory. The only things you can do are deleting space (i.e., deleting holograms) or covering the sensors. But that's OS/hardware level, not app level, which is presumably what you want.
Layer two, which is probably what you are talking about, is starting and stopping the spatial reconstruction process, where that raw spatial data is processed into a low-poly mesh (aka spatial mapping). This process can be started and stopped, for example through Unity's SpatialMappingCollider and SpatialMappingRenderer components, if you use Unity.
Finally, the third layer is extracting objects/segments from that spatial mapping mesh into primitives, like that SurfaceToPlanes script. That step you can also fully control in terms of when it runs.
There has been great confusion, especially due to renaming sprees in the MixedRealityToolkit (overuse of the word "Scanning") and in Unity (SpatialAnchor to WorldAnchor, etc.), and due to misleading tutorials that use colloquialisms instead of crisp terminology.
Theory aside: if you want the HoloLens to treat your entire building as one continuous space at the first layer, you're out of luck. It was designed for a living room, and there is a lot of voodoo involved in making it work stably in facilities of 30x30 meters. You probably want to rely on disjoint "islands" with specific detection anchors to identify where you are, or rely on markers and coordinates relative to them.
Cheers

Water simulation with a grid

For a while I've been attempting to simulate flowing water with algorithms I've scavenged from "Real-Time Fluid Dynamics for Games". The trouble is that I don't get water-like behavior out of those algorithms.
My guess is that I'm doing something wrong, or that those algorithms aren't suitable for water-like fluids at all.
What am I doing wrong with these algorithms? Are these algorithms correct at all?
I have the associated project in a Bitbucket repository (it requires gletools and the newest pyglet to run).
Voxel-based solutions are fine for simulating liquids, and are frequently used in film.
Ron Fedkiw's website gives some academic examples - all of the ones there are based on a grid. That code underpins many of the simulations used by Pixar and ILM.
Another good source is Robert Bridson's Fluid Simulation course notes from SIGGRAPH, as well as his website. He has a book, "Fluid Simulation for Computer Graphics", that goes through developing a liquid simulator in detail.
The most specific answer I can give to your question is that Stam's real-time fluids for games are focused on smoke, i.e., on cases where there isn't a boundary between the fluid (water) and an external air region. Basically, smoke and liquids use the same underlying mechanism, but for a liquid you also need to track the position of the liquid surface and apply appropriate boundary conditions on that surface; a sketch of one common way to do the tracking follows.
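To make "track the position of the liquid surface" concrete, here is a minimal sketch of the classic marker-particle idea from MAC-style solvers (this is not from Stam's paper): particles mark which grid cells currently contain liquid, and the pressure solve then treats the remaining cells as air with p = 0, the free-surface boundary condition. Grid size, velocity field, and time step are illustrative placeholders.

    import numpy as np

    N, dt = 64, 0.1
    u = np.zeros((N, N))  # x-velocity per cell (simplified to cell centers)
    v = np.zeros((N, N))  # y-velocity per cell
    particles = np.random.rand(500, 2) * (N / 2)  # markers fill one corner

    def advect_particles(particles, u, v, dt):
        """Move each marker particle through the grid velocity field."""
        i = np.clip(particles[:, 0].astype(int), 0, N - 1)
        j = np.clip(particles[:, 1].astype(int), 0, N - 1)
        particles[:, 0] += dt * u[i, j]
        particles[:, 1] += dt * v[i, j]
        return np.clip(particles, 0.0, N - 1e-3)

    def fluid_mask(particles):
        """Cells holding at least one marker are 'fluid'; everything else is
        'air', where the pressure solve should enforce p = 0."""
        mask = np.zeros((N, N), dtype=bool)
        mask[particles[:, 0].astype(int), particles[:, 1].astype(int)] = True
        return mask

    particles = advect_particles(particles, u, v, dt)
    print(fluid_mask(particles).sum(), "fluid cells")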
Cem Yuksel presented a fantastic talk about his Wave Particles at SIGGRAPH 2007. They give a very realistic effect for quite a low cost. He was even able to simulate interaction with rigid bodies like boxes and boats. Another interesting aspect is that the boat motion isn't scripted, it's simulated via the propeller's interaction with the fluid.
At the conference he said he was planning to release the source code, but I haven't seen anything yet. His website contains the full paper and the videos he showed at the conference.
Edit: Just saw your comment about wanting to simulate flowing liquids rather than rippling pools. This wouldn't be suitable for that, but I'll leave it here in case someone else finds it useful.
What type of water are you trying to simulate? Pools of water that ripple, or flowing liquids?
I don't think I've ever seen convincing flowing water in real time, only in rendered movies. Rippling water is fairly easy to do; this site usually crops up in this type of question, and the sketch below shows the core of the technique.
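The rippling-pool technique such sites usually describe is the classic two-buffer heightfield: each cell's new height is twice the average of its neighbours' current heights minus its own previous height, then damped. A minimal sketch in Python with NumPy; the grid size and damping factor are arbitrary.

    import numpy as np

    N, DAMPING = 128, 0.99
    prev_h = np.zeros((N, N))
    curr_h = np.zeros((N, N))
    curr_h[N // 2, N // 2] = 1.0  # a single "raindrop" disturbance

    def step(curr, prev):
        """One ripple step: twice the neighbour average minus the old height."""
        new = np.zeros_like(curr)
        new[1:-1, 1:-1] = (curr[:-2, 1:-1] + curr[2:, 1:-1] +
                           curr[1:-1, :-2] + curr[1:-1, 2:]) / 2.0 \
                          - prev[1:-1, 1:-1]
        return new * DAMPING, curr  # (new current, new previous)

    for _ in range(100):
        curr_h, prev_h = step(curr_h, prev_h)
    print("peak height after 100 steps:", curr_h.max())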
Yeah, this type of voxel-based solution only really works if your liquid is confined to very discrete and static boundaries.
For simulating flowing liquid, do some investigation into particles. Quite a lot of progress has been made recently in accelerating them on the GPU, and you can get some stunning results.
Take a look at http://nzone.com/object/nzone_cascades_home.html for a great example of what can be achieved. A minimal particle sketch follows.
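To give a flavour of the particle approach, here is a minimal sketch of the two core SPH steps under standard textbook assumptions: density estimation with the poly6 kernel and pressure from a stiff equation of state. Real GPU implementations add neighbour grids, viscosity, and surface tension; all constants here are illustrative.

    import numpy as np

    H = 0.1                                # smoothing radius (m)
    POLY6 = 315.0 / (64.0 * np.pi * H**9)  # 3D poly6 kernel normalisation
    MASS, REST_DENSITY, STIFFNESS = 0.02, 1000.0, 200.0

    pos = np.random.rand(200, 3) * 0.5     # 200 particles in a half-unit box

    def densities(pos):
        """SPH density at each particle: kernel-weighted sum of neighbour masses."""
        diff = pos[:, None, :] - pos[None, :, :]
        r2 = np.einsum("ijk,ijk->ij", diff, diff)
        w = np.where(r2 < H * H, POLY6 * (H * H - r2) ** 3, 0.0)
        return MASS * w.sum(axis=1)

    def pressures(rho):
        """Stiff equation of state: pressure pushes density back toward rest."""
        return STIFFNESS * (rho - REST_DENSITY)

    rho = densities(pos)
    p = pressures(rho)
    print("density range:", rho.min(), rho.max())
    print("pressure range:", p.min(), p.max())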