3D Animation Performance Issues in AnyLogic

In my 3D animation in AnyLogic, the quality decreases when I zoom out; when I zoom in, it becomes fine again. Example shown in these images (a low-quality zoomed-out view vs. a normal zoomed-in view).
Are there settings that can prevent this quality decrease from happening?

I got a reply from AnyLogic support as follows:
Right now, we are working to improve the performance of 2D and 3D animation at run-time. In particular, in the latest version, the level of detail changes while zooming in the 3D window, i.e. objects that are further away are drawn with less detail than closer objects. On your screenshot, 3D objects look unacceptably bad at a relatively short distance, and there is currently nothing you can do about this. However, as far as I know, the next update will include an option to tune the distance at which the quality decreases.
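What support describes is standard level-of-detail (LOD) switching, where geometry is simplified beyond a distance threshold. AnyLogic does not expose that threshold yet; for comparison only, this is roughly how the equivalent knob is tuned in a game engine such as Unity (an illustrative sketch, not anything that applies to AnyLogic directly; the renderer fields are placeholders):

```csharp
using UnityEngine;

// Illustrative sketch: tuning the distance at which detail drops,
// using Unity's LODGroup. The renderer references are placeholders.
public class LodSetup : MonoBehaviour
{
    public Renderer highDetail;  // full mesh
    public Renderer lowDetail;   // simplified mesh

    void Start()
    {
        var lodGroup = gameObject.AddComponent<LODGroup>();
        // Each LOD entry gives the on-screen height fraction below which
        // the next (coarser) level kicks in. Lowering the second value
        // keeps the high-detail mesh visible at greater distances.
        var lods = new[]
        {
            new LOD(0.6f, new[] { highDetail }), // object large on screen
            new LOD(0.1f, new[] { lowDetail }),  // object far away / small
        };
        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```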

Related

Unity3D - Object will disappear when looking directly at it or too close

Both hands disappear in the Game and Scene views if I get close enough or look directly at them. Any solution? I'm using Unity URP, and it only happens when using animation rigging.
Make sure the bounding boxes (i.e. Bounds) of your SkinnedMeshRenderers are set to the correct dimensions. Culling decisions are made based on those boxes; if they are too small, the mesh may get culled even while it is still visible.
You can also check "Update When Off Screen" to have the bounds updated while animating. This does come at a slight performance cost, however, so see if it's suitable for you.
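For reference, a minimal sketch of both suggestions, assuming the script sits on the hand object; the bounds size of 2 units is an arbitrary placeholder:

```csharp
using UnityEngine;

// Hedged sketch of the two fixes above; attach to the object whose
// SkinnedMeshRenderer is being culled too aggressively.
public class FixHandCulling : MonoBehaviour
{
    void Start()
    {
        var skinned = GetComponent<SkinnedMeshRenderer>();

        // Option 1: recompute bounds every frame while animating
        // (equivalent to ticking "Update When Off Screen"; slight cost).
        skinned.updateWhenOffscreen = true;

        // Option 2: keep fixed bounds but make them generously large,
        // so culling never clips the animated mesh (size is a placeholder).
        skinned.localBounds = new Bounds(Vector3.zero, Vector3.one * 2f);
    }
}
```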

Models and meshes start distorting and wobbling - Unity

I'm making a simple space-simulator project, and the models and meshes start wobbling and distorting after certain speeds. The code has no errors, so I will give you a gameplay video instead.
As you can see in the gameplay video, even simple cylinders get distorted. Then I got an interesting bug/error.
Is there a solution to said problem?
Make sure you’re not too far away from the origin of the scene. Things get less precise the further away you go from the origin, and it’s to do with floating point precision.
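The usual workaround is a "floating origin": when the tracked object drifts too far from (0,0,0), shift the whole scene back so coordinates stay small. A minimal sketch under that assumption; the player reference and threshold are placeholders:

```csharp
using UnityEngine;

// Floating-origin sketch: re-center the world around the origin whenever
// the player ship gets too far away, so float precision stays high.
public class FloatingOrigin : MonoBehaviour
{
    public Transform player;          // assumed reference to the ship
    public float threshold = 5000f;   // distance before we re-center

    void LateUpdate()
    {
        Vector3 offset = player.position;
        if (offset.magnitude < threshold) return;

        // Move every root object (including the player) back by the
        // player's offset, keeping all relative positions intact.
        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
        {
            root.transform.position -= offset;
        }
    }
}
```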

AR Overlay Accuracy in Google Project Tango

I am experimenting with overlaying augmented reality objects over a pass-through image from the rear camera in Unity.
Has anyone experimented with overlaying objects with accurate tracking? I've tweaked the movement scale to get somewhat decent results but rotation is still not accurate and drift is a big issue.
I've had good luck with the augmented reality sample that ships with the latest Tango. In my experience it does work the way you speculated: if you add items to the Unity scene, they are synced to motion detected by the device.
I believe the tracking and syncing functions have improved since you originally asked this question, because I've noticed an improvement since I got my Tango devkit a month or so ago. There was an update a week or so later, with an immediate improvement.
I have found that some scenes track better than others; it seems to help to have additional scenery for it to track. In my workspace, a fairly cluttered apartment, it tracks well, but in the neighboring identical apartment unit, which is currently vacant and empty, it does not track as well. That could also be a product of the blinds hanging in my unit that are not hanging in the vacant unit, filtering out additional infrared.
I'm experimenting with placing 3D objects over the real-time input from the Tango color camera.
One problem here is that the hardware color camera points in a (strange) direction, and I wasn't able to get the direction vector from the API until now. Your virtual camera for rendering the scene needs this rotation to render 3D objects properly.
There are augmented reality examples in Tango's Unity plugin:
https://developers.google.com/tango/apis/unity/unity-simple-ar
They solve this problem with a matrix that rotates the 3D camera.
It can be found in the Unity script "TangoARPoseController" (C#), which, when attached to a Unity camera, rotates it so that it looks at the scene in the right direction. The matrix is obtained in the method "SetCameraExtrinsics" of that script.
Unfortunately, when I apply the matrix to my Unity scene it does not produce a perfect overlay (actually it's quite bad), but I have other sources of position input which may be the problem here.
However, so far I'm not sure whether the matrix used in the examples is good enough for accurate AR overlays. Maybe it is just suitable for demonstration purposes, but it should be a good starting point for further investigation.
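Conceptually, that script just composes the device pose with the fixed IMU-to-color-camera extrinsic rotation. A hedged sketch of the idea (not the actual Tango plugin code; the extrinsics matrix itself must come from the Tango API):

```csharp
using UnityEngine;

// Illustrative only: orient the Unity camera the way the physical color
// camera points, given a device-to-color-camera extrinsics matrix obtained
// elsewhere (e.g. in SetCameraExtrinsics). Call once per pose update.
public class ApplyCameraExtrinsics : MonoBehaviour
{
    public void Apply(Quaternion devicePose, Matrix4x4 deviceToColorCamera)
    {
        // Turn the rotational part of the extrinsics matrix into a quaternion.
        Quaternion extrinsicRotation = Quaternion.LookRotation(
            deviceToColorCamera.GetColumn(2),  // camera forward axis
            deviceToColorCamera.GetColumn(1)); // camera up axis

        // Compose: device orientation first, then the fixed camera offset.
        transform.rotation = devicePose * extrinsicRotation;
    }
}
```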
Are we talking about displaying the 'webcam' in the background, as opposed to a skybox?
Take a look at my GhostHunter repo. It includes a shader and a script for displaying the rear-facing camera 'behind' the gameplay objects (like a skybox). It should be usable with Tango, and it is better than the 'display on a mesh' technique I've seen others use.
https://github.com/NVentimiglia/Augmented-Reality-Ghost-Hunter

KLT Tracker for human tracking in CCTV

I am trying to use a KLT tracker for human tracking in CCTV footage. The people are very close to the CCTV camera. I noticed that people sometimes change the orientation of their heads, and also that the frame rate is slightly low. I have read in Section 3.4 of the paper by Rodriguez et al. that:
"This simple procedure (KLT tracking procedure) is extremely robust and can establish matches between head detections where the head has not been continuously detected due to pose variation or partial occlusions due to other members of the crowd."
The paper can be found at this link: Rodriguez et al.
1) I understood that the KLT tracker is robust to pose variations and occlusions. Am I right?
So far I have been trying to track one single person in the footage, using the MATLAB KLT implementation:
MATLAB KLT
However, the points were no longer being found after just 3 frames.
2) Can someone explain why this is happening, or suggest a better solution? Would a particle/Kalman filter work better?
I do not recommend using a KLT tracker for close CCTV cameras, for the following reasons:
1. CCTV frame rate is typically low, so people change their appearance significantly between frames.
2. Since the camera is close to the people, they also change their appearance over time due to perspective effects (e.g. the face can be seen when a person is far from the camera, but as he/she gets closer, only the top of the head is seen).
3. Due to this closeness, people also significantly change scale and aspect ratio, which is a challenge for some head detectors.
KLT only works well when the neighborhood of a pixel, including both foreground and background, remains similar. The above properties make this unlikely for most pixels. I can only recommend KLT as an additional motion-based hint for tracking, e.g. as a vector field of part motions.
Most single-person trackers do not adapt well to scale change. I suggest you start with a state-of-the-art tracker, like Struck (C++ code by Sam Hare, available here), and modify the search routine to work with scale change.
KLT by itself only works for short-term tracking. The problem is that you lose points because of tracking errors, 3D rotation, occlusion, or objects leaving the field of view. For long-term tracking you need some way of replenishing the points. In the multiple face tracking example, the new points are acquired by periodically re-detecting the faces; a skeleton of that loop is sketched below.
Your particular case sounds a little strange: you should not be losing all the points after just 3 frames. If this happens, then either the object is moving too fast or your frame rate is too low.
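A structural sketch of that detect-track-replenish loop, assuming hypothetical detectHeads/trackPoints helpers (the real ones would wrap a head detector and a KLT implementation such as OpenCV's or MATLAB's):

```csharp
using System.Collections.Generic;

// Long-term tracking skeleton: KLT drops points over time, so we
// periodically re-run the detector to replenish them.
public class LongTermTracker
{
    const int RedetectEvery = 10;   // re-run the detector every N frames
    const int MinPoints = 20;       // replenish when too few points survive

    // Placeholders for a real detector and a real KLT tracking step.
    public delegate List<(float X, float Y)> Detector(byte[] frame);
    public delegate List<(float X, float Y)> KltStep(
        byte[] prev, byte[] next, List<(float X, float Y)> points);

    public void Run(IEnumerable<byte[]> frames,
                    Detector detectHeads, KltStep trackPoints)
    {
        byte[] prev = null;
        var points = new List<(float X, float Y)>();
        int frameIndex = 0;

        foreach (byte[] frame in frames)
        {
            if (prev != null)
                points = trackPoints(prev, frame, points); // lost points drop out

            // Replenish: periodically, or whenever attrition leaves too few
            // points (a real implementation would also merge near-duplicates).
            if (frameIndex % RedetectEvery == 0 || points.Count < MinPoints)
                points.AddRange(detectHeads(frame));

            prev = frame;
            frameIndex++;
        }
    }
}
```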

How to move a game object according to the movement of a real-world object seen by the webcam in Unity?

I want to develop a tangram game in Unity using the concept of augmented reality. I want to make tangram figures using real tangrams in front of a webcam, matching the tangram figure on the screen. For that I want to place the game object with respect to the real tangram in the camera frame, and also update its position and angle accordingly. Please suggest a way to achieve this. Thanks in advance!
With difficulty.
If you want to do this without some sort of custom-built hardware controller on the real tangram, you will need some quite intricate image-processing techniques. The following are some vague steps and pointers to achieve what you want (a sketch of the first step follows the list). If there is a better option, I cannot think of it; this is very conceptual and by no means guaranteed to work, just how I would attempt the task if I really had to.
1. Use a Laplacian operator on the image to calculate the edges.
2. Use this, along with the average colour information in the pixels to the left/right and above/below each "edge" pixel (within a certain tolerance), to detect the individual shapes, corners, and relative positions, starting from the centre of the image.
3. Calculate the relative sizes of each shape and approximate the rotation using basic trigonometry.
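As an illustration of step 1, a self-contained sketch of a 3x3 Laplacian pass over a grayscale image stored as a float array (an assumed representation; a real project would use an image-processing library):

```csharp
using System;

// Minimal Laplacian edge detection over a grayscale image, where
// gray[y, x] holds per-pixel intensity. Border pixels are left at zero.
public static class EdgeDetection
{
    // 3x3 Laplacian kernel: responds strongly where intensity changes abruptly.
    static readonly int[,] Kernel =
    {
        { 0,  1, 0 },
        { 1, -4, 1 },
        { 0,  1, 0 }
    };

    public static float[,] Laplacian(float[,] gray)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        var edges = new float[h, w];

        for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
        {
            float sum = 0f;
            for (int ky = -1; ky <= 1; ky++)
            for (int kx = -1; kx <= 1; kx++)
                sum += Kernel[ky + 1, kx + 1] * gray[y + ky, x + kx];

            // Large magnitude = likely edge pixel.
            edges[y, x] = Math.Abs(sum);
        }
        return edges;
    }
}
```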
However, I can't help but feel that this is an incredibly large amount of work for such a concept, and computing this for every pixel could be intensive enough to make it truly not worth your time. Furthermore, it depends a lot on the quality of the camera used, and parallax errors would probably be nightmarish to resolve. Unless you are truly committed to this idea, I would either search for some pre-existing asset that does this for you or not undertake the project.