3D object error once the model starts running - AnyLogic

I am trying to use one of the 3D objects in AnyLogic. Once I drag the item in, it doesn't show the real shape of the object when the model runs; instead it shows the error in the screenshot below.

Check the resources section of your project to see if the 3D files are actually there... sometimes you move the .alp somewhere else and lose the location of the objects... or sometimes AnyLogic just makes our lives difficult.
If there is an issue with an element, you will see a red or grey circle next to it instead of a green one.

Related

Unity application to simulate a foveal field of view - but how?

For research purposes, I would like to create a Unity VR 3D application that (more or less) simulates the foveal field of view of a person. This means, in particular, I would like to render the whole environment of the application in the full field of view, but certain objects of interest I only want to render in the foveal area.
For the purpose of explaining the problem, I created a simple 2D picture. Please assume it's 3D. In the picture, the green area is the peripheral field of view, and the yellow area is the foveal field of view. The whole environment, like walls, sky, etc., should get rendered in both the green and the yellow areas. Particular objects of interest, here the flowers, however, should get rendered only in the yellow area and, importantly, these objects should get cut off when they reach the green area. With this approach, I want to force people to move their heads instead of just moving their eyes.
Any idea how to achieve this? Is it possible to use a kind of mask or filter? Or do I need a stencil shader? I looked around but could not find the correct approach.

Displaying ARKit nodes in relation to real objects

I am trying to draw a box that can help someone understand the dimensions of an item, but I keep running into the issue that, since I first need to recognize a plane and then put my physical item on top of it, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not having to first detect a plane, it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might be "solving" your original problem of helping someone understand the dimensions of an item.
(Depending on your choice of render engine, e.g. SceneKit) You can add an invisible 3D object that has the same dimensions as the real-world object, so the render engine will "know" that some parts of your box frame are behind this 3D object (which is invisible to the user). Therefore, you can tell it not to draw those parts of your box frame, which will give the illusion (borrowing from Apple here) that your soda can has the box around it.
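If SceneKit is your render engine, a minimal sketch of such an invisible occluder could look like the following (the helper name and box dimensions are placeholders for whatever you measure for the real object; colorBufferWriteMask needs iOS 11+, which ARKit requires anyway):

```swift
import SceneKit

// Sketch: a node that writes to the depth buffer but not the colour buffer,
// so it is invisible yet still hides any geometry rendered behind it.
func makeOccluderNode(width: CGFloat, height: CGFloat, length: CGFloat) -> SCNNode {
    let box = SCNBox(width: width, height: height, length: length, chamferRadius: 0)
    let material = SCNMaterial()
    material.colorBufferWriteMask = []   // depth only, no colour output
    box.materials = [material]
    let node = SCNNode(geometry: box)
    node.renderingOrder = -1             // render before the box frame so depth is written first
    return node
}
```

Place that node where the soda can sits (e.g. on the detected plane), and any part of the green box frame behind it will no longer be drawn.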
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: 1. After detecting the desk surface, place a semi-transparent 3D object over the soda can and resize it (gestures or buttons, your choice) until it roughly matches the soda can's dimensions. 2. Confirm that you're done, then stop drawing any texture on it at all and just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to create a "bounding shape" that (again) can be used to capture the real-world object and occlude that.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is a part of your soda can) and build up a "bounding shape" that way.
Option X: any combination of 1 - 2 - 3.
Good luck, there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.

Know when an object behind another object is fully visible

I have developed a scratch card effect. I am stuck on the logic: how can I know that the object behind the scratch card image has become visible, so that I can show the reward screen?
PS: with modifications to the code in this link, I was able to get this scratch card effect working in uGUI.
There are many ways you could go about this. Assuming you know the dimensions of the red "target image" that the user is trying to uncover, you could take a fixed number of samples from the area that the target is under. Once, say, 80% of those samples are transparent (i.e. the target is visible at those positions), you can consider the object visible and show the reward screen.
You can use GetPixel to get the individual samples from the scratch texture.

Getting Unity 2D Perspective Camera + World Space Canvas Image sorting to work

These are my current settings; let me know if you all need more information to help me solve this issue.
Camera : Perspective
Canvas Render Mode : World Space
In the picture below, the focus is on when the user wants to select a ship of their choice: they click where "Select V" is and see the "Fighter" ship image. The problem is that not only is the "Green Circle" in the way, but the dropdown itself is just not big enough.
In the second picture below you can see that scaling has been done to make the dropdown more visible, but it is now hidden behind another ship select box (ship2 in the Hierarchy).
I have tried making the Z coordinate larger/smaller, and even if I have it come closer to the camera it is still rendered behind the ship2 GameObject. I am at a total loss for ideas on how to approach this, and if anyone could shed some light on this that would be awesome!
Here are 2 more screenshots, just in case the first 2 images were not enough information to go on.
If I understood your question correctly, your UI is behind the ship but you want it to be above the ship. If that's the case read below, else leave a comment.
UI objects are rendered based on the order of the objects in the hierarchy, not their depth. The Unity UI is rendered from top to bottom in the hierarchy. Don't go changing the z-axis if you want to change the display order; it doesn't work like NGUI.
If you want an object to be displayed on top, it has to be put below the other object in the hierarchy, NOT on top.
If object A is on top of object B in the scene, the problem is in the hierarchy. Go to the hierarchy and put object B below object A if you want object B to be on top of object A.
Also, don't scale the UI the way you did in picture #2. Change the scale of shipRow1 back to 1,1,1 then use the Width and the Height properties to change its size.

Objective-C: Drawing a free-form shape on top of an image

I am working on an iOS 5 iPhone project where users can choose an image on their device, then trace an object inside of the picture (trace an apple out of a picture of a fruit basket), and then the picture needs to be uploaded with the "tagged" object so it can be pulled down later. Other people will then pull down the image and try to find where the tagged object is in the picture (think "Where's Waldo?").
I have been trying to figure out the best way of tracing the object. Before, I had a user press the top left, top right, bottom left, and bottom right points around the object and create a square view around the object. The info for that view was uploaded and then pulled down for the user to find the object. The downside is that all objects are obviously not squares/rectangles so I need to do a free form shape.
I was thinking of allowing the user to draw over the object, and then somehow I need to be able to tell what is inside of the trace (for example, inside of a circle that they traced). A problem I foresee is making sure the trace they made is closed so I can fill in the shape (which is a whole other problem).
Any advice welcome on the best way of starting this.
Thanks!
UIBezierPath might be a very useful friend here. It allows you to create any shape you need, and it supports both drawing and hit-detection. I recently did an implementation for a storybook where the user could trace out a shape with their finger, freeze it, and then use the shape for tap detection.
The basic idea is this:
When your finger touches the screen, start recording positions. Discard any positions that are too close to the previous position (e.g., only record a point if it is more than min-distance from the last recorded point).
While doing this, you can draw the UIBezierPath so you can see what you are tracing out. Modify the UIBezierPath by adding points to it, instead of recreating it every time.*
When you lift your finger, close the bezier path. Quite simple.
Now, this will result in a polygon (ie, straight edges). If your min-distance is low enough or if you are using it for hit-detection (as you say), it won't really matter. However, if you want to smooth the path, you have to use the curve-to methods, which slightly complicate it - but should you wish to follow up on this more, read up on splines and spline generation from a point series.
*note: otherwise you'll get lag while drawing large shapes because recreating a bezier path from an increasingly large series of points gets expensive. Modifying the existing path makes it much, much, much faster.
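As a rough illustration of that idea, here is a minimal sketch in Swift (the question is Objective-C, but the UIBezierPath API is the same; `TraceView` and `minDistance` are made-up names for this example):

```swift
import UIKit

// A minimal sketch of the tracing approach described above.
class TraceView: UIView {
    private let path = UIBezierPath()
    private var lastPoint: CGPoint?
    private let minDistance: CGFloat = 10   // ignore points closer than this to the last one

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        path.move(to: point)                // start recording positions
        lastPoint = point
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self), let last = lastPoint else { return }
        let dx = point.x - last.x, dy = point.y - last.y
        guard dx * dx + dy * dy > minDistance * minDistance else { return }   // discard close points
        path.addLine(to: point)             // modify the existing path instead of recreating it
        lastPoint = point
        setNeedsDisplay()
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        path.close()                        // close the shape when the finger lifts
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.red.setStroke()
        path.stroke()                       // draw the trace so the user can see it
    }

    // Later, hit-detection: is a tapped point inside the traced shape?
    func traceContains(_ point: CGPoint) -> Bool {
        return path.contains(point)
    }
}
```

The resulting path is a polygon of straight segments; if you want smoother edges, you would replace the addLine calls with curve segments as described above.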