In Unity, how can I get the coordinates of the position of this object? - unity3d

In Unity
When 3D objects are placed in reality, how can I get the coordinates of the position of this object?

This is a very unclear question; please be more precise.
If you want to move the object, then these videos might help: https://www.youtube.com/watch?v=tNtOcDryKv4
https://www.youtube.com/watch?v=9ZEu_I-ido4
If you want to see the coordinates of your object, select it and look at the Transform component in the Inspector.
If you want to set coordinates of an object in script, then look here: https://answers.unity.com/questions/935069/how-do-i-set-an-objects-coordinates-in-scripts.html
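For the script case, a minimal sketch (assuming the script is attached to the object in question):

```csharp
using UnityEngine;

public class PositionLogger : MonoBehaviour
{
    void Start()
    {
        // World-space coordinates of this object.
        Vector3 worldPos = transform.position;
        Debug.Log($"World position: {worldPos.x}, {worldPos.y}, {worldPos.z}");

        // Position relative to the parent (what the Inspector shows).
        Vector3 localPos = transform.localPosition;
        Debug.Log($"Local position: {localPos}");

        // Setting coordinates works the same way.
        transform.position = new Vector3(1f, 2f, 3f);
    }
}
```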

Related

In Unity, are there any ways to get the object a given screen pixel is rendering?

I'm trying to figure out how to get, in a shader, information about the object a given screen pixel is rendering.
I'm trying to make a 3D pixelation shader. This is done by:
1. getting the render texture from the camera, and
2. pixelating it using Shader Graph,
and it works fine.
I've also managed to make a pixel outline.
But the problem is that when two objects overlap, the outline gets drawn as if they were the same object.
I'm not exactly sure how to get around this, but my idea is to:
1. somehow get, in the shader, the object information for each pixel in the render texture, and
2. draw the outlines separately based on that info.
But even after days of research, I couldn't get it working.
If you have any documents or information about accessing objects in a shader, or just another way of doing this, I would be glad to hear it. Thanks.
This is what I've tried and considered so far:
1. Googling "Unity get access to object of pixel camera is rendering" (but I couldn't find anything useful).
2. Giving objects an outline before pixelating (it sort of works, but it is jittery).
3. Getting the object information from its depth value using the depth texture (it kind of works, but it's unstable: if two objects are close together, there's no way to distinguish them).
4. Getting the object information by casting a ray at every position a pixel will render. (But that would mean 100k+ raycasts and 100k+ GetComponent calls every frame, which would be expensive.)
First, read about deferred rendering; maybe it will help you find a nice solution.
Second, you can assign an ID to each object and render all objects into a render texture using that ID as a color. Then use that render texture in your shader to differentiate objects. Some bit logic and you'll get what you need.
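A hedged sketch of that ID-pass idea (the `Hidden/ObjectId` shader name and the `_IdColor` property are assumptions for illustration, not an existing Unity asset):

```csharp
using UnityEngine;

// Sketch: render every object into a second RenderTexture with a flat
// color derived from an ID, so the pixelation shader can tell objects apart.
public class ObjectIdPass : MonoBehaviour
{
    public Camera idCamera;          // duplicate of the main camera
    public RenderTexture idTexture;  // same resolution as the main render texture

    void Start()
    {
        idCamera.targetTexture = idTexture;

        // Assign a unique flat color per renderer via a MaterialPropertyBlock.
        Renderer[] renderers = FindObjectsOfType<Renderer>();
        for (int i = 0; i < renderers.Length; i++)
        {
            var block = new MaterialPropertyBlock();
            // Encode the ID into the red channel (up to 255 objects this way).
            block.SetColor("_IdColor", new Color((i + 1) / 255f, 0f, 0f, 1f));
            renderers[i].SetPropertyBlock(block);
        }

        // A minimal unlit shader that just outputs _IdColor does the ID pass.
        idCamera.SetReplacementShader(Shader.Find("Hidden/ObjectId"), null);
    }
}
```

The outline shader can then sample `idTexture` alongside the main render texture and draw an edge wherever the ID changes between neighboring pixels.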

How to set dynamic hotspots for a 360 image with Unity 3D

I am trying to build a visitor tour with Unity 3D. I have panoramic pictures of bedrooms within a hotel, and I would like to add points (hotspots) to my pictures that lead to another picture.
The problem is that I want to add these points dynamically via a backend, and I can't find a way to achieve that in Unity.
I will try to answer this question.
Unity has an XYZ coordinate system that can be mapped to the real world. I would measure the real distances to these points (from the center where you took your picture) in your location/room and send these coordinates via the backend to the Unity3D client.
In Unity you can create Vector3 positions or directions from the coordinates you sent. Use these positions/directions to instantiate 'hotspot' prefabs at the right positions and orientations. It might be necessary to adjust the scale/units to get the right result.
Once you have your 'hotspot' objects in place, add a script to them that loads a new scene (on click) with another location/image, and repeat the process.
This is a very brief suggestion on how to do it. The code would be quite simple.
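A minimal sketch of those two steps, assuming the backend plumbing already delivered the positions and target scene names (the class and field names here are made up for illustration):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: spawn hotspot prefabs at positions received from a backend.
public class HotspotSpawner : MonoBehaviour
{
    public GameObject hotspotPrefab;

    // Called once the backend has delivered the measured positions.
    public void Spawn(Vector3[] positions, string[] targetScenes)
    {
        for (int i = 0; i < positions.Length; i++)
        {
            GameObject spot = Instantiate(hotspotPrefab, positions[i], Quaternion.identity);
            // Face the hotspot toward the panorama's center, where the camera sits.
            spot.transform.LookAt(Vector3.zero);
            spot.AddComponent<HotspotClick>().targetScene = targetScenes[i];
        }
    }
}

public class HotspotClick : MonoBehaviour
{
    public string targetScene;

    // Requires a Collider on the hotspot prefab for OnMouseDown to fire.
    void OnMouseDown() => SceneManager.LoadScene(targetScene);
}
```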

Clickable and Rotatable models using Unity and ARToolkit

Can the 3D models used with the markers be made clickable and rotatable using the Unity3D engine and ARToolkit for Unity? Basically, in the AR desktop application we are making, we want to implement this functionality on the 3D models as well. Kindly help in this matter. Thank you.
The only issue you could run into is that the transform of the AR Tracked Object changes based on the visual tracking information.
If you simply use that as a base object and put any other models as children of that one, you can modify their position and rotation in the same way you would do for any other situation.
In a typical setup, the marker scene has nothing but an AR Tracked Object; this is the object that will be updated with the pose of the marker.
Then, the Cube is a child of this object. If you modify the localPosition or localRotation of the cube, it will work as you want.
Because the Cube is a child of the marker object, its global position and rotation will be a combination of its parent's position and rotation with its own local position and rotation (this is standard 3D engine / scene graph behaviour).
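As a sketch, rotating the child model with the mouse while the parent keeps tracking the marker could look like this (the drag speed and axis choice are arbitrary assumptions):

```csharp
using UnityEngine;

// Sketch: rotate a model that is a child of the AR Tracked Object.
// Only localRotation is touched, so marker tracking is unaffected.
public class RotateOnDrag : MonoBehaviour
{
    public float speed = 90f; // degrees per second at full drag

    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            float dx = Input.GetAxis("Mouse X");
            // Spin around the local up axis, relative to the tracked parent.
            transform.localRotation *=
                Quaternion.AngleAxis(-dx * speed * Time.deltaTime, Vector3.up);
        }
    }
}
```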

Finding pixel on sprite in Unity

In Unity3D we have RaycastHit.textureCoord, but it doesn't exist in 2D. I searched a lot about this problem, but I didn't find anything useful.
So I want to know the solution to this problem, and I'm wondering why a method like textureCoord exists in 3D but not in 2D, i.e. on RaycastHit2D.
I also want to access the pixel the mouse cursor is on.
It works in 3D because RaycastHit.textureCoord requires a mesh collider. In the 2D case it is much simpler, because you can calculate the position yourself: you know the sprite that was hit, the cursor position, and the size of the sprite.
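A sketch of that calculation, assuming an unrotated, unflipped sprite whose texture has Read/Write enabled in its import settings:

```csharp
using UnityEngine;

// Sketch: map a world-space point on a SpriteRenderer back to a texture pixel.
public class SpritePixelPicker : MonoBehaviour
{
    public Color GetPixelAt(SpriteRenderer renderer, Vector2 worldPoint)
    {
        Sprite sprite = renderer.sprite;

        // Convert the world point into the sprite's local space.
        Vector2 local = renderer.transform.InverseTransformPoint(worldPoint);

        // Offset from the sprite's lower-left corner, in texture pixels.
        Vector2 lowerLeft = (Vector2)sprite.bounds.min;
        Vector2 pixel = (local - lowerLeft) * sprite.pixelsPerUnit;

        // Account for the sprite's rectangle inside an atlas texture.
        int x = Mathf.FloorToInt(pixel.x) + (int)sprite.textureRect.x;
        int y = Mathf.FloorToInt(pixel.y) + (int)sprite.textureRect.y;

        return sprite.texture.GetPixel(x, y);
    }
}
```

The world point itself can come from `Camera.main.ScreenToWorldPoint(Input.mousePosition)` after a Physics2D raycast confirms which sprite was hit.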

OpenGL ES tiled object (cube?), with clickable tiles

I'm starting to study OpenGL, and I'm trying to make a 3D chess-like game, but I can't figure out how to know where I have clicked on the "table" in order to play the proper animations. Any advice?
This is called "3D picking". You have to translate screen coordinates into world coordinates. From there, do a ray/collision object (bounding box?) intersection test. If they intersect, that's where the user clicked.
You'll have to do a little bit more than this to solve the depth-order problem: find the time of first intersection for each object, then select the one with the lowest (positive) time.
If you google for "3D picking" you might find what you are looking for.
Here is a tutorial:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=32
Note that this is not specific to any shape of bounding object, be it a bounding box, a polygon, a curve, etc. You just have to figure out the math for the intersection test for each type of object you want to support.
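As an illustration, the test for an axis-aligned bounding box can be done with the standard slab method (plain C#, no engine dependencies; the names are mine):

```csharp
using System;

// Slab-method ray vs. axis-aligned bounding box intersection.
// Returns the entry time t (>= 0) if the ray hits, or null on a miss.
public static class Picking
{
    public static double? RayAabb(
        double[] origin, double[] dir,  // ray origin and direction (x, y, z)
        double[] min, double[] max)     // box corners
    {
        double tMin = 0.0, tMax = double.PositiveInfinity;
        for (int axis = 0; axis < 3; axis++)
        {
            double inv = 1.0 / dir[axis];
            double t0 = (min[axis] - origin[axis]) * inv;
            double t1 = (max[axis] - origin[axis]) * inv;
            if (t0 > t1) (t0, t1) = (t1, t0);
            tMin = Math.Max(tMin, t0);
            tMax = Math.Min(tMax, t1);
            if (tMin > tMax) return null; // slab intervals don't overlap: miss
        }
        return tMin;
    }
}
```

Run this against every candidate object's box and keep the smallest positive `t`; that object is the one under the cursor, which resolves the depth-order problem mentioned above.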
Edit:
I didn't read that tutorial before I linked it, I just figured NEHE is where all the cool kids learn OpenGL (admittedly ten years ago...).
Here is something from the OpenGL FAQ about picking:
http://www.opengl.org/resources/faq/technical/selection.htm
waldecir, look for a raypick function. That is the name for sending a ray from the scene's camera center through the pixel you clicked on (more precisely, through that pixel's position on the camera's plane, the "glass surface of the screen" in the 3D world) and returning the frontmost polygon the ray hits, together with some information, usually coordinates within the polygon's surface axes, e.g. UV or texture coordinates. By checking those coordinates, you can determine which square the user clicked on.
Rays can be sent from any position and in any direction, so likely you'd have to get the camera position and its plane center, but the documentation should be able to help you there.
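For comparison, in an engine like Unity the whole raypick described above is a few lines (the 8x8 grid mapping assumes the board texture spans the full UV range):

```csharp
using UnityEngine;

// Sketch: pick a board square by casting a ray through the clicked pixel.
public class BoardPicker : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Ray from the camera through the clicked screen pixel.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // textureCoord gives UVs on the surface (needs a MeshCollider);
                // map them onto an 8x8 chess board.
                int file = Mathf.FloorToInt(hit.textureCoord.x * 8);
                int rank = Mathf.FloorToInt(hit.textureCoord.y * 8);
                Debug.Log($"Clicked square {file}, {rank}");
            }
        }
    }
}
```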