I am messing with Unity's cameras for a school project. My plan was to change the way coordinates are projected onto the projection plane into a projection onto a sphere using spherical coordinates, but getting at the actual math behind the cameras has been a bit of a pain.
My first method involved messing with render textures, but in principle that won't work, because by the time I modify the texture the camera has already rendered it.
Next I tried to get into the code for the base camera, hoping to make a copy of the camera that I could modify without touching the original. Then I ran into this: the code that sets all of the camera's parameters for the editor, which references a few .h files.
Where can I access these files? I found files with the same names, but they are not related to Unity, and they also differ from each other, which makes me think the file above isn't referring to some industry standard, though it might be.
Unity is generally considered to be made up of two parts: the managed front end and the unmanaged back end. The front-end code (written in C#) can be studied on GitHub here. The unmanaged code (written in C++) is proprietary and isn't freely available.
Unity is fairly modifiable, but there are a number of rules you have to follow.
A camera workaround might be to work with the Scriptable Render Pipeline (e.g. URP). But I'm not sure this actually addresses what you're trying to achieve.
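If it's only the effect you're after (rather than access to the native projection code), one workaround is to render the scene into a cubemap every frame and remap it with spherical coordinates in a post shader. A minimal sketch, assuming the built-in render pipeline and a hypothetical "Hidden/SphericalLookup" shader that does the actual spherical sampling of _CubeTex:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch only: capture the scene into a cubemap, then let a post shader
// (hypothetical "Hidden/SphericalLookup") replace the flat projection with a
// spherical lookup. Built-in render pipeline assumed (URP/HDRP don't call OnRenderImage).
[RequireComponent(typeof(Camera))]
public class SphericalProjectionCamera : MonoBehaviour
{
    public Shader sphericalLookupShader;   // hypothetical shader sampling _CubeTex by view direction
    public int cubemapSize = 1024;

    Camera _captureCam;                    // hidden camera used only for the cubemap capture
    RenderTexture _cubeRT;
    Material _mat;

    void Start()
    {
        _cubeRT = new RenderTexture(cubemapSize, cubemapSize, 16);
        _cubeRT.dimension = TextureDimension.Cube;

        var go = new GameObject("CubemapCapture");
        go.transform.SetParent(transform, false);
        _captureCam = go.AddComponent<Camera>();
        _captureCam.CopyFrom(GetComponent<Camera>());
        _captureCam.enabled = false;       // rendered manually below

        _mat = new Material(sphericalLookupShader);
        _mat.SetTexture("_CubeTex", _cubeRT);
    }

    void LateUpdate()
    {
        // Capture all six faces from the camera's position before rendering.
        _captureCam.RenderToCubemap(_cubeRT);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Ignore the normally projected image and output the spherical remap instead.
        Graphics.Blit(src, dst, _mat);
    }
}
```

Under URP the same idea would live in a ScriptableRendererFeature rather than OnRenderImage.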
I've decided to use Minecraft-like characters in my small game, since I do not know how to make 3D models (nor do I want to learn how to in the near future).
However, the task now seems a little harder than expected:
I've tried looking in the asset store for prefabs to use but without success.
So I decided to try to make a model in Blender (not knowing a thing about non-parametric 3D modeling, my Blender knowledge is extremely limited) and import it into my Unity game.
Surprisingly, I managed to create the model using MCprep, export it, and import it into Unity while keeping the objects that drive the bones (the output is a bit messy, but I think I can clean it up a little).
However, the imported version does not have any skin and appears in a gray shade.
Turns out that the output does not keep materials/textures with it!
I checked the texture used by Blender and it is the same skin I fed into MCprep, so, using that same skin, I tried creating a material from the .png, using it as the texture in an Unlit/Texture material.
However, the result is a bit messy as shown here (left is Blender, right is Unity):
How can I make the texture look right in Unity? (I've heard there is a way using a C# script, but I really don't know what it is or how to do it.)
EDIT:
Thanks for the answers so far. I did set the Filter Mode to Point, and the texture looks a bit better. However, the part that should be transparent is displayed in black on top of the other part (I think).
The image on the right uses only Point filtering; the one on the left uses Point filtering plus "Alpha Is Transparency" and the Unlit/Transparent shader.
ANSWER FOUND:
As Bart said, make sure the textures' Filter Mode is set to Point, but additionally:
Minecraft player characters are made of two layers; the second layer usually has lots of transparency and is used for clothing or other relief detail, so you need to use a transparency-capable shader on your material in Unity.
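On the "C# script" part of the question: the usual approach is an AssetPostprocessor (the script must live in an Editor folder) that forces these import settings automatically. A hedged sketch; the "/MinecraftSkins/" path filter is just an example for this project:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: force pixel-art-friendly import settings on skin textures.
public class PixelArtTextureImporter : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        // Example filter: only touch textures under a "MinecraftSkins" folder.
        if (!assetPath.Contains("/MinecraftSkins/"))
            return;

        var importer = (TextureImporter)assetImporter;
        importer.filterMode = FilterMode.Point;  // no filtering, keeps the big pixels sharp
        importer.alphaIsTransparency = true;     // the second skin layer relies on transparency
        importer.mipmapEnabled = false;          // mipmaps would blur the pixels at a distance
    }
}
```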
You're running into a filtering issue. In your case you want no filtering to happen. So select your texture, and in the inspector change the import settings so that your "Filter Mode" is set to "Point". In this case it will do no filtering of the input and your large pixels should appear sharp as you want.
I am working on a projection mapping project and I am prototyping in Unity 3D. I have a cube-like object with a 3D terrain and characters in it.
To recreate the 3D perspective and feel, I am using two projectors which will project onto a real-world object that is exactly like the Unity object. In order to do this I need to extract 2D views from the shape in Unity.
Is there an easy way to achieve this ?
Interesting project. It sounds like you would need multiple displays, one for each projector, each fed by a separate virtual camera in Unity, as documented there.
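A minimal sketch of that setup, assuming the two projectors show up as separate displays and that you already have one camera framed for each projector (the camera fields are placeholders):

```csharp
using UnityEngine;

// Send one virtual camera to each physical display / projector.
public class ProjectorOutputs : MonoBehaviour
{
    public Camera projectorCamA;
    public Camera projectorCamB;

    void Start()
    {
        // Extra displays are inactive until activated (works in builds;
        // in the editor, use the Game view's display dropdown instead).
        for (int i = 1; i < Display.displays.Length; i++)
            Display.displays[i].Activate();

        projectorCamA.targetDisplay = 0;  // first projector
        projectorCamB.targetDisplay = 1;  // second projector
    }
}
```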
Not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube, onto which the rendered virtual scene should be projected, it would also be necessary to track a spectator's head/eyes to realize a convincing 3D effect. The virtual scene would need to be rendered from the matching point of view in virtual space (works for only one spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with stereo view, either by using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. With only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
As an inspiration: There's also this fantastic experiment by Johnny Chung Lee demonstrating a head tracking technique using the Wii Remote, that might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to use AR glasses instead of conventional projectors; these have the projector built in and use special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea if these devices are already on the market... However, I see the appeal of simple projection mapping without special equipment. In that case it might be possible to move away from a realistic 3D scene and use more experimental/abstract graphics projected onto the cube...)
Imagine a large free-roam game in Unity,
The yellow size indicates about the largest you can make a typical Terrain in Unity.
Art dept. will completely build, meter by meter, the entire scene.
Please note, this has absolutely no connection to repeating scenery (as in runners) or procedural scenery (as in say some race games).
Really, what is the correct and good way to do this in Unity?
use say 50 or so terrains, each perhaps 100m x 100m ?
can you even have or use that many terrains?
or what?
For anyone googling here: the correct solution is indeed terrain stitching, that's it. In practice you must use one of the tools available to do this (e.g. TerrainFormer), or your team will have to write a terrain stitcher from scratch.
Yes, you just use "many terrains". The best approach to the exact problem posed is in fact to just use lots of Terrains: it is perfectly viable in Unity to have many (dozens of) Terrain units basically sitting next to each other.
In practice, you'll need TerrainFormer
https://assetstore.unity.com/packages/tools/terrain/terrain-former-20052
or one of the similar tools. (Or, I suppose, write your own tool from scratch to stitch terrains, letting you manipulate them all at once, join the edge heights perfectly, and so on.) It's not realistically possible to sit many Terrains together perfectly by hand, matching all the edges, so you're going to need a "stitcher" package for putting many Terrain squares together.
So, this huge area ..
has about 12 Terrains.
So that's the answer: you can indeed have "many, many" Terrains in a Unity project, and you do essentially just sit them next to each other. In practice it's not achievable unless you use one of the editor tools such as TerrainFormer.
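For a sense of what a home-made stitcher actually does, here's a rough sketch for one shared edge using the standard TerrainData and SetNeighbors API; it assumes both Terrains have the same size and heightmap resolution, and a real tool like TerrainFormer does far more than this:

```csharp
using UnityEngine;

// Copy the boundary column of heights from one terrain onto its neighbour,
// then register the terrains as neighbours so LOD/normals match across the seam.
public static class TerrainStitchSketch
{
    public static void StitchLeftToRight(Terrain left, Terrain right)
    {
        int res = left.terrainData.heightmapResolution;

        // Last column of the left terrain's heightmap...
        float[,] edge = left.terrainData.GetHeights(res - 1, 0, 1, res);

        // ...becomes the first column of the right terrain's heightmap.
        right.terrainData.SetHeights(0, 0, edge);

        // Tell Unity the terrains sit next to each other.
        left.SetNeighbors(null, null, right, null);
        right.SetNeighbors(left, null, null, null);
    }
}
```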
The proper way to do this would have been procedural mesh generation with the MeshFilter and Mesh API, but you mentioned that this is not at all random or generative, so that option is eliminated.
It's just simply a very long, thin, hand-made environment
The best way in this case would be modular level design. You need to create modular assets; with these you can build a long road out of tileable pieces. A good modeling artist should be able to create and texture modular assets in 3D modeling packages like Blender, Maya, or 3ds Max.
All the programmer has to do is make each asset a prefab, then use the Instantiate function to spawn them and create any length of environment. You can also assemble a static environment in the scene from the Editor. Almost anything can be made into a modular asset, especially buildings and roads.
After assembling them in Unity, you can statically batch all the instantiated modular parts with Unity's StaticBatchingUtility.Combine to improve the game's performance, since they are not being moved.
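A hedged sketch of that workflow: instantiate a tileable road prefab end to end, then statically batch the spawned pieces; the prefab field and the 10 m tile length are assumptions for the example:

```csharp
using UnityEngine;

// Lay out a long road from a tileable prefab and static-batch the result.
public class ModularRoadBuilder : MonoBehaviour
{
    public GameObject roadTilePrefab;  // assumed tileable road piece
    public int tileCount = 200;
    public float tileLength = 10f;     // assumed length of one piece in meters

    void Start()
    {
        var tiles = new GameObject[tileCount];
        for (int i = 0; i < tileCount; i++)
        {
            tiles[i] = Instantiate(roadTilePrefab,
                                   transform.position + Vector3.forward * (i * tileLength),
                                   Quaternion.identity,
                                   transform);
        }

        // Combine the spawned pieces into static batches rooted at this object,
        // since they are never moved after placement.
        StaticBatchingUtility.Combine(tiles, gameObject);
    }
}
```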
Below is an example of a modular road asset that can be used to create almost any amount of road:
You already answered your question
in this case would it be better to not bother with Unity's otherwise excellent Terrain, and the modellers would just outright build the long course/scenery? (Obviously you'd have to chunk it so it all occludes fine)
I think that's the way to go. If performance is an issue, try putting each chunk in a different scene and then have a master scene load them asynchronously and additively. And of course you want to unload each scene as it becomes invisible to the camera.
Personally I'd go with your own solution, which is letting Unity's occlusion culling system take care of hiding and showing the chunks. I'd only go with the separate-scenes approach if I weren't getting enough performance that way.
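If you do end up on the separate-scenes route, a minimal sketch of the master-scene streamer could look like this; the scene names, chunk centers and 300 m radius are placeholders, and real code would also track chunks that are still loading:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Load/unload chunk scenes additively based on player distance.
public class ChunkStreamer : MonoBehaviour
{
    public Transform player;
    public string[] chunkScenes;    // e.g. "Chunk_00" ... "Chunk_49", added to Build Settings
    public Vector3[] chunkCenters;  // matching world-space centers
    public float loadRadius = 300f;

    void Update()
    {
        for (int i = 0; i < chunkScenes.Length; i++)
        {
            bool near = Vector3.Distance(player.position, chunkCenters[i]) < loadRadius;
            Scene scene = SceneManager.GetSceneByName(chunkScenes[i]);

            if (near && !scene.isLoaded)
                SceneManager.LoadSceneAsync(chunkScenes[i], LoadSceneMode.Additive);
            else if (!near && scene.isLoaded)
                SceneManager.UnloadSceneAsync(chunkScenes[i]);
        }
    }
}
```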
I recently had the same problem. We built a tile-based infinite runner with road crossings; the camera was positioned behind and above the car, looking down on the street and the player car, so the setup is quite comparable.
We used Curvy from the Asset Store to create paths for moving the player and also for generating the geometry of the streets and the surroundings along them.
https://assetstore.unity.com/packages/tools/level-design/curvy-splines-7038
It is also easy to spawn tiles with Curvy paths and combine them at runtime, so you can split long distances into smaller segments and spawn them randomly.
We also used QuickBrush from ProCore to quickly paint environment detail onto the geometry, like trees, bushes, or stones. I think the ProCore tools are now included in Unity 2018.
Worked quite well.
I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen some people doing AR-type examples where they have custom objects in an area to interact with; another example would be directions, where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting the data to place it within Unity in the correct location? I have an area set up and the AR demo seems promising, but I'm not placing objects with the tap of a finger. What I am looking for is that when people walk by, my 3D object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in - so I place things in the (single room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it does the 3D reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° on the Y axis because of how I am saving the .obj files).
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity content matches up with the Tango world space.
If you want to place objects programmatically, you can place them from scripts using Instantiate.
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects precisely... but having a good mesh loaded into your scene is really the nicest way I've found.
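To illustrate the record-then-place idea, a rough sketch that reads back saved marker positions and Instantiates a prefab at each one once the device has localised; the file name and the small Marker/MarkerList types are made up for the example:

```csharp
using UnityEngine;

// Spawn a prefab at every position recorded in an earlier marker-placing session.
public class SavedMarkerSpawner : MonoBehaviour
{
    [System.Serializable] public class Marker { public Vector3 position; public Vector3 eulerAngles; }
    [System.Serializable] public class MarkerList { public Marker[] markers; }

    public GameObject prefab;

    // Call this after Tango reports that it has localised against the ADF,
    // so the saved positions line up with the current world space.
    public void SpawnSavedMarkers()
    {
        string path = System.IO.Path.Combine(Application.persistentDataPath, "markers.json");
        if (!System.IO.File.Exists(path))
            return;

        var list = JsonUtility.FromJson<MarkerList>(System.IO.File.ReadAllText(path));
        foreach (var m in list.markers)
            Instantiate(prefab, m.position, Quaternion.Euler(m.eulerAngles));
    }
}
```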
I'm starting to use Maya to do some bone animations of a 2D character, which is composed of several different parts (legs, body, head, weapon, etc.). I have each part in a separate .PNG image file. Right now I have a polygon for each part, with its own material and texture:
I was wondering if there's a way to automatically combine the textures into an atlas, make all the polygons use the same material with the atlas, and correct their UV maps so they still point to the right part of the atlas. Right now, I can do it manually in reverse: I can make the atlas outside Maya with other tools, then use the atlas on a material and manually correct the UV maps of each polygon. But it's a very long process and if I need to change a texture, I have to do it all over again. So I was wondering if there's a way to automate it.
The reason why I'm trying to do this is to save draw calls in Unity. From what I understand, Unity can batch objects as long as they share the same material. So instead of having a draw call for each polygon in the character, I'd like to have a single draw call for the whole character. I'm pretty new to Maya, so any help would be greatly appreciated!
If you want to do the atlassing in Maya, you can do it by duplicating your mesh and using the Transfer Maps tool to bake all of the different meshes onto the duplicate as a single model. The steps would be:
1) Duplicate the mesh
2) Use UV layout to make sure that the duplicated mesh has no overlapping UVs (or only has them where appropriate, like mirrored pieces).
3) Use the Transfer Maps... tool to project the original mesh onto the new one, using the "Use topology" option to ensure that the projection is clean.
The end result should be that the new model has the same geometry and appearance as the original, but with all of its textures combined onto a single sheet attached to a single material.
The limitation of this method is that some kinds of mesh (particularly meshes that self-intersect) may not project properly, leaving you to manually touch up the atlassed texture. As with any atlassing solution, you will probably see some softening in the textures, since the atlas texture is not a pixel-for-pixel copy but rather a projection, and thus a resampling.
You may find it more flexible to reprocess the character in Unity with a script or AssetPostprocessor. Unity has a native texture atlassing function, documented here. Unity comes with a script for combining static meshes, but for skinned meshes you'd need to implement your own; there's an example on the Unity wiki that probably does what you want: http://wiki.unity3d.com/index.php?title=SkinnedMeshCombiner (caveat: we do something similar to this at work, but I can't share it, and I have not used the one in this link). FWIW, Unity's native atlassing works only in rectangles, so it's not as memory-efficient as something you could write yourself.
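To give an idea of the Unity-side route, here's a hedged sketch of the native rectangle-based atlassing with Texture2D.PackTextures plus a straightforward UV remap. It assumes the part textures are imported as readable and, for simplicity, works on MeshFilters; a skinned character would remap SkinnedMeshRenderer meshes instead (that's what the wiki script above handles):

```csharp
using UnityEngine;

// Pack each part's texture into one atlas and remap each part's UVs into
// the rect PackTextures returned, so everything can share one material.
public static class RuntimeAtlasSketch
{
    public static void AtlasParts(MeshFilter[] parts, Material sharedAtlasMaterial)
    {
        var textures = new Texture2D[parts.Length];
        for (int i = 0; i < parts.Length; i++)
            textures[i] = (Texture2D)parts[i].GetComponent<Renderer>().material.mainTexture;

        var atlas = new Texture2D(2048, 2048);
        Rect[] rects = atlas.PackTextures(textures, 2, 2048);  // one sub-rect per input texture
        sharedAtlasMaterial.mainTexture = atlas;

        for (int i = 0; i < parts.Length; i++)
        {
            Mesh mesh = parts[i].mesh;  // per-instance copy, so the shared asset is untouched
            Vector2[] uvs = mesh.uv;
            for (int v = 0; v < uvs.Length; v++)
                uvs[v] = new Vector2(rects[i].x + uvs[v].x * rects[i].width,
                                     rects[i].y + uvs[v].y * rects[i].height);
            mesh.uv = uvs;

            parts[i].GetComponent<Renderer>().material = sharedAtlasMaterial;
        }
    }
}
```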