Unity 5 - 2D game - separate scenes vs panels [closed] - unity3d

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I have tried reading posts like
http://forum.unity3d.com/threads/scenes-vs-canvases-vs-panels.279890/
http://forum.unity3d.com/threads/with-new-ui-separate-scenes-or-just-separate-panels.281013/
and others, and I still don't understand which approach is better:
scenes vs. panels.
Personally I think scenes are better; separate scenes feel like separate levels.
The game I am trying to write is similar to Candy Crush, and as I understand it they use scenes, because when you go from the map to a new match-3 game you see a loading screen, which I think is a separate scene.
Thank you

because when you go from the map to a new match-3 game you see a loading screen
Seeing a loading screen does not mean that the game is actually loading a scene. Anyone can put up a loading screen while doing something else, such as loading AssetBundles, without loading a scene at all.
Your question will only attract opinion-based answers, since you don't describe a concrete problem.
I have my own rules I use after experimenting with Unity.
If you are making a 3D game with color maps, normal maps, height maps and other textures, use scenes. Separating levels into their own scenes speeds up loading each level.
Now, if you are making a 2D game as simple as Candy Crush, which uses only a few Sprites and Images, simply use Canvases/Panels. Make Level 1 a parent Canvas GameObject. For Level 2, duplicate the Level 1 parent Canvas, rename it to Level 2, then make small changes to it. You can switch levels by enabling and disabling the parent Canvas GameObjects.
This makes modifying your 2D game very easy. You can compare what two levels look like just by enabling/disabling them, and you don't have to load another scene to get another level. Loading a scene also takes time; this approach eliminates that.
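A minimal sketch of the toggling described above, assuming each level is a parent Canvas GameObject assigned to an array in the Inspector (the class and field names are made up for the example):

```csharp
using UnityEngine;

// Each level is a parent Canvas GameObject; switching levels just
// enables one of them and disables the rest.
public class LevelSwitcher : MonoBehaviour
{
    public GameObject[] levelCanvases; // Level 1, Level 2, ... parent Canvases

    public void ShowLevel(int index)
    {
        for (int i = 0; i < levelCanvases.Length; i++)
            levelCanvases[i].SetActive(i == index); // only one level visible
    }
}
```

You would attach this to a manager object and call something like `ShowLevel(1)` when the player finishes level 1.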
Another advantage of this is that you can always convert the parent Canvas of each level into a prefab and import it into another scene if you later decide to use scenes instead.

Related

How can I read through multiple layers of a tilemap to determine what tiles exist at a clicked position in a script for Unity?

What's going on is: I want to detect which tile I'm clicking on, but I'm unsure how to do that when my tilemap consists of multiple layers. For example, with the way my script is currently set up, the ground-level 'island' tilemap can be passed into the script as the 'map' variable, but then I won't be able to tell whether I am clicking on the house, which is in a separate layer. I'm new to Unity, so I apologize if I'm explaining it badly, but basically I need a way to look through multiple layers of the tilemap to see what is being clicked on. In the future I want to implement some sort of system in which a tile can have a modifier sprite on top of it in a higher layer, so I would want to see the tiles in both layers; that is another reason I'm wondering if there's a way to cycle through those tiles.
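One way to approach this, sketched under the assumption that the script can hold an array of `Tilemap` layers instead of a single `map` variable (the names here are assumptions, not part of the asker's script):

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

// Checks every tilemap layer at the clicked position, topmost first,
// so a house or modifier tile is found before the ground tile beneath it.
public class TileClickProbe : MonoBehaviour
{
    public Tilemap[] layers; // assign island, houses, modifiers, ... in draw order

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Vector3 world = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            // Walk the layers from last (topmost) down to first (ground).
            for (int i = layers.Length - 1; i >= 0; i--)
            {
                Vector3Int cell = layers[i].WorldToCell(world);
                TileBase tile = layers[i].GetTile(cell);
                if (tile != null)
                    Debug.Log($"Layer {i}: {tile.name}");
            }
        }
    }
}
```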

How to prepare my game for VR? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Let's imagine we have some C++ OpenGL game. It uses our own engine for rendering (not Unity, not UE, etc.). Let's simplify the problem.
For example, we need to render a simple cube in VR mode. What should we do for that?
I know we need to split the screen into two parts. But what then? How do we calculate the rotation and distance for each part?
By VR I mean devices such as VR Box, Gear VR, Oculus Rift, etc.
All the major headsets have API documentation on their sites that explain how to integrate VR support into your engine. You should refer to that documentation for details. My experience is mostly with the Oculus SDK but other SDKs are similar.
You generally don't directly split the screen into two yourself - you provide images with left and right eye views to the SDK and the SDK performs warping for the lens optics and sends the outputs to the HMD display(s).
The SDK provides APIs to get the camera and viewport parameters you need to render each eye's view. With the Oculus SDK you also obtain your render targets for each eye view through API calls. You build view and projection matrices and set viewports for each eye view based on the information provided to you by the APIs for the HMD position, orientation, Field of View, target resolution, etc.
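As a rough, SDK-agnostic sketch of the per-eye camera setup: each eye gets the head's view matrix shifted sideways by half the interpupillary distance, plus a projection built from the FOV the SDK reports. All the numbers and names below are assumptions for illustration, not any SDK's actual API; a real SDK hands you poses, per-eye FOVs and target sizes through its own calls.

```csharp
using System.Numerics;

class EyeMatrices
{
    static void Main()
    {
        // Head pose (assumed values; an SDK would report these each frame).
        Matrix4x4 headView = Matrix4x4.CreateLookAt(
            new Vector3(0, 1.6f, 0),   // head position
            new Vector3(0, 1.6f, -1),  // looking down -Z
            Vector3.UnitY);

        float halfIpd = 0.032f; // half of a typical ~64 mm interpupillary distance

        // Each eye's view matrix is the head view shifted half the IPD sideways
        // (post-multiplied translation applies the offset in view space).
        Matrix4x4 leftView  = headView * Matrix4x4.CreateTranslation( halfIpd, 0, 0);
        Matrix4x4 rightView = headView * Matrix4x4.CreateTranslation(-halfIpd, 0, 0);

        // One projection per eye, built from the FOV the SDK reports.
        Matrix4x4 proj = Matrix4x4.CreatePerspectiveFieldOfView(
            1.7f,           // ~97 degrees vertical FOV in radians (assumed)
            1.0f,           // per-eye aspect ratio (assumed)
            0.1f, 1000f);   // near/far planes

        // Render the scene once with (leftView, proj) into the left eye target,
        // once with (rightView, proj) into the right, then submit both buffers
        // to the SDK at end of frame.
    }
}
```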
Rendering each eye is essentially the same as whatever you are already doing in your engine, except that you render twice (once per eye) using the camera and viewport information provided by the SDK, and you may wish to render a third view for display on the regular monitor. Since the left and right eye views are very similar, you may want to restructure parts of your engine for efficiency rather than naively render the entire scene twice, but that is not strictly necessary.
There will probably be a call at the end of a frame to tell the SDK you've finished rendering and to submit the completed eye buffers for display. Other than that, there's not much to it. Most of the challenge of VR rendering lies in achieving the required performance, not in integrating the SDKs, which are fairly simple on the display side of things.

How to maintain information between scene transitions? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I am working on a Temple Run-like game using this kit: https://www.assetstore.unity3d.com/#/content/3292. I want to insert a room with two doors. When my player enters the room it stops running and the user can control it with the arrow keys, and when it leaves the room through the back door it starts running again. What should I do when it collides with the door colliders?
I am doing this by switching scenes. I instantiate an empty GameObject prefab in GamePlayScene; when the player collides with it I load HouseScene, and when it collides with the back door (in HouseScene) I load GamePlayScene. But the game starts from the beginning. How can I resume the game from where I left off and keep track of the distance covered and coins collected? The same goes for HouseScene: how do I remember the points I achieved in it? Thanks.
You need to save the information that you want to keep between your scenes.
You have some possibilities about that:
1) Save your information in a text file and read it back when you load a new scene (but this way is a bit "dirty" and not recommended; the options below are better solutions);
2) Use PlayerPrefs. It provides getters and setters to store and retrieve data in platform-specific persistent storage (on Windows, the registry).
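A sketch of option 2 for the runner game above: persist the values before loading the next scene and read them back afterwards. The key and method names here are made up for the example.

```csharp
using UnityEngine;

// Saves and restores the runner's progress across scene loads via PlayerPrefs.
public static class ScoreStore
{
    public static void Save(int coins, float distance)
    {
        PlayerPrefs.SetInt("coins", coins);
        PlayerPrefs.SetFloat("distance", distance);
        PlayerPrefs.Save(); // flush to persistent storage
    }

    public static int LoadCoins() => PlayerPrefs.GetInt("coins", 0);
    public static float LoadDistance() => PlayerPrefs.GetFloat("distance", 0f);
}
```

You would call `ScoreStore.Save(...)` just before loading HouseScene, and the `Load...` methods when GamePlayScene starts again.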
3) Use an object (as a container) that holds all your "global" variables (the ones you want to maintain across scene transitions), and call DontDestroyOnLoad on it. That way, your container (with its data) will persist for the entire life cycle of the game.
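A sketch of option 3: a container object placed in the first scene that survives every scene load. The class name and fields are assumptions for illustration.

```csharp
using UnityEngine;

// A persistent container for cross-scene state. Put one instance in the
// first scene; it survives scene transitions via DontDestroyOnLoad.
public class GameSession : MonoBehaviour
{
    public static GameSession Instance { get; private set; }

    public int Coins;
    public float DistanceCovered;

    void Awake()
    {
        if (Instance != null) { Destroy(gameObject); return; } // keep a single copy
        Instance = this;
        DontDestroyOnLoad(gameObject); // survive scene loads
    }
}
```

Any script in any scene can then read or write `GameSession.Instance.Coins` and the values carry over between GamePlayScene and HouseScene.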

Unity3D Applying Texture - Model turns black [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am following a course to learn Unity3D.
Game Development for iPhone/iPad Using Unity iPhone Tutorials
http://www.vtc.com/products/Game-Development-for-iPhone-iPad-Using-Unity-iPhone-Tutorials.htm
I am following along with whatever the author does on the screen. He is using Unity 1.6 and I am using Unity 3.40f5.
When I try to apply the texture as he does in the movie, my model turns black. Is there something trivial that I am missing here?
Also find the screenshot attached.
What's happening in the movie with the author:
What's happening on my screen:
It's hard to tell from screen grabs, but your material looks correct, assuming it says "Bumped Diffuse" after Shader (I can't tell).
When you first drag your model into the scene, before applying a texture, it should self-shade. If it doesn't, you need to regenerate your model's normals: click on the model, then in the Inspector look for "Normals and Tangents". After Normals, choose "Calculate", then click "Apply" at the bottom and see what happens. I don't know your model type, but Unity has given me trouble in the past with Wavefront .obj files that predefine their normals.
The other possible issue is a faulty UV import. If the tutorial is from v1.6, it is possible the model included with the tutorial isn't importing correctly. I've had a similar issue where the UVs were all set to '0 0', so only the lowest-corner pixel of my texture was used. Unity can't do anything for you there. You can test this by creating a new material: set the shader to Diffuse, set the 'Base (RGB)' texture to 'none', and set 'Main Color' to something like blue, then apply it to your model. With no texture defined, the model should appear blue. If it appears blue with this material but black with a textured one, you likely have a UV import problem.
You may have created the material, but I think you did not actually apply it to the object.

Best process to show an OpenGL Animation in iPhone [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 1 year ago.
I'm having issues importing a bone and skeletal animation from Maya through Blender to the iPhone. Here's what I've done:
Installed the ColladaMaya plugin to export a DAE that Blender can import and re-export
Used Jeff LeMarche's script to export a single keyframe of the model and import the resulting .h file into the iPhone game
Set up the GLView using more of Jeff LeMarche's steps and rendered into our game, so the model displays next to the actual game (not in 3D)
Researched Oolong Engine, SIO2 (applied but haven't yet gotten an e-mail back from them), and other SO questions for solutions, including mine from Game Development
Reviewed using the FBX SDK content pipeline to dynamically generate class files for the animations.
I can import a model and display it. A lot of these resources address that issue and leave the developer to manipulate the game object programmatically.
My main issue is finding a well-defined process for importing an animation into an iPhone app alongside the existing game. I don't need a whole game, or a whole scene, just one animating model and some steps to follow.
This animation is meant to play in a loop. There are 3 more animations that will play on different game states (good move, bad move, etc.). So I'm concerned that LeMarche's keyframe solution (which basically means exporting EVERY keyframe as a .h file) will be incredibly time- and memory-intensive. I'm willing to do it, but after all the research I've done (additional links not included), I'm lost as to where to go next besides hand-exporting each keyframe and importing them.
EDIT:
I've added a bounty for anyone who can give me a clearly defined process for importing an animation from a 3D application into an iPhone app. NOT an entire application framework (i.e. Unity, SIO2, etc.), but just showing a 3D overlay in an application (like an animating model next to a Bejeweled-esque game, not interacting with the world).
People keep saying, "create your own model loader". Are there scripts, examples, tutorials, anything that walks through this "model loader" process, from EXPORTING from the 3D application (preferably Maya or Blender) to IMPORTING the animation and rendering it in Objective-C?
Animation export really is a big problem. I had this problem recently and ended up with Assimp.
However, it also has problems with skeletal animation exported from Maya and Blender. Personally I prefer 3ds Max (don't forget to reset the Xform before rigging); it has no problems with Collada and animation.
If you want to use those models in your game, though, I suggest writing a custom exporter for Maya or Blender. Also consider mesh (morph) animation: if you don't use inverse kinematics or something like that, it may be all you need.
I have written code that reads Blender files and parses the mesh, bone and animation data. It then applies the bone animations to the meshes world transforms etc. It's a lot of work, but certainly doable. If you have any questions about Blender peculiars, just ask me.
The other option is using a library like OgreKit, which can read blend files and does skeletal animation for you as well out of the box.
AnimKit is a small animation library that can read a .blend file and play an animation using Glut. It extracts all data for skeletal animation and more, including bones, mesh and animation channels etc from a Blender 2.5 .blend file.
AnimKit is open source (using the permissive zlib license) and you can check it out using Subversion:
svn co http://code.google.com/p/gamekit/source/browse/#svn%2Fbranches%2FAnimKit AnimKit
AnimKit doesn't run on iPhone yet, but I'll port it soon, using Oolong Engine.
Another interesting resource is Proton SDK (protonsdk.com); it has a customized Irrlicht SDK that runs on iPhone/Android etc. The RT3DApp sample can play an animation.