I'm kind of new to this whole thing but I'll get started by asking my first question ever.
So a friend and I are working on a project in which we need to track the player character's head movement and attach items to it. Now is this something that needs to be done in Java or would this be something that would be done in Unity?
Sorry if this is a stupid question but I've never really worked on a project of my own volition before.
Thanks,
BadAssWalrus
OK, so first things first: there is no Java in Unity. You can do scripting in C#, UnityScript (essentially JavaScript), or Boo.
Now to answer your question: as I understand it, you want some objects to move along with the character's head. Depending on how exactly you want them to move, you could either do it with scripting or simply make the object you are trying to move a child of the character's head.
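If you go the parenting route, a minimal C# sketch could look like the following (headBone and item are placeholder references you'd assign in the Inspector yourself):

```csharp
using UnityEngine;

// Minimal sketch: make an item follow the character's head by parenting it to the head bone.
public class AttachToHead : MonoBehaviour
{
    public Transform headBone;   // e.g. the head joint of your character's rig
    public Transform item;       // the object (hat, helmet, ...) that should follow the head

    void Start()
    {
        // Parenting makes the item inherit the head's position and rotation automatically.
        item.SetParent(headBone, worldPositionStays: false);
        item.localPosition = Vector3.zero;          // tweak these offsets to place the item
        item.localRotation = Quaternion.identity;
    }
}
```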
Unity has excellent documentation for the scripting API and a lot of tutorials. I suggest you get started by looking at one of those tutorials; you should get an idea of how things work pretty quickly, and then you should be able to do pretty much anything you want with relative ease.
I know it is possible to export Unity projects to WebGL, but is there a way to use code from an ImpactJS project in Unity?
I checked the web for a converter and found none, so unless I missed something, what you are trying to do is impossible.
There is always a way to convert. Sometimes it may require some manual fixing. Generally most languages are similar enough that you can automate many parts. The art assets are often re-usable, for example.
No. Unity converts its own framework and system of GameObjects, plus the C# code that you write, into WebGL code.
However, if you have computation and business code that doesn't do much with the canvas/UI, then that code can be copied over to some extent. For code that is purely computational or business logic, you can use an online tool to convert JavaScript into C# with some success.
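For instance, a pure-logic function usually ports over almost line for line. This example is made up (not from ImpactJS itself), just to show the kind of code that transfers easily:

```csharp
using System;

// Hypothetical port of a pure-logic JavaScript function:
//   function damage(base, armor) { return Math.max(1, base - armor * 0.5); }
public static class CombatRules
{
    // Same formula in C#; only the types and the Math call change.
    public static float Damage(float baseDamage, float armor)
    {
        return Math.Max(1f, baseDamage - armor * 0.5f);
    }
}
```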
My guess is that if most of your code dealt with the canvas, UI, and interaction, none of that code will be reusable.
We are developing a top-down RPG, so there's a question about NPCs. Making a lot of separate NPC FBX files would not be optimal. From what I know, people usually make 3-4 bodies and a bunch of heads, so the NPCs look different. Do you guys have any idea if there's an optimal way of doing that?
I tried modelling bodies and heads in separate files and then putting each head onto an empty object on top of the body (this empty object was part of the FBX itself). It didn't work: the head always displayed too high above the body, so it looked like it was hovering in the air.
I also thought about making the heads part of one big FBX with just one body. All the heads would sit at the same coordinates in space, so in Unity I could keep just one head active at a time. I still don't think that would be very optimal.
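Roughly what I mean, as a sketch (the names are just made up):

```csharp
using UnityEngine;

// One body with every head mesh as a child; only one head is active at a time.
public class HeadSelector : MonoBehaviour
{
    public GameObject[] heads;   // the head meshes imported together with the body

    public void SelectHead(int index)
    {
        for (int i = 0; i < heads.Length; i++)
            heads[i].SetActive(i == index);
    }
}
```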
So, is there a way people usually do this? I mean, this must be really common. They used it in Gothic for sure, and in many other games. So there must be an optimal way, but I'm unable to find it.
There are a few character customization packages available on the Asset Store that you could use. I recommend UMA2 (Unity Multipurpose Avatar). It's a great tool; I'm using it on one of my projects and it really gets the job done. The good thing is that it's free. Once you've imported it into your project, you can either buy content (clothes, hair, faces, ...) or create your own content, like I do, using Blender. There are tutorials about how to integrate it and create content on the Secret Anorak YouTube channel.
My question revolves around how to get behaviours described by a node-based editor to map to actions in-game. I've searched for several weeks about how to do this but can't find anything. If anyone could point me in the right direction, I'd be really grateful!
I don't know the best way to ask this question - I don't want to overload the question with what I've done, so I'll elaborate a bit below, as best I can:
I currently have a custom node-based editor I wrote for Unity, which I use to graphically edit the behaviour of bosses and NPCs. It looks like this (with the right-click menu shown):
I did find a way to get this behaviour working in-game, but it feels wonky and I'm wondering if there's a better way.
The way it works is that there are two objects to be serialized per node. I currently have these set up as ScriptableObjects, though if I were to do this again I'd roll my own kind of serialization. These two objects are:
The Node in the editor window, containing editor scripts
The associated in-game action, to be loaded by the NPC at runtime
This creates a bit of a mess in the file structure, as well as in the number of script files I need. On top of this, it's a bit tricky to manage cases where multiple actions are being performed simultaneously and I need to change the NPC's state (e.g. phase 1 to phase 2 of a boss fight), which sounds to me like a design issue.
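To make that concrete, the pair of assets per node currently looks roughly like this (simplified, with hypothetical names):

```csharp
using UnityEngine;

// The in-game action, loaded by the NPC at runtime.
public abstract class NpcAction : ScriptableObject
{
    public abstract void Execute(GameObject npc);
}

// The node shown in the editor window; it only references the action it produces
// and its connections to other nodes (the editor drawing code lives elsewhere).
public class BehaviourNode : ScriptableObject
{
    public NpcAction action;
    public BehaviourNode[] outputs;
}
```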
I'd be happy to share source code or more screenshots, if at all necessary. What I'm looking for is a structured way to approach something like this that also scales to other projects (e.g. NPC behaviour for an RTS).
I am looking for a solution for the effect used in Valve's Portal games and seen in the "Portalizer" Unity package. I have searched almost every related question on many forums, but none seem to offer the help I need. I have tried reworking the mirror-reflection script found on the wiki, but I have a lot of problems when the portals are out of the camera's view, and it never seems to work. I feel like I have spent hours on this, so if you know anywhere that details the logic behind this effect (camera matrices and so on) that would be great, or just a working block of code for me to look through.
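From what I've pieced together so far, the core step seems to be relocating a second camera so that it views the scene from "behind" the other portal, and rendering that to a texture on the portal surface. Here's roughly the step I'm trying to get right (just a sketch with my own names, not working portal code; a real implementation would presumably also need an oblique near-clip plane so geometry behind the far portal doesn't show up):

```csharp
using UnityEngine;

// Sketch: each frame, take the main camera's pose relative to the source portal,
// flip it 180 degrees, and re-apply it at the destination portal.
public class PortalViewSketch : MonoBehaviour
{
    public Transform sourcePortal;       // the portal surface the player looks at
    public Transform destinationPortal;  // the portal the secondary camera looks out of
    public Camera mainCamera;
    public Camera portalCamera;          // renders to a RenderTexture shown on sourcePortal

    void LateUpdate()
    {
        // Main camera's pose expressed in the source portal's local space.
        Vector3 localPos = sourcePortal.InverseTransformPoint(mainCamera.transform.position);
        Quaternion localRot = Quaternion.Inverse(sourcePortal.rotation) * mainCamera.transform.rotation;

        // Turn 180 degrees around the portal's up axis so we look out of the far side.
        Quaternion flip = Quaternion.AngleAxis(180f, Vector3.up);
        localPos = flip * localPos;
        localRot = flip * localRot;

        // Re-apply that relative pose at the destination portal.
        portalCamera.transform.position = destinationPortal.TransformPoint(localPos);
        portalCamera.transform.rotation = destinationPortal.rotation * localRot;
    }
}
```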
By the way, I'm not actually hoping to use it for portals; I'm attempting to make a room similar to Doctor Who's TARDIS, which is bigger on the inside, kind of mimicking the effect of non-Euclidean space. I'm also looking for something more advanced than a simple camera with render-to-texture. I'm using the Pro version.
Thanks in advance.
I've been using Ray's link to learn Cocos2D. Are there any other good links or tutorials I can use for development?
Any suggestions about game development?
Since you weren't too specific on what kind of links you wanted...
This is a bit philosophical but helped me stay focused on getting some simple games finished and polished rather than leaving them half done and moving to the next thing:
http://makegames.tumblr.com/post/1136623767/finishing-a-game
Here's a ton of links to all sorts of game related topics:
http://www-cs-students.stanford.edu/~amitp/gameprog.html
For cocos2d I'd suggest grabbing the full source code, opening up the cocos2d-ios workspace, and then compiling and running all the test applications. They'll let you see a bunch of cocos2d's capabilities and give you a starting point for answering those "How would I do X..." type questions. For example, after running the TileMapTest you'll know the different map types it supports (ortho, iso, etc.) and know that there's sample code you can look at to get them working.