For example, I want to design a gameplay mechanic that requires the user to blow into the microphone, and the app should be able to detect this behaviour.
Because I'm a newbie developer, I can only understand Swift rather than Objective-C, so I think a complete code example would be best.
I know it may seem unprofessional to ask such a direct question without showing any effort; if it comes across that way to anyone, I apologise in advance.
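For what it's worth, here is a minimal sketch of one common approach: record to a throwaway file with AVAudioRecorder, enable metering, and poll the input level on a timer. The -10 dB threshold and the 0.1 s polling interval are guesses that would need tuning on a real device.

    import AVFoundation

    class BlowDetector: NSObject {
        private var recorder: AVAudioRecorder?
        private var timer: Timer?

        func start() throws {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playAndRecord, mode: .default)
            try session.setActive(true)

            // Record to a throwaway file; we only care about the input level.
            let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("level.caf")
            let settings: [String: Any] = [
                AVFormatIDKey: kAudioFormatAppleIMA4,
                AVSampleRateKey: 44100.0,
                AVNumberOfChannelsKey: 1
            ]
            recorder = try AVAudioRecorder(url: url, settings: settings)
            recorder?.isMeteringEnabled = true
            recorder?.record()

            // Poll the microphone level ten times a second.
            timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
                guard let recorder = self?.recorder else { return }
                recorder.updateMeters()
                // averagePower is in decibels: roughly -160 (silence) up to 0.
                // Blowing produces a sustained loud level; tune the threshold.
                if recorder.averagePower(forChannel: 0) > -10 {
                    self?.blowDetected()
                }
            }
        }

        func blowDetected() {
            print("Blow detected!") // trigger your gameplay event here
        }
    }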
I have a completed Sprite Kit game that is solid on its own. However, I would really like to incorporate real-time multiplayer functionality into it. The only problem is that I have not been able to find any tutorials covering how to do so (raywenderlich.com has one, but it is in Objective-C and my game is in Swift).
I have read through Apple's documentation, but it really just covers the logic of what is happening and lists the pieces involved rather than actually showing how to implement the code.
I was wondering if someone could help me with how to actually go about implementing it. From what I can tell from my searches, it is a frequently requested topic, but there aren't any tutorials on this using Swift.
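There isn't a full Swift tutorial to point at here, but a rough sketch of the GameKit flow (authenticate, show Apple's matchmaker UI, then exchange Data over the GKMatch) looks something like the following. The position payload is a made-up example, and GKLocalPlayer.local assumes a recent SDK (older ones use GKLocalPlayer.localPlayer()).

    import GameKit
    import UIKit

    class MultiplayerManager: NSObject, GKMatchmakerViewControllerDelegate, GKMatchDelegate {
        var match: GKMatch?
        weak var presenter: UIViewController?

        // 1. Authenticate the local player (do this early, e.g. at launch).
        func authenticate() {
            GKLocalPlayer.local.authenticateHandler = { [weak self] viewController, error in
                if let vc = viewController {
                    self?.presenter?.present(vc, animated: true)
                }
            }
        }

        // 2. Show Apple's standard matchmaking UI for a two-player match.
        func findMatch() {
            let request = GKMatchRequest()
            request.minPlayers = 2
            request.maxPlayers = 2
            guard let mmvc = GKMatchmakerViewController(matchRequest: request) else { return }
            mmvc.matchmakerDelegate = self
            presenter?.present(mmvc, animated: true)
        }

        func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFind match: GKMatch) {
            viewController.dismiss(animated: true)
            self.match = match
            match.delegate = self   // start receiving data from the other player
        }

        func matchmakerViewControllerWasCancelled(_ viewController: GKMatchmakerViewController) {
            viewController.dismiss(animated: true)
        }

        func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFailWithError error: Error) {
            viewController.dismiss(animated: true)
        }

        // 3. Send game state to all peers (a made-up position payload).
        func send(position: CGPoint) {
            var state = [Float(position.x), Float(position.y)]
            let data = Data(bytes: &state, count: MemoryLayout<Float>.size * 2)
            try? match?.sendData(toAllPlayers: data, with: .reliable)
        }

        // 4. Apply incoming state from the other player to your SKScene.
        func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
            let floats = data.withUnsafeBytes { Array($0.bindMemory(to: Float.self)) }
            print("Received position:", floats) // move the remote player's sprite here
        }
    }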
I want to create an app where the user can import a 3D asset, say a house, and then paint a texture over it in Unity.
Then I remembered that, as far as my newbie knowledge goes, an object can only be marked static in the Unity editor before building, for baked light calculations.
I could not even figure out how to google this to find possible answers for such a use case: how could I deal with lighting for an object like that?
So far the only thing that comes to mind is to earn $100k from my game and buy a paid Unity licence so that I get reasonable realtime light calculations, but my newbie games will not earn that.
Maybe there is another way, or there are even existing tutorials that teach how to make such a trivial app. If you know of any, please share a link or some know-how.
Apologies if this question has been asked before, and apologies too if it is obvious to those with knowledge. I'm completely tech illiterate, especially when it comes to gaming, so bear with me!
I'm wondering whether it is possible to record gameplay (on any console/platform) but play it back in a 360°/VR format?
The use case is this:
I want to watch and follow a game, but rather than having a first-person PoV, I'd love to be able to use either a VR headset (most ideal) or a 360° viewer (tablet or smartphone) to move the perspective beyond the forward-facing field of vision.
Ideally the PoV would follow the players (think spectator mode) rather than being a static camera, although that's not necessarily a deal breaker.
Is this possible?
How would this be done with existing tools, or would new tools need to be developed?
Would it be 'recorded' client-side or server-side, and would this matter?
Huge thanks in advance. I'm also very happy to be pointed toward sources of information on this subject if they're readily available.
Thanks
S
You need to connect the game object (the character) in your game that has the camera to your VR display (wherever you are coding the display), and write code that takes the image that camera renders and continuously updates it on the display, making it seem like you are inside the game.
Look here: http://docs.unity3d.com/Manual/VROverview.html
I am trying to learn how to build a talking-puppet iPhone application. A great example is "Talking Ben the Dog"; here is the YouTube video. I have no idea how I am going to build such an application. I have a graphics designer who will do their part. As the programmer, what would I need to be aware of? If someone could share their ideas or point me to some relevant documentation or sample code, that would be a great help.
Thanks.
First, you'll need to create the content. That means the animation scenes and any associated audio. Next, you'll want to trigger those scenes based upon the user's input.
If you want more advanced functionality like "talk back", where the app repeats what you say, then you'll need to get to grips with the AudioQueue and AudioUnit APIs. That means detecting the level of the incoming audio and then writing the audio into stored buffers. These APIs are difficult, so this will be the most technically challenging part. You'll need to be comfortable with pointers and other lower-level programming concepts.
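To give a flavour of what's involved, here is a rough sketch of level detection and playback using the newer AVAudioEngine API, a higher-level alternative to the AudioQueue/AudioUnit C APIs mentioned above. The 0.02 RMS threshold is a guess to tune on a device.

    import AVFoundation

    class TalkBackRecorder {
        private let engine = AVAudioEngine()
        private let player = AVAudioPlayerNode()
        private var captured: [AVAudioPCMBuffer] = []

        func start() throws {
            let input = engine.inputNode
            let format = input.outputFormat(forBus: 0)
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: format)

            // Tap the microphone: measure each buffer's level and keep the
            // ones loud enough to count as speech.
            input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
                guard let samples = buffer.floatChannelData?[0] else { return }
                let n = Int(buffer.frameLength)
                var sum: Float = 0
                for i in 0..<n { sum += samples[i] * samples[i] }
                let rms = sqrt(sum / Float(n))   // root-mean-square level
                if rms > 0.02 {                  // crude speech threshold; tune on a device
                    self?.captured.append(buffer)
                }
            }
            try engine.start()
        }

        // Replay everything captured so far. A real puppet app would also
        // pitch-shift the audio (e.g. with AVAudioUnitTimePitch) for effect.
        func playBack() {
            for buffer in captured {
                player.scheduleBuffer(buffer, completionHandler: nil)
            }
            player.play()
            captured.removeAll()
        }
    }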
For an app without talk back, most of the work will be in creating the content. You'll then need to re-create the animations in your app using UIImage and the Core Animation framework.
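As an illustration, a frame sequence from your designer can be played back with a few lines of UIImageView. The scene and frame names here are hypothetical.

    import UIKit

    // Plays one hypothetical scene ("Ben answers the phone") from a set of
    // numbered frames (ben_phone_0.png ... ben_phone_11.png) made by your designer.
    func playPhoneScene(in imageView: UIImageView) {
        imageView.animationImages = (0..<12).compactMap { UIImage(named: "ben_phone_\($0)") }
        imageView.animationDuration = 1.0   // seconds for one pass through the frames
        imageView.animationRepeatCount = 1  // play the scene once, then stop
        imageView.startAnimating()
    }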
There are a lot of great videos and sample code on the Apple site. This will be a brilliant learning exercise for getting up to speed with Core Animation.
Just make a video for each scene and play the right one when its button is tapped!
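Something along these lines, assuming each scene is exported as a video file in the app bundle (the file name here is made up):

    import AVKit
    import UIKit

    class PuppetViewController: UIViewController {
        // Wire one of these up per scene button; "ben_phone.mp4" is a placeholder.
        @IBAction func phoneButtonTapped(_ sender: UIButton) {
            guard let url = Bundle.main.url(forResource: "ben_phone", withExtension: "mp4") else { return }
            let playerVC = AVPlayerViewController()
            playerVC.player = AVPlayer(url: url)
            present(playerVC, animated: true) {
                playerVC.player?.play()
            }
        }
    }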
I don't want a debate... I just want examples, because I thought the stuff Apple gives you is already pretty good. Are there any particular reasons that Cocos2D is better for game development?
All but the simplest games will most likely want to take advantage of OpenGL. Using the built-in frameworks for animations and graphics will be horribly slow unless you are only doing something simple like a puzzle game. The downside of OpenGL is that it is quite difficult for beginners, and the general feeling is that you need a lot of code to do even simple things (e.g. displaying a graphic on screen). You can think of Cocos2D as a friendly interface to OpenGL: it provides all (well, most) of the power, but it's very easy to get started and keep working with.
If you want to get into games, I definitely recommend considering the Cocos2D framework; the sketch below gives a taste of how little code it takes.
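Putting a sprite on screen in a scene looks roughly like this. This is a sketch assuming the cocos2d-swift 3.x bindings; exact initializer names vary between Cocos2D versions, and "hero.png" is a placeholder asset.

    import UIKit
    // Assumes the cocos2d-swift framework has been added to the project.

    class MainScene: CCScene {
        override init() {
            super.init()
            // One sprite on screen in a handful of lines; the equivalent raw
            // OpenGL setup (context, shaders, vertex buffers) would take far more.
            let hero = CCSprite(imageNamed: "hero.png")
            hero.position = CGPoint(x: 160, y: 240)
            addChild(hero)
        }
    }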