Best practices for implementing levels in cocos2d games - iphone

I'm making a simple cocos2d adventure game, but have no clue how to implement any sort of levels. I've searched for tutorials, but can't find any.
Is there anything I can use to figure out levels in cocos2D?
Thanks

There are so many ways to implement levels in a cocos2d game. I think a straightforward way is to:
Model your levels first. Decide what should be stored in a level's data model. Typically you will have at least two kinds of data:
Player data (Run-time generated, e.g. score, character's current location, etc.)
Level data (e.g. what's on the screen in this level, the rule to pass this level, etc.). This data could be either fixed or dynamic. If the levels are designed by the developer, as in Angry Birds, you can store this data in external configuration files and load them on demand; if the levels are generated dynamically according to some rules, then those rules should be stored in the data model. (See the sketch at the end of this answer.)
Design a general game-play layer that can be initialized from an instance of the data model above. The layer class controls the presentation of the level and is responsible for handling user input.
If your levels share some global data, you can create another shared data model to manage it (e.g. total score, achievements, player's name, etc.). Create a shared instance of this class and manage its data via your game-play layer.
You could also consider a more advanced approach, such as using a scripting language (e.g. Lua) to implement the levels.
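To make the data-model idea concrete, here is a minimal sketch (in Swift; cocos2d-iphone itself is Objective-C, but the shape is the same, and the type and file names here are made up for illustration). The fixed level data lives in a bundled JSON file and is loaded on demand, while run-time player data is kept in a separate type; a game-play layer would then be initialized with one LevelConfig instance:

    import CoreGraphics
    import Foundation

    // Fixed, designer-authored level data (hypothetical fields).
    struct LevelConfig: Decodable {
        let number: Int
        let targetScore: Int           // the rule to pass this level
        let enemyPositions: [[Double]] // what's on the screen in this level
    }

    // Run-time player data, kept separate from the static level data.
    struct PlayerState {
        var score = 0
        var position = CGPoint.zero
    }

    // Load a level's fixed data from a bundled file on demand,
    // e.g. level1.json, level2.json, ...
    func loadLevel(_ number: Int) throws -> LevelConfig {
        guard let url = Bundle.main.url(forResource: "level\(number)",
                                        withExtension: "json") else {
            throw CocoaError(.fileNoSuchFile)
        }
        let data = try Data(contentsOf: url)
        return try JSONDecoder().decode(LevelConfig.self, from: data)
    }

Dynamically generated levels would replace the file load with a generator function that produces a LevelConfig from its rules.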

You mentioned not being able to find any tutorials. I agree that finding free online tutorials for cocos2d can be challenging; I ran into the same problem when I started learning it. I recommend grabbing a book on cocos2d, such as Learning cocos2d. There is so much to the API that you will have a very hard time creating even a rudimentary game without any tutorials or guidance, unless you have a lot of prior programming experience.

Related

What is the proper workflow for creating city landscapes for use with RealityKit

I'm reading up on RealityKit and trying to create a city landscape.
I watched this video from Apple, which talks about Reality Composer, and downloaded the associated project:
https://developer.apple.com/videos/play/wwdc2019/605
My initial goal is to create a city street with tall buildings and a controllable character which can walk around the streets and perform tasks (character controlled by the user)
I've played with Reality Composer, but it doesn't seem like the tool for creating complex landscapes or characters for this use case (I could be wrong); it seems more like a prototyping tool for quick POCs.
I'm assuming that there are tools, such as Sketch, that can create and open .usdz files (I tried googling and searching, but nothing substantial came up).
What is the appropriate workflow for this type of app (game) development?
I would recommend one of two options:
A. Programmatically add and control models within the ARView (see the sketch below). This will require decent knowledge of Swift, a lot of looking around for examples, and reading the RealityKit docs.
B. Switch over to Unity. Unity would be a lot easier to work with and is designed for games (which is what it sounds like you want to do). A bonus is that your game/app will be cross-platform.
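To give a feel for option A, here is a minimal RealityKit sketch; the model name "building" is hypothetical, standing in for a .usdz asset bundled with the app:

    import RealityKit

    // Load a bundled .usdz model and place it on a detected
    // horizontal plane in the given ARView.
    func addBuilding(to arView: ARView) {
        do {
            let building = try ModelEntity.loadModel(named: "building")
            let anchor = AnchorEntity(plane: .horizontal)
            anchor.addChild(building)
            arView.scene.addAnchor(anchor)
        } catch {
            print("Failed to load model: \(error)")
        }
    }

A controllable character would be driven the same way, by updating entities' transforms from your own Swift code in response to user input.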

HOW do Unity World Anchors work on the HoloLens?

I'm currently building a HoloLens application and have a feature in-mind that requires holograms to be dynamically created, placed, and to persist between sessions. Those holograms don't need to be shared between devices.
I've had a nightmare trying to find (working) implementations and documentation for Unity WorldAnchors, with Azure Spatial Anchors seeming to stomp out most traces of them. Thankfully I've gotten past that and have managed to implement WorldAnchors by using the older HoloToolkit, since documentation for WorldAnchors in the newer MRTK seems to have disappeared as well.
MY QUESTION (because I am unable to find any docs for it) is: how do WorldAnchors work?
I'd hazard a guess that it's based on spatial mapping, which presents the limitation that if you have two identical rooms, or objects that move in the original room, the anchor(s) are going to be lost.
What I'd LIKE to hear is that it's some magical management of transforms, which means my app has an understanding of its change in real-world location between uses even if the app is launched from a different location each time.
Does anybody know the answer or where I might look (beyond the limited Unity and MS Docs for this matter) to find out implementation details?
Thank you.
I'd hazard a guess that it's based on spatial mapping, which presents the limitation that if you have two identical rooms, or objects that move in the original room, the anchor(s) are going to be lost.
We won't divulge the internal implementation details of the World Anchor, but we can state that it is not based on GPS on either HoloLens v1 or HoloLens v2. Currently, the World Anchor uses the data in the spatial map for placement. The key underlying piece is that anchors rely on spatial scanning, and the scanning can use Wi-Fi to improve its speed and accuracy; see these two references: 1 & 2
What I'd LIKE to hear is that it's some magical management of transforms, which means my app has an understanding of its change in real-world location between uses even if the app is launched from a different location each time.
It is certainly possible for two identical rooms with the exact same layout to trick the mapping into thinking they are the same room. We document that here:
https://learn.microsoft.com/en-us/windows/mixed-reality/coordinate-systems#headset-tracks-incorrectly-due-to-identical-spaces-in-an-environment

Unity3D GameObject Code Structure

I am messing around in Unity3D, making a 2D project. I want to create my own code architecture for Unity's component-based system.
As I don't want to create God-Controller scripts, and am more into separation-of-responsibilities solutions (having MVC and MVVM in mind), I am trying to find a good approach.
My first take looks like this:
GameObject is created from:
Unity Components - e.g. SpriteRenderer, Animator, Rigidbody2D
Controller - the only responsibility of this component is to handle Unity callbacks (like Update, FixedUpdate, OnCollision) and to execute functions on the model.
Models|Proxies - these components contain data and functions to manipulate the GameObject's Unity components, and dispatch events to the outside world.
I am wondering what you think about this approach, what your coding habits are in Unity3D projects, and what solutions have worked for you.
While I have learned and taught MVC and similar approaches, I find that when designing game architectures one usually has to be a bit more flexible. In Unity I generally take the following approach.
I will create a few GameObjects to hold the necessary global logic. This would be things like the overarching state machine, networking, and sometimes control input. If anything needs to persist between scenes it will go here. Each object typically has one component script for game logic and one for temp/debugging functions that gets turned off or removed when not needed.
If the project has fixed levels I will make each level a scene and I will store level layout and other level specific information in the scene. If I am doing a more procedural project I will create a "LevelGenerator" object with component scripts that build and populate the level at runtime.
If I am building a system that has lots of mostly independent agents (e.g. enemy creatures), I try to keep the game logic and necessary state information for each agent as close to it in the hierarchy as possible. For example, the agent's position and rotation are stored in its transform. I might store the agent's health, ammunition, speed, and current status effects, along with the functions for moving, shooting, healing, and death, in a component script on the agent's GameObject.
While there are countless other approaches that could work, I like this approach for a few reasons:
It saves me from having to manually manage tons of data access in a central script. If I need to know where all the monsters are, I can just keep a list of game objects rather than using custom data types.
When the agent gets destroyed all the local data goes with it. (No complex functions to clean up dead agents.)
From a game logic perspective (on the projects I typically work on) it usually makes sense that each agent would "know" about itself and not about everyone else.
I can still use all the OO goodies like polymorphism etc. when necessary.
This response likely says more about how I approach game design and software architecture than general best practices but it might be useful.
One note on encapsulation in Unity: every component script you add to a game object has a bit of overhead. If your scene has a couple of dozen agents in it, then this is not a big deal, and I would recommend trying to keep things as OO and modular as possible. If you are building a system with hundreds or thousands of active agents, cutting the components per agent from two to one can save quite a bit of frame time.
I use another approach.
I don't use many controllers attached to game objects. I just have some kind of GameController which creates other structures.
I have a separate project shared between games. This project contains design-pattern implementations and is built before the main project. I make wide use of the State, Observer, Builder, ObjectPool, etc. patterns to keep my code clear and simple.
Another reason I use this approach is performance optimization. I create objects once and then reuse them. I also do things like gameObject.GetComponent only once. When I need to create many objects from the same prefab, I use an ObjectPool to avoid repeated Instantiate/Destroy calls (a minimal sketch follows this answer).
My logical game objects (actors) communicate with each other using the Observer pattern. My single GameController just sends events like Awake and Update to the actors. Some objects have a StateController, which forwards events like Awake and Update to the object's current state. This is useful for separating the behavior of each object state.
I have a component-system architecture similar to Unity's, and I also have services, like InputService, that can be accessed from any object via a ServiceLocator.
There are more points, but the main idea is clear, easily maintainable code. This is difficult to achieve with the standard Unity controllers and the SendMessage approach.
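The ObjectPool part of this is language-agnostic, so here is a minimal sketch of the idea (in Swift rather than Unity's C#, with made-up names); the point is simply to reuse instances instead of repeatedly creating and destroying them:

    // A minimal generic object pool: acquire() reuses a free instance
    // when one exists, release() returns an instance for later reuse.
    final class ObjectPool<T> {
        private var free: [T] = []
        private let make: () -> T

        init(make: @escaping () -> T) {
            self.make = make
        }

        func acquire() -> T {
            // Reuse a released instance if possible, otherwise create one.
            return free.popLast() ?? make()
        }

        func release(_ object: T) {
            free.append(object)
        }
    }

    // Hypothetical usage with some Bullet type:
    // let bullets = ObjectPool { Bullet() }
    // let b = bullets.acquire()
    // ...when the bullet leaves the screen...
    // bullets.release(b)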

OpenGL ES and real world development

I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me). I have been reading around a fair bit, with lots of success at rendering basic models, some basic lighting, and 'some' texturing success too.
But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult.
How do people get 3D models from their favorite 3D modeling tool into an iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use. Is this what people do every time? Or do the "big" tooling suites (3DS, Maya, etc...) have exporting features?
Say I have my model in a nice .h file, with all the vertices, texture points, etc. lined up. How do I make my model (say, of a basic person) walk? Or, to be more general, how do you animate "part" of a model (legs only, turn head, etc...)? Does it need to be a massive mash-up of many different tiny models, or can you pre-bake animations "into" models these days (somehow)?
Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage that designer/developer workflow? Surely not all the animations, textures, etc... are done programmatically.
I hope these are not stupid questions. In actual fact, the app I'm trying to investigate how to make is really quite simple: just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on?
Thanks for any help you can give; I appreciate all types of responses, big or small :)
Cheers,
Mark
Let me try to explain why the answer to this question will always be vague.
OpenGL ES is very low level. It's basically all about pushing triangles to the screen and filling pixels, and nothing else.
What you need to create a game is, as you've realised, a lot of code for managing assets, loading objects and worlds, and managing animations, textures, sound, maybe networking, physics, etc.
These parts are the "game engine".
Development firms have their own preferences. Some buy their game engine, others like to develop their own. Most use some combination of bought tech, open source, and in-house built tech and tools. There are many engines on the market, and everyone has their own opinion on which is best...
Workflow and tools used vary a lot from large firms with strict roles and big budgets to small indie teams of a couple of guys and gals that do whatever is needed to get the game done :-)
For the hobbyist and indie dev, there are several cheap and open-source engines of varying maturity and amounts of documentation/support. There too, you have to look around until you find one you like.
On top of the game engine, you write your game code, which uses the game engine (and any other libraries you might need) to create whatever game it is you want to make.
Something many people are surprised by when starting OpenGL development is that there's no such thing as an "OpenGL file format" for models, let alone animated ones. (DirectX, for example, comes with a .x file format supported right away.) This is because OpenGL operates at a somewhat lower level. Of course, as tm1rbrt mentioned, there are plenty of libraries available. You can easily create your own file format, though, if you only need geometry (see the sketch at the end of this answer). Things get more complex when you also want to take animation and shading into account. Take a look at Collada for that sort of thing.
Again, animation can be done in several ways. Characters are often animated with skeletal animation. Have a look at the cal3d library as a starting point for this.
You definitely want to spend some time creating a good pipeline for your content creation. Artists must have a set of tools to create their models and animations and to test them in the game engine. Artists must also be instructed about the limits of the engine, both in terms of polygons and of shading. Sometimes complex custom editors are coded to create levels, worlds, etc. in a way compatible with your specific needs.
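As a concrete illustration of the first point, here is a sketch (in Swift, with made-up names) of how simple a geometry-only format can be: a loader for a tiny OBJ-like text format with only "v x y z" vertex lines and "f a b c" triangle lines (1-based indices). Real formats carry far more (normals, UVs, materials, animation), which is exactly where Collada-style formats come in:

    import Foundation

    // A mesh as plain arrays of positions and triangle indices.
    struct Mesh {
        var vertices: [(Float, Float, Float)] = []
        var triangles: [(Int, Int, Int)] = []
    }

    // Parse "v x y z" and "f a b c" lines; ignore everything else.
    func loadMesh(from text: String) -> Mesh {
        var mesh = Mesh()
        for line in text.split(separator: "\n") {
            let parts = line.split(separator: " ").map(String.init)
            guard parts.count == 4 else { continue }
            if parts[0] == "v",
               let x = Float(parts[1]), let y = Float(parts[2]), let z = Float(parts[3]) {
                mesh.vertices.append((x, y, z))
            } else if parts[0] == "f",
               let a = Int(parts[1]), let b = Int(parts[2]), let c = Int(parts[3]) {
                mesh.triangles.append((a - 1, b - 1, c - 1)) // to 0-based indices
            }
        }
        return mesh
    }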
Write or use a model-loading library, or use an existing graphics library; that will already have routines for loading models/textures.
Animating models is done with bones in the 3D model editor. The graphics library will take care of moving the vertices, etc., for you.
No, artists create art and programmers create engines.
This is a link to my favourite graphics engine.
Hope that helps

OpenGL render state management

I'm currently working on a small iPhone game, and am porting the 3d engine I've started to develop for the Mac to the iPhone. This is all going very well, and all functionality of the Mac engine is now present on the iPhone. The engine was by no means finished, but now at least I have basic resource management, a scene graph, and a construct for easily animating and moving objects around.
A screenshot of what I have now: http://emle.nl/forumpics/site/planes_grid.png. The little plane is a test object I've made several years ago for a game I was making then. It's not related to the game I'm developing now, but the 3d engine and its facilities are, of course.
Now I've come to the topic of materials: the description of which textures, lights, etc. belong to a renderable object. This means a lot of OpenGL client state and glEnable/glDisable calls for every object. What way would you suggest to minimise these state changes?
Currently I'm sorting by material, since objects with the same material don't need any changes at all. I've created a class called RenderState that caches the current OpenGL state and only applies the members that are different when a different material is selected. Is this a workable solution, or will it grow beyond control when the engine matures and more and more state needs to be cached?
A bit of advice: just write the code you need for your game. Don't spend time writing a generalised rendering engine, because it's more than likely you won't need it. If you end up writing another game, then extract the useful bits into an engine at that point. This will be way quicker.
If the number of states in OpenGL ES is as high as in the standard version, it will become difficult to manage at some point.
Also, if you really want to minimize state changes, you might need some kind of state-sorting concept, so that drawables with similar states are rendered together without needing a lot of glEnable/glDisable calls between them. However, this might be difficult to manage even on PC hardware (imagine state-sorting thousands of drawables), and blindly changing the state might actually be cheaper, depending on the OpenGL implementation.
For a comparison, here's the approach taken by OpenSceneGraph:
Basically, every node in the scene graph has its own stateset, which stores the material properties, states, etc. The nice thing is that statesets can be shared by multiple nodes. This way, the rendering backend can just sort the drawables by their stateset pointers (not by the contents of the statesets!) and render nodes with the same stateset together. This offers a nice trade-off, since the backend is not bothered with managing individual OpenGL states, yet can achieve nearly minimal state changing if the scene graph is generated accordingly.
What I suggest in your case is that you do a lot of testing before sticking with a solution. Whatever you do, I'm sure that you will need some kind of abstraction over the OpenGL state.
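As one sketch of what that abstraction can look like, here is the caching idea from the question reduced to its core (written in Swift for brevity; a real RenderState would track far more than enable/disable bits, e.g. bound textures and blend modes): only touch the GL when the requested value differs from the cached one.

    import OpenGLES

    // Cache capability bits and skip redundant glEnable/glDisable calls.
    final class RenderState {
        private var enabled: [GLenum: Bool] = [:]

        func setCapability(_ cap: GLenum, _ on: Bool) {
            // Already in the requested state: no GL call needed.
            if enabled[cap] == on { return }
            if on {
                glEnable(cap)
            } else {
                glDisable(cap)
            }
            enabled[cap] = on
        }
    }

    // Usage while drawing material-sorted objects:
    // state.setCapability(GLenum(GL_BLEND), material.isTransparent)
    // state.setCapability(GLenum(GL_DEPTH_TEST), true)

Combined with the material sorting you already do (or stateset-pointer sorting as in OpenSceneGraph), most objects then cost no state changes at all.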