I want to make a main GameObject in Unity3D that acts as a command center for anything related to controls, so that any component in the game that needs to know whether the player is trying to go right gets a notification or a state change and acts accordingly. I'm fairly new to programming and scripting in Unity; I know this is possible, but I don't know how to do it specifically with the engine.
A few concepts I came across are:
state changing techniques
Singleton God classes
Parent GameObject with children that send messages upwards and receive them downwards
Abstract classes that act as interfaces, whose implementations are collected and iterated over so the message can be called on each one
If you have any insights on this subject, that would be great. I basically want to find out what a good approach to this problem is, how to decide on one (pros vs. cons), or whether I've got it all wrong. :)
All I am looking for is an implementation example of your method of choice that worked for this purpose. No need to go into each separate method. Maybe a brief description of why you decided on your pattern would be great!
Thanks in advance!
Well, you may use C# events:
1) Create your CommandCenter class (a MonoBehaviour), implement the events you need to share there, and add some public methods to call them.
2) Whenever you handle a player button press, call the method in CommandCenter that raises the event (or you may just make the events public and invoke them directly).
3) In each component which should know about some event, do the following in its Awake method:
var commandCenter = FindObjectOfType<CommandCenter>();
commandCenter.MyEvent += localMyEventHandler;
4) Don't forget to unsubscribe from the events in OnDestroy:
void OnDestroy()
{
    var commandCenter = FindObjectOfType<CommandCenter>();
    commandCenter.MyEvent -= localMyEventHandler;
}
5) The localMyEventHandler method is where you handle the event, as shown in the sketch below.
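Putting the steps together, here is a minimal sketch of this approach, assuming a single CommandCenter in the scene. The MoveRight event and the PlayerMover subscriber are illustrative names, not anything Unity provides:

using System;
using UnityEngine;

public class CommandCenter : MonoBehaviour
{
    // Raised every frame the player holds the "move right" input.
    public event Action MoveRight;

    void Update()
    {
        if (Input.GetAxisRaw("Horizontal") > 0f)
            MoveRight?.Invoke();
    }
}

public class PlayerMover : MonoBehaviour
{
    CommandCenter commandCenter;

    void Awake()
    {
        commandCenter = FindObjectOfType<CommandCenter>();
        commandCenter.MoveRight += LocalMoveRightHandler;
    }

    void OnDestroy()
    {
        if (commandCenter != null)
            commandCenter.MoveRight -= LocalMoveRightHandler;
    }

    void LocalMoveRightHandler()
    {
        // React to the "go right" command however this component needs to.
        transform.Translate(Vector3.right * 5f * Time.deltaTime);
    }
}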
For my undergrad final project I want to develop an educational game for teaching basic programming, so I want to provide an easy drag-and-drop visual programming editor like the one in code-it, but I have no idea how to do this. I'm new to Unity and I did a lot of searching on Google but I didn't get it (I'm quite lost). So please, can anyone help me with this and give me a clue so I can build on it? Thank you for your help.
This is an example of my game design (I want to move the player by dragging and dropping blocks like move right, move up, move forward, ...). I hope my idea and question are clear.
A few months ago I developed a project very similar to yours. I recently extracted a library from that project and published it on GitHub. The library is called blockly-gamepad 🎮; it allows you to create the structure of a game with Blockly and to interact with it using methods such as play() or pause().
I believe this will greatly simplify the interaction between Blockly and Unity. If you are interested, in the documentation you can also find the live demo of a game.
blockly-gamepad 🎮
live demo
Here is a gif of the demo.
How it works
This is a different and simplified approach compared to the normal use of blockly.
First you have to define the blocks (see how to define them in the documentation). You don't have to define any code generators; everything that concerns code generation is handled by the library.
Each block generates a request.
// the request
{ method: 'TURN', args: ['RIGHT'] }
When a block is executed the corresponding request is passed to your game.
class Game {
    manageRequests(request) {
        // requests are passed here
        if (request.method == 'TURN')
            // animate your sprite
            turn(request.args)
    }
}
You can use promises to manage asynchronous animations.
class Game {
    async manageRequests(request) {
        if (request.method == 'TURN')
            await turn(request.args)
    }
}
The link between the blocks and your game is managed by the gamepad.
let gamepad = new Blockly.Gamepad(),
    game = new Game()
// requests will be passed here
gamepad.setGame(game, game.manageRequests)
The gamepad provides some methods to manage the blocks execution and consequently the requests generation.
// load the code from the blocks in the workspace
gamepad.load()
// reset the code loaded previously
gamepad.reset()
// the blocks are executed one after the other
gamepad.play()
// play in reverse
gamepad.play(true)
// the blocks execution is paused
gamepad.pause()
// toggle play
gamepad.togglePlay()
// load the next request
gamepad.forward()
// load the prior request
gamepad.backward()
// use a block as a breakpoint and play until it is reached
gamepad.debug(id)
You can read the full documentation here.
I hope I was helpful, and good luck with the project!
EDIT: I updated the name of the library, now it is called blockly-gamepad.
Your game looks cool!
For code-it, we are using Blockly as the code editor. The code is then executed by a Lua interpreter inside the game. You could do something simpler: make one block type for each action, like MoveForward etc., and have them generate function calls like SendMessage('gameInterfaceObject','MoveForward'). In the game you need an object that listens for such messages from the website.
Here's how your game talks to the website:
https://docs.unity3d.com/Manual/webgl-interactingwithbrowserscripting.html
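On the Unity side, the listener can be an ordinary MonoBehaviour whose method names match the block actions. Here is a minimal sketch, assuming a GameObject named gameInterfaceObject with this hypothetical GameInterface component attached; the browser-side call in the comment follows the interop pattern from the page linked above (the exact name of the JavaScript instance variable depends on your Unity version):

using UnityEngine;

public class GameInterface : MonoBehaviour
{
    public Transform player;   // assigned in the Inspector

    // Invoked from the web page, e.g.
    // unityInstance.SendMessage('gameInterfaceObject', 'MoveForward');
    public void MoveForward()
    {
        player.Translate(Vector3.forward);
    }

    public void MoveRight()
    {
        player.Translate(Vector3.right);
    }
}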
I'm just starting with Unity and got pretty excited when I saw that the Event System existed and that I could create custom events. The event I need is 'IInventoryMessage::NewItemInInventory', so I went ahead and created the interface for that and set it up on my Inventory.
Then it came time to trigger the event, and the documentation threw me a little.
ExecuteEvents.Execute<IInventoryMessage>(target, null, (x,y)=>x.NewItemInInventory());
My confusion is that it seems I have to pass in the target.
My hope was that Unity would keep track of all the components implementing the message's interface and call them when the event was executed. But it seems I have to pass in the GameObject myself.
Is it the case that I'm supposed to keep a list of all the GameObjects I want to receive the message, and then loop over them to pass them into Execute? Why do I need the EventSystem at that point, if I'm already looping over the objects I know need to be called?
I use ExecuteEvents only inside my custom input system, where the target is always known and up to date (according to the pointer raycast). Whenever I want to send a message or trigger an action when something has happened, I use standard C# events, as BugFinder said.
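For comparison, here is a minimal sketch of that inventory notification done with plain C# events; the Inventory and InventoryUI classes and the AddItem method are purely illustrative, not part of Unity's EventSystem:

using System;
using UnityEngine;

public class Inventory : MonoBehaviour
{
    // Raised whenever an item is added; interested components subscribe directly.
    public event Action<string> NewItemInInventory;

    public void AddItem(string itemName)
    {
        // ... store the item, then notify subscribers ...
        NewItemInInventory?.Invoke(itemName);
    }
}

public class InventoryUI : MonoBehaviour
{
    [SerializeField] Inventory inventory;   // reference set in the Inspector

    void OnEnable()  { inventory.NewItemInInventory += OnNewItem; }
    void OnDisable() { inventory.NewItemInInventory -= OnNewItem; }

    void OnNewItem(string itemName)
    {
        Debug.Log("New item in inventory: " + itemName);
    }
}

No central registry or looping is needed: each listener wires itself up, which is exactly what ExecuteEvents cannot give you without tracking the targets yourself.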
I have a background in Java but haven't been coding in years. Lately I've taken an interest in warming up my coding skills again and have chosen to create learning apps for my kids in Swift. I've created a basic Swift game using SpriteKit with a GameViewController and multiple scenes. However, I've run into a basic question related to passing basic data, such as points and lives counts, from the scenes to the GameViewController.
Back in the day, I would have done it by creating a static member that would hold the lives left and score count, but how is it done today in Swift? I understand that IoC would be the more modern equivalent of static members, but how about Swift, and how about this case?
Are there IoC frameworks for Swift, or do Stack Overflow users have proposals for a solution concept?
I would also use a separate GameState singleton class to manage all of that stuff, just to make sure there are no conflicts, and so you can access the data from anywhere. Here's a really good tutorial for that, and it should be a cinch to update for Swift: http://www.raywenderlich.com/63235/how-to-save-your-game-data-tutorial-part-1-of-2
To pass data between several classes, your variable has to be global. You have to declare your variable outside of any class.
For example:
var score:Int = 0
class GameScene {
//Your game here
}
This way, you will be able to use score in all other classes, such as your GameViewController.
Currently, I'm building a simple game using pure UIKit. In my game content MVC, I have a CADisplayLink timer object that is activated when the game starts and repeatedly calls the gameLogic method to display new objects. The method calculates and updates object positions on the screen. The frameInterval of my timer is 2 frames. Now, I need some way to set an interval for the repeated appearance of new objects on the screen. The solution I have used so far is a static counter in the gameLogic method. Here is a fragment of the method:
- (void)gameLogic {
    static int timeCounter = 0;
    // add a new object every 50th call of gameLogic
    if ((timeCounter % 50) == 0) {
        [self addItemToScreen];
    }
    ...
    timeCounter++;
}
At every 50th cycle of the gameLogic method's execution, I put a new object on screen. The rest of the gameLogic code repositions the existing objects. Currently, everything is managed by just one timer. The other solution, which IMHO should also work, is to have a separate timer for adding new objects to the screen. But I'm not sure whether it is a better solution, and whether the two timers will work well concurrently. What are your opinions about these approaches? What other solutions would you propose?
I think it's fine to have all your objects respond to a frame counter. However, I would likely dispatch a notification across your application instead of having one function that has a ton of code in it, doing all kinds of things. You really should consider breaking up all that logic into separate classes, i.e. have a GameObjectCreator class that responds to the notification posted when your timer fires, and have only that class add objects to the view. The code that rearranges the objects on screen could also respond to the same notification, either handled by a parent controller that manipulates all the objects, or with each individual object responding to the notification itself. Performance-wise I'm not sure whether using notifications is bad with a lot of objects responding at once, but something like this is closer to the approach I would take, so you don't have all this spaghetti code in one single timer function.
UIViews that don't handle their events pass them up the responder chain. By default, this passes them to their parent view and, if not handled, (ultimately) to their parent UIViewController.
UIScrollView breaks this (there are lots of questions on SO, variations on the theme of "why does my app stop working once I add a UIScrollView?").
UISV decides whether the event is for itself, and if not, it passes it DOWN (into its subviews); if they don't handle the event, UISV just throws it away. That's the bug.
In that case, it's supposed to throw them back up to its own parent view, and ultimately the parent UIViewController. AFAICT, this is why so many people get confused: it's not working as documented (NB: as views are documented; UISV is simply "undocumented" on this matter - it doesn't declare what it aims to do in this situation).
So ... is there an easy fix for this bug? Is there a category I could write that would fix UISV in general and save me from having to create "fake" UIView subclasses that exist purely to capture events and hand them to where they're supposed to go? (which makes for bug-prone code)
In particular, from Apple's docs:
If the timer fires without a significant change in position, the scroll view sends tracking events to the touched subview of the content view. If the user then drags their finger far enough before the timer elapses, the scroll view cancels any tracking in the subview and performs the scrolling itself.
...if I could override that "if the timer fires" method, and implement it correctly, I believe I could fix all my UISV instances.
But:
- would Apple consider this "using a private API"? (their description of "private" is nonsensical in normal programming terms, and I can't understand what they do and don't mean by it)
- does anyone know what this method is, or a good way to go about finding it? (debugging the compiled ObjC classes to find the symbol names, perhaps?)
I've found a partial answer that's correct, but not 100% usable :(.
iPhone OS 4.0 lets you remotely add listeners to a given view, via the UIGestureRecognizer class. That's great, and works neatly.
Only problem is ... it won't work on any 3.x iPhones and iPod Touches.
(but if you're targeting 4.0 and above, it's an easy way forward)
EDIT:
On OS 3.x, I created a custom UIView subclass that has extra properties:
NSObject *objectToDelegateToOnTouch;
id touchSourceIdentifier;
Whenever a touch comes in, the view sends the touch message directly to the objectToDelegateToOnTouch, but with the extra parameter of the touchSourceIdentifier.
This way, whenever you get a touch, you know where it came from (you can use an object, or a string, or anything you want as the "identifier").