Why do Unity game developers implement servers themselves instead of using Unity's built-in NetworkManager?
It seems like online multiplayer games can easily be developed using NetworkManager,
so why do developers implement their own servers with Node.js, etc.?
It's my understanding that it is not possible for two separate Unity projects to talk to each other using the Unity HLAPI (the High Level API, which is what NetworkManager uses).
That means that you would have to have all the server code, and all the client code, in the same project.
This doesn't have to be a problem, really, but for larger-scale projects, separating the server from the client can be easier to work with, and you don't push all the server code out to the clients, where it could be reverse engineered.
Let's say you have a round-based, non-realtime game like Scrabble. In this case it would make sense to use Node or plain HTTP, with a centralized server and a database for persisting game state.
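For illustration, a rough sketch of what the client side of that could look like in Unity, posting a finished turn over HTTP (the URL and field names here are made up, and UnityWebRequest requires a reasonably recent Unity version):

    // Sketch: posting a finished turn to a hypothetical HTTP endpoint.
    using System.Collections;
    using UnityEngine;
    using UnityEngine.Networking;

    public class TurnUploader : MonoBehaviour
    {
        IEnumerator SubmitMove(string gameId, string move)
        {
            var form = new WWWForm();
            form.AddField("gameId", gameId);   // placeholder field names
            form.AddField("move", move);

            using (UnityWebRequest request = UnityWebRequest.Post("https://example.com/api/move", form))
            {
                yield return request.SendWebRequest();
                if (request.isNetworkError || request.isHttpError)
                    Debug.LogError("Submit failed: " + request.error);
                else
                    Debug.Log("Server accepted the move: " + request.downloadHandler.text);
            }
        }
    }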
But I honestly have no clue why anyone would prefer raw sockets when you have the Unity TLAPI (Transport Layer API):
https://docs.unity3d.com/Manual/UNetUsingTransport.html
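For reference, opening a host with the Transport Layer API is only a few lines; the port and connection limit below are just example values:

    // Minimal Transport Layer API setup sketch (UnityEngine.Networking, since deprecated).
    using UnityEngine;
    using UnityEngine.Networking;

    public class TransportLayerHost : MonoBehaviour
    {
        void Start()
        {
            NetworkTransport.Init();

            var config = new ConnectionConfig();
            int reliableChannel = config.AddChannel(QosType.Reliable);

            // Allow up to 10 simultaneous connections on UDP port 8888 (example values).
            var topology = new HostTopology(config, 10);
            int hostId = NetworkTransport.AddHost(topology, 8888);
            Debug.Log("Host opened with id " + hostId + ", reliable channel " + reliableChannel);
        }
    }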
I hope this was somewhat helpful and not just a rant.
I am not able to decide whether to use PUN or Bolt in my Unity-based multiplayer game. The game must have both LAN and over-the-Internet play options.
According to the documentation on the Photon website, PUN is meant for multiplayer games over the Internet. The Master Server is hosted either in the Photon Cloud or on dedicated servers running Photon Server.
Bolt, on the other hand, is meant for LAN games. One of the clients becomes the server.
My game needs both LAN and Internet. Should I use both SDKs? Can't there be common code for both options?
EDIT:
UNET is now deprecated!
You can also go with the new networking from Unity (UNet). It has both LAN and Internet play (Internet needs port forwarding, but I think there should be an option for that nonetheless; quote me on this).
You can, however, take a look at the UNet manual to see if you like it:
https://docs.unity3d.com/Manual/UNet.html
I am currently working on a 3v1 game with it. It takes some time to understand, but you don't have to worry about payments or other such things.
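To give an idea of the scale of the API, here is a minimal host/join sketch with the UNet NetworkManager (the address and port are example values, and UNet has since been deprecated):

    // Minimal host/join sketch using the UNet NetworkManager.
    using UnityEngine;
    using UnityEngine.Networking;

    public class SimpleLobby : MonoBehaviour
    {
        public NetworkManager manager;   // assign the scene's NetworkManager in the Inspector

        public void Host()
        {
            manager.networkPort = 7777;   // example port
            manager.StartHost();          // this machine acts as server + local client
        }

        public void Join(string hostAddress)
        {
            manager.networkAddress = hostAddress;  // LAN IP, or public IP if port-forwarded
            manager.networkPort = 7777;
            manager.StartClient();
        }
    }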
(I don't have 50 reputation so I can only give an answer)
Photon Support replied,
1] Should I use both the SDKs?
Combining PUN + Bolt does not make sense. Bolt works fine for both LAN and online gameplay. Bolt is not based on UNET; it is written from the ground up. The Photon Cloud and dedicated servers can be used with Bolt as well. https://doc.photonengine.com/en-us/bolt/current/advanced-tutorials/headless-server
2] Should I write separate code for the online and LAN features?
No, this is not necessary - you "just" need to deal with the higher latency in online gameplay.
I would recommend Photon. It's pretty easy to use and it's really cheap. Photon servers are much cheaper than the UNET servers.
Both UNET and Photon have great documentation, lots of tutorials, and a big community, so you shouldn't have a problem with that whichever one you pick :)
Bolt works with "internet" games as well, if you prefer to differentiate games by Internet and LAN.
However, which one to choose depends heavily on your game itself.
If the game requires a secure central server to handle the game logic, you should go with Photon or a similar approach.
If the game is P2P, fast-paced, and doesn't care much (not meaning doesn't care at all) about security (avoiding cheating, speed hacks, or wall hacks, for example), then you should go with Bolt.
Photon made a comparison here:
https://doc.photonengine.com/en-us/pun/current/reference/pun-vs-bolt
The major difference to me is host migration: Bolt and UNet do not support host migration yet (UNet supports it on LAN but not over the Internet)...
If your game is really fast-paced (~30 s per game) and you don't care about host migration (the master client disconnecting), then PUN or your own dedicated server (which I wouldn't consider) are the only choices.
I have heard that if you want something done well, you should do it yourself. This does not mean that PUN, Bolt, or UNet do not work; on the contrary, they serve perfectly for the scenarios they were created for, but for your particular scenario they may not fit well. If you want LAN, Internet, a central server or a client acting as the server, plus algorithms that hide latency, such as interpolation and prediction, you must do it from scratch, creating your own framework; for that, use the Unity Networking API.
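As a taste of what "from scratch" involves, here is a minimal interpolation sketch for a remote player. The OnStateReceived hook is a made-up name for whatever your own network layer calls when a position update arrives:

    // Simple client-side interpolation for a remotely controlled object.
    using UnityEngine;

    public class RemotePlayerInterpolator : MonoBehaviour
    {
        public float smoothingSpeed = 10f;   // tune per game; higher = snappier, lower = smoother
        private Vector3 targetPosition;

        void Awake()
        {
            targetPosition = transform.position;
        }

        // Called by your own network code whenever a state update for this player arrives.
        public void OnStateReceived(Vector3 newPosition)
        {
            targetPosition = newPosition;
        }

        void Update()
        {
            // Move a fraction of the remaining distance each frame instead of snapping,
            // which hides the gaps between network updates.
            transform.position = Vector3.Lerp(transform.position, targetPosition, smoothingSpeed * Time.deltaTime);
        }
    }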
I have to make an online game as my project at university.
The game must have:
A server and a client for a turn-based game, implementing the basic principles and rules of the game.
The server keeps a list of connected clients, runs the game, and handles the processing and transfer of information. The server application has a text-based user interface and must support concurrent operation.
The client has a GUI.
I would like to use the Unity3D engine, but I don't know if this is feasible.
How do I make a console server for Unity?
Feasible? Yes.
Easy? Well, that depends on your approach, but it is completely possible; there are people who do it with Node.js.
But if your game is turn-based, then maybe you can improvise with a web server (UniServer, for example, or XAMPP, or some other Apache/MySQL setup) and, inside Unity, use the WWWForm class.
https://docs.unity3d.com/ScriptReference/WWWForm.html
It's a little cheaty, but it's maybe the easiest way for testing purposes (in no way should you use it for production of a serious game if you have any decent, fast, and secure approach available, which this is not).
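For testing purposes, the Unity side of that could be as small as the following sketch (the PHP endpoint and field names are placeholders):

    // Sketch of the "cheaty" approach: the Unity client POSTs a move to a PHP script
    // via WWWForm + the old WWW class.
    using System.Collections;
    using UnityEngine;

    public class MoveSender : MonoBehaviour
    {
        IEnumerator PostMove(int playerId, string move)
        {
            var form = new WWWForm();
            form.AddField("player", playerId);
            form.AddField("move", move);

            // submit_move.php (hypothetical) would write the move into the MySQL database
            // and return the updated game state as text/JSON.
            using (WWW www = new WWW("http://localhost/submit_move.php", form))
            {
                yield return www;
                if (string.IsNullOrEmpty(www.error))
                    Debug.Log("Game state: " + www.text);
                else
                    Debug.LogError("Request failed: " + www.error);
            }
        }
    }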
Cheers
I am interested in developing a client and a server for VoIP and IM communication, like Mumble/TeamSpeak/Skype/Raidcall, etc.
I would like to stay away from the hardcore stuff; my main goals regarding the client are:
Basic VoIP and Chat functionality
Custom emoticons in chat
A completely flat designed UI
Project lasts one month
So, in your opinion, is this a realistic and doable concept? If so, which language do you think would fit my goals above the best? And can you also point me in a certain direction? (Like, should I use XMPP, should I find a finished server and develop only the client, etc.)
I can code in C, C++ and Java at a university level.
Thanks in advance.
Maybe you should have a look at: www.freeradionetwork.eu
It's a prehistoric but nevertheless interesting project. They published their protocol in the hope that someone would write an API.
Do you want to stream through your server, or make it P2P?
Is there a distributed application framework (commercial is okay as well) that supports iPhone/iPad?
What I'm looking for in the framework:
Allows me to focus on the application logic
I don't have to code "low-level" network programming (I've done it so many times that I don't want to do it again =p)
Should be actively maintained (popular would be nice)
Basically, I can then develop faster.
We plan to develop a soft real-time TCP/IP client/server application where many iPhone/iPad clients (30+) are connected to a single server over a LAN. The server will most likely run Windows (unless the framework does not support it).
I've been looking around and I see:
MonoTouch WCF (still looks quite raw?)
RemObjects (Mono + Objective-C)
Cocoa Distributed Objects
ZeroC Ice Touch (Objective-C)
RakNet (? included because it mentions iPhone, but it would mean using C++)
Of course, there's also the option of using the plain old MonoTouch System.Net.Sockets
Or, CFNetwork (I don't plan to use this one)
I'm still deciding whether to use Objective-C or MonoTouch, but leaning towards MonoTouch since we will get the .NET framework, and not be tied into just the Mac world.
Please feel free to comment if I added anything that's not related to my question; I'm new to the iPhone/iPad world.
We've used WCF/MonoTouch with great success. There are some areas of the framework that aren't 100%, but for most cases you should find working with WCF on MonoTouch a breeze.
The ability to share all of our data sync, model, tests, etc. between MonoDroid, MonoTouch, and WP7 is seriously cool (with some work this is easily possible; you'll need to manage multiple project files).
Be careful to manage calls to the WCF services correctly: keep them to a minimum and keep the architecture simple. We ended up with a fairly complex DTO to minimise the number of calls to the WCF services needed to sync the data; this was well worth it, as the time needed to sync a device from scratch is now a fraction of what it was.
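As a sketch of what we mean by a coarse-grained sync call (all names here are illustrative, not our actual contracts):

    // One coarse-grained WCF sync call returning a single DTO, instead of many small calls.
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class SyncResultDto
    {
        [DataMember] public List<string> Customers { get; set; }
        [DataMember] public List<string> Orders { get; set; }
        [DataMember] public long ServerTimestamp { get; set; }
    }

    [ServiceContract]
    public interface ISyncService
    {
        // One round trip brings down everything changed since the client's last sync.
        [OperationContract]
        SyncResultDto GetChangesSince(long clientTimestamp);
    }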
Using SSL to communicate with the server is a PITA, but I think that's more down to the way Apple have handled it.
You need to be a bit more explicit about your requirements. If you need only object serialization (dehydration/hydration) over a REST API, then anything that supports POX or JSON will work just fine for you. However, if you need RPC-style method invocation with authentication, encryption/digital signatures, transactions, etc., then you need one of those frameworks you listed above.
If you need a framework, I personally would lean towards MonoTouch WCF, as it gives you the ability to move your client to other platforms later as well (Windows Phone 7, for example). Then again, as you said, it's a bit rough right now, and if the Mono team decides in the future that they don't have the resources to invest in maintaining it, you might end up having to move to another framework. Of course, there's also the drawback that you need to use MonoTouch for your application and can't use Objective-C. Granted, with the recent changes in the iOS Developer Agreement, that's not that much of an issue, but it is still something to keep in mind.
(Disclaimer: I used to work on Microsoft's WCF team, so I am biased towards the product itself)
The other option I would go for would be Cocoa Distributed Objects. However, that would be my choice only if the server is also running on OS X. I know there's Bonjour for Windows, but I doubt it's optimized for server scenarios, and I also don't know how rich Apple's RPC implementation on top of it is for the Windows platform. So I would stay with Apple's technology only if I am building exclusively for Apple's platform.
Note that WCF and Distributed Objects would give you RPC-style functionality, but they won't help you with any particular scenarios. If you need/want even higher level of abstraction, for example you need presence information or multi-user chat, you will still need to implement those yourself. It might be worth at this point to look at frameworks that provide those features for you. An example would be RakNet (which you listed above), which abstracts the remoting level and builds additional features on top of it.
You can use JSON Touch + the Vitche PHP Emission Framework, which provides all the server side you need. You can also use that framework to access existing SOAP (WCF, Axis, etc.) services.
You can use Google Protocol Buffers to implement RPC, though you will need to do some network programming to transport your messages anyway. It supports code generation for C++, Java, Python, Objective-C, and .NET, so you can create a single set of RPC messages and get code for working with them on almost any mobile platform. The transport layer on your mobile platforms you will have to implement yourself.
http://code.google.com/apis/protocolbuffers/ - main Protobuf page (C++, Java, Python)
http://code.google.com/p/protobuf-net/ - Protobuf .NET mentioned in one of the comments
http://code.google.com/p/metasyntactic/wiki/ProtocolBuffers - Protobuf for Obj-C
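As a rough sketch of what the protobuf-net side could look like (the message type and fields are invented for the example):

    // Defining and serialising a message with protobuf-net; the bytes still have to be
    // moved over the wire by your own transport (sockets, HTTP, etc.).
    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    public class LoginRequest
    {
        [ProtoMember(1)] public string UserName { get; set; }
        [ProtoMember(2)] public string PasswordHash { get; set; }
    }

    public static class Wire
    {
        public static byte[] Encode(LoginRequest request)
        {
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, request);
                return ms.ToArray();
            }
        }

        public static LoginRequest Decode(byte[] data)
        {
            using (var ms = new MemoryStream(data))
            {
                return Serializer.Deserialize<LoginRequest>(ms);
            }
        }
    }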
I want to try developing an XMPP server component using XEP-0114: Jabber Component Protocol.
Which server do you recommend and why? I'm talking about ease of development, community support, documentation, examples, etc.
That's a hard question to answer, because I doubt there are many developers involved in developing across multiple XMPP projects and languages.
I can throw out a few personal perceptions but... I could be off-base!
What you're really looking for is which libraries would be recommended for component development. All the servers support the component protocol, so all you really need is a socket connection to the server and some helper routines to make the repetitive stuff like message parsing easier.
Where the server might matter is if you need tighter integration.
For example if you want your component to scale the same way as Ejabberd then you'll probably want to use exmpp.
If you need to deploy your component alongside Openfire into Java only enterprises, then you'll probably want to use smack.
If you are familiar with Python and want to prototype quickly use Wokkel.
I don't think the documentation is going to be great for any of the libraries (I haven't looked at them all, though!), but that shouldn't be a huge burden. All you really need is a good book on how the XMPP protocol works and some sample code from the library, and it's fairly easy to move on from there.
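To show how little the component protocol itself needs, here is a rough sketch of the XEP-0114 handshake over a plain socket (in C#; the host, port, component name, and secret are placeholders, and real code would use a proper XML parser rather than string scanning):

    // Minimal XEP-0114 component handshake sketch over a raw TCP socket.
    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Security.Cryptography;
    using System.Text;

    class ComponentHandshake
    {
        static void Main()
        {
            using (var client = new TcpClient("xmpp.example.com", 5275))   // placeholder host/port
            using (var stream = client.GetStream())
            {
                var writer = new StreamWriter(stream, new UTF8Encoding(false)) { AutoFlush = true };
                writer.Write("<stream:stream xmlns='jabber:component:accept' " +
                             "xmlns:stream='http://etherx.jabber.org/streams' " +
                             "to='mycomponent.example.com'>");

                // Read the server's stream header and pull out the stream id (crude scan).
                var buffer = new byte[4096];
                int read = stream.Read(buffer, 0, buffer.Length);
                string header = Encoding.UTF8.GetString(buffer, 0, read);
                int idStart = header.IndexOf("id='") + 4;
                string streamId = header.Substring(idStart, header.IndexOf('\'', idStart) - idStart);

                // Handshake value = lowercase hex SHA-1 of (stream id + shared secret).
                byte[] digest = SHA1.Create().ComputeHash(Encoding.UTF8.GetBytes(streamId + "secret"));
                string hex = BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
                writer.Write("<handshake>" + hex + "</handshake>");

                // On success the server answers with an empty <handshake/> element,
                // after which the component can send and receive stanzas on this stream.
            }
        }
    }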
For an easy-to-use testing server I like openfire. Good instructions, easy to hook in components, and a good web interface for administration. Debugging is more of a "tail -f" on the logfiles, slightly java-ish.
I've used XCP professionally, but that's really for commercial use. It works well but if that's not your target deployment it's not worth the effort. I'm not sure if you can buy it separately any more.
I tried using ejabberd and I gave up quickly. I found the documentation for setup and administration awful. The config files are not self-describing and there's no good walk-through on the ejabberd site. It may even be able to fry my eggs for breakfast in the morning, but I couldn't get past the install in the time I'd allotted to it.
For Openfire, there is something called Whack, which is a Java library for creating server components (XEP-0114).
Since the communication is over sockets, I presume the same code should work for any well designed XMPP server (such as ejabberd). However, I have only tested it with Openfire and it works quite well.