What is the maximum polycount Unity can handle - unity3d

What is the maximum polycount Unity can handle? I am building a furniture-based application in Unity, and my client wants highly detailed 3D models. What is the maximum Unity can handle, and are there any best practices to keep the quality of assets without affecting the performance of the application? I am building the application for the following platforms: OSX, Windows, Android and iOS.

There isn't a hard technical limit, other than 65k vertices per single mesh (if using the default 16-bit indexing), but I don't think there's a limit on mesh count, and if you are reusing meshes, you can draw them using Graphics.DrawMeshInstanced, or, if you fancy some buffer work, Graphics.DrawMeshInstancedIndirect.
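If you go the instancing route, a minimal Unity C# sketch might look like this (the ChairInstancer name, the mesh/material fields and the instance count are illustrative placeholders, and the material needs GPU instancing enabled):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ChairInstancer : MonoBehaviour
{
    public Mesh chairMesh;          // hypothetical shared furniture mesh
    public Material chairMaterial;  // must have "Enable GPU Instancing" ticked

    private readonly List<Matrix4x4> matrices = new List<Matrix4x4>();

    void Start()
    {
        // One transform per copy; DrawMeshInstanced accepts up to 1023 instances per call.
        for (int i = 0; i < 500; i++)
            matrices.Add(Matrix4x4.TRS(new Vector3(i % 25, 0, i / 25),
                                       Quaternion.identity, Vector3.one));
    }

    void Update()
    {
        // One draw call per batch instead of one per object.
        Graphics.DrawMeshInstanced(chairMesh, 0, chairMaterial, matrices);
    }
}
```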
With a relatively modern graphics card, enough RAM, etc., you can easily work in the million range. You can push it further (tens of millions? hundreds of millions?) but at some point performance will inevitably go down.
So basically the answer is: any number that will fit in your RAM, with a secondary constraint depending on your hardware if you want it in realtime (Unity will render any amount of polygons as long as it doesn't run out of memory, but a ludicrous number of polys might take multiple seconds to render).

If you want detailed objects and high performance, you can use level of detail (LOD); you can find more information here: https://docs.unity3d.com/560/Documentation/Manual/LevelOfDetail.html
For Android devices, it's advisable to keep the poly count of the scene under 90k; on PC, this number can increase to 1 or 2 million.
But there are techniques like occlusion culling that increase the game's performance.
See this link about it: https://docs.unity3d.com/Manual/OcclusionCulling.html
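For reference, a minimal sketch of setting up LODs from script (the renderer fields, class name and transition heights below are illustrative; in practice you would usually configure the LOD Group component in the Inspector instead):

```csharp
using UnityEngine;

public class FurnitureLodSetup : MonoBehaviour
{
    public Renderer highPolyRenderer; // hypothetical: detailed version of the model
    public Renderer lowPolyRenderer;  // hypothetical: decimated version

    void Start()
    {
        var lodGroup = gameObject.AddComponent<LODGroup>();

        // Each LOD lists its renderers and the relative screen height at which it switches out.
        var lods = new LOD[]
        {
            new LOD(0.5f,  new[] { highPolyRenderer }), // used while the object covers > 50% of screen height
            new LOD(0.05f, new[] { lowPolyRenderer }),  // used down to 5%, after which the object is culled
        };

        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```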

Related

How does the OS combine discrete and integrated graphics cards?

I have a system that has both a discrete and an integrated graphics card (one Nvidia, the other Intel).
To my surprise I found that I could hook up a monitor to each one individually.
Moreover, I could play a game in a window on the monitor attached to the discrete card, and drag it to the other monitor attached to the integrated card (albeit with a drop in performance).
I also noticed that in the first case, only the discrete card was busy, but once I moved the window, both cards were busy.
I realize this is probably not an optimal configuration, but it got me curious as to how the OS handles this situation. There must be some communication going on between the cards for this to work, such as one card doing the actual computation and the other outputting the result.
Does anyone have any insight into this?
Typically, when you have an Intel integrated card and a discrete card, much like you have, the integrated one is used by default. When the machine's rendering load crosses a certain threshold, the discrete card will kick in and take over.

Change Farseer Physics Engine settings to improve performances

I have successfully used Farseer to develop one of my games using XNA. The game runs like a charm on Windows and WP7. I'm currently working on porting my game to iOS using MonoTouch and MonoGame. I've successfully compiled and used Farseer over the first version of my game ported to MonoTouch and MonoGame. All works fine except for performance. The game runs much slower on the iPhone device. I did some code investigation and it seems that the major bottleneck is Farseer. It becomes really slow when it has to manage more than 5-6 bodies with a single fixture attached (circle).
Reading the documentation I noticed that to tune Farseer I could change values from the Setting static class:
Performance
In order to get the most out of the engine, you should try and follow the guidelines below:
Enable sleeping
Sleeping enables you to have a large number of bodies in the world. It can also increase the stability of the engine, since small movements in a stack of bodies don't spread through the stack. A sleeping body has little overhead, so enabling sleeping is recommended.
Disable CCD
Continuous Collision Detection (CCD) prevents tunneling, but at the cost of performance. If you don't have problems with tunneling, you should disable CCD altogether.
Minimize the number of position and velocity iterations
A high number of iterations makes the engine more stable at the cost of performance. You should tweak the values to fit your game.
The bad news is that if I change any value in that class, nothing seems to happen. I tried changing values as follows:
EnableDiagnostics = false
VelocityIterations = 6
PositionIterations = 2
ContinuousPhysics = false
I also tried lower values like VelocityIterations = 1, but nothing seems to change.
Has anyone already changed Settings class values to improve performance?
Ok,
I have managed this. The major bottleneck was not related to Farseer. Once I solved all the performance issues related to my "bad code", tuning Farseer as described above worked very well to gain 5-10% in performance.
My game is written in XNA and was successfully ported to iOS and Android with MonoTouch and Mono for Android.
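For anyone attempting the same tuning, here is a minimal sketch of the adjustments discussed above, assuming Farseer 3.x's static Settings class (the exact field names, and whether they are runtime fields or compile-time constants, differ between Farseer builds):

```csharp
using FarseerPhysics;

public static class PhysicsTuning
{
    // Call this before creating the World; in some Farseer builds several of
    // these values are compile-time constants and cannot be changed at runtime.
    public static void Apply()
    {
        Settings.VelocityIterations = 6;     // fewer solver iterations: faster, slightly less stable stacks
        Settings.PositionIterations = 2;
        Settings.ContinuousPhysics  = false; // disable CCD if tunneling isn't a problem
        Settings.AllowSleep         = true;  // sleeping bodies cost (almost) nothing
        Settings.EnableDiagnostics  = false; // skip per-step diagnostic bookkeeping
    }
}
```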

iPhone and Vertex Buffer Objects

I've just started playing around with OpenGL ES on the iPhone over the past couple of weeks, and I'm looking at refactoring some of my code to use Vertex Buffer Objects (VBOs). Before I do, though, I would like to make sure it'll be worth it. The problem is that, as far as I know, the only reason you create VBOs is to shift a chunk of data onto the graphics card so that it doesn't need to be retrieved from system RAM when it's used. The iPhone, however, does not have any dedicated video RAM that I'm aware of, so I'm struggling to see why I would benefit at all from using VBOs. I have seen talk around the internet with conflicting opinions, and Apple certainly wants devs to use them, so there's probably still a reason to use them, but I just wanted to see if anyone on SO had an opinion to add.
I saw no performance improvement on an iPhone 3G. I moved a bunch of stuff to VBOs, but eventually backed it out as it made it more difficult for me to pursue other performance gains. It's not the quick 25% performance increase that I was hoping for.
I've read somewhere that it can make a difference on the newer hardware (3GS), but I don't have references to back that up.
It depends. (sorry).
Rob didn't see an improvement for his setup, but here is an interesting post that did see a large improvement.
The main reason VBOs exist is the presence of static data in 3D models. The first bottleneck you encounter is the slowness of copying data to video memory every frame (via the glBegin/glEnd block, which isn't available on ES, or glVertexPointer, glBufferData and friends).
Let's imagine the old "flying toasters" screensaver. All the toasters are static (only their position changes) - why waste resources copying them every frame from CPU memory to the GPU? Copy them once into buffers and draw them with a single command. And, depending on how you do animations, even the animated toasters can be described in a static fashion.
My first 2D game I started without VBOs. When I changed to VBOs, no difference (like Rob). But when I refactored to use more static buffers, FPS went from 20 to 40. Since my goal was to reach 30, I was satisfied. I had some ideas to refactor even more, leaving everything static, but I don't have time now (the game is in review, the next one is coming).
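To make the "copy once, draw many times" idea concrete, here is a rough sketch in the OpenTK-style ES 1.1 bindings used under MonoTouch (treat the exact enum and method names as an assumption; they vary between binding versions):

```csharp
using System;
using OpenTK.Graphics.ES11; // assumption: MonoTouch/OpenTK ES 1.1 bindings

public class StaticSprite
{
    int vboId;

    public void Upload(float[] vertices)
    {
        // Copy the vertex data into a GPU-managed buffer once, at load time...
        GL.GenBuffers(1, out vboId);
        GL.BindBuffer(All.ArrayBuffer, vboId);
        GL.BufferData(All.ArrayBuffer,
                      (IntPtr)(vertices.Length * sizeof(float)),
                      vertices, All.StaticDraw);
    }

    public void Draw(int vertexCount)
    {
        // ...then every frame just bind and draw, with no per-frame copy.
        GL.BindBuffer(All.ArrayBuffer, vboId);
        GL.EnableClientState(All.VertexArray);
        GL.VertexPointer(3, All.Float, 0, IntPtr.Zero);
        GL.DrawArrays(All.Triangles, 0, vertexCount);
    }
}
```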

High latency in an iPhone mmorpg

Right now I'm trying to make an MMORPG for the iPhone. I have it set up so that the iPhone requests the player positions several times a second. The way it does this is that the client sends a request, using an asynchronous NSURLConnection, to a PHP page that loads the positions from a MySQL database and returns them as JSON. However, it takes about 0.5 seconds from when the positions are requested to when they actually get loaded. This seems really high; are there any obvious things that could cause this?
This also makes the player movement on the client really choppy. Are there any algorithms or ways to reduce the choppiness of the player movement?
Start measuring how long the database query takes when you run it outside your iPhone.
Then measure how long it takes when you send the same HTTP request from something other than your iPhone (it's e.g. a 10-15 line C# program to figure this out).
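Something along these lines would do for that desktop-side measurement (the URL is a placeholder for your positions endpoint):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Net;

class LatencyProbe
{
    static void Main()
    {
        const string url = "http://example.com/positions.php"; // placeholder endpoint

        for (int i = 0; i < 10; i++)
        {
            var sw = Stopwatch.StartNew();
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.KeepAlive = true; // reuse the TCP connection between requests
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                reader.ReadToEnd();
            }
            sw.Stop();
            Console.WriteLine("Request {0}: {1} ms", i, sw.ElapsedMilliseconds);
        }
    }
}
```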
If none of the above show any sort of significant latency, the improvements need to be done on the iPhone side. Some things to look out for:
GPRS/3G has rather high latency
GPRS/3G has rather high bit error rates - meaning there's going to be quite a few dropped packets now and then which will cause tcp to retransmit and you'll experience even higher latency
HTTP has a lot of overhead.
JSON adds a lot of overhead.
Maybe you'll need to come up with a compact binary format for your messages and drop HTTP in favor of a custom protocol - maybe even switch to UDP.
The above points generally don't apply, but they do if you need to provide a smooth experience over high-latency, low-bandwidth, flaky connections.
At the very least, make sure you're not setting up a new TCP connection for every request. You need to use http keep-alive.
I don't have any specific info on player movement algorithms, but what is often used is some sort of movement prediction.
You know the direction the player is moving, you can derive the speed if it's not always constant - this means you can interpolate over time and guess his new position, adjust the on screen position while you're querying for the actual position, and adjust back to the actual position when you get the query response.
The trick is to always interpolate over time within certain boundaries. If your prediction was a bit off compared to what the query returned, don't immediately snap the position back to the real position. Do the interpolation between the current position and the desired position over a handful of frames.
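A minimal sketch of that "predict, then blend toward the server position" idea (the field names and the smoothing factor are made up for illustration):

```csharp
using System;

public class RemotePlayer
{
    // Last state received from the server (hypothetical fields).
    public float ServerX, ServerY;
    public float VelocityX, VelocityY;

    // Position actually shown on screen.
    public float DisplayX, DisplayY;

    const float BlendPerSecond = 5f; // how quickly the display corrects toward the server position

    public void OnServerUpdate(float x, float y, float vx, float vy)
    {
        ServerX = x; ServerY = y;
        VelocityX = vx; VelocityY = vy;
    }

    public void Update(float dt)
    {
        // Dead reckoning: keep moving along the last known velocity...
        ServerX += VelocityX * dt;
        ServerY += VelocityY * dt;

        // ...and ease the displayed position toward the predicted one instead of snapping,
        // so a late or slightly-off server update doesn't make the sprite jump.
        float t = Math.Min(1f, BlendPerSecond * dt);
        DisplayX += (ServerX - DisplayX) * t;
        DisplayY += (ServerY - DisplayY) * t;
    }
}
```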
On the server side you should be using some system that keeps running and keeps the database connection open all the time. Preferably it would also cache things instead of requesting them from the database every time.
Also, do not make a new HTTP request for every update. It would be best if you didn't need to use HTTP at all, as it really isn't suitable for realtime communication.
GPRS typically has a 600 ms ping time, 3G has 300 ms and HSPA has 100 ms. See which mode is being used. Note that some devices (I don't know about the iPhone) drop from HSPA to regular 3G for power-saving reasons whenever there is not enough traffic to justify the faster mode.
As for position, a rather common practice is to apply a linear prediction, i.e. make the character continue movement in current direction, at the current speed, even when no data from server is available yet.
Most importantly: benchmark/profile to see where the latencies are. Is it your server, the network connection or the application?
Loading the player positions that fast has downsides.
It hammers your server.
3G isn't really meant to support low-latency applications.
So I don't see an MMORPG working without some necessary shortcuts at this time, e.g. extrapolating paths based on velocity and position. Loading positions will not work as fast as you want, especially with a server based on PHP of all things.
Either way, when developing for a mobile platform you're going to have to make sacrifices in terms of features versus a fully-featured desktop implementation.
I might also reimplement some of the more critical parts, if not the whole server, in a faster language, e.g. C++.

How the dynamics of a sports simulation game works?

I would like to create a baseball simulation game.
Are these sports management games based on luck? A management game entirely based on luck is not fair, but it cannot be too predictable either. How does the logic behind these games work?
It's all about probability and statistics. You set the chance of something happening based on some attributes you assign, and then the random factor comes in during play to make things less predictable and more fun. Generally you get a load of statistics from some external source, encode them into your game's database, and write a system that compares random numbers to these statistics to generate results that approximate the real-life observations that the stats were based on.
Oversimplified example: say your game has Babe Ruth, who hits a home run 8.5% of the time, and some lesser guy who hits one 4% of the time. These are the attributes you test against. So for each pitch you simulate, pick a random number between 0 and 100%. If it's less than or equal to the attribute, the batter scores a home run; if it's greater than the attribute, they don't. After a few pitches you'll start to see Babe Ruth's quality show relative to the other guy, as he will tend to hit over twice as many home runs.
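In code, that test is just a comparison of a random roll against the batter's rate; a small self-contained sketch (the class name and the pitch count are illustrative):

```csharp
using System;

class AtBatSimulator
{
    static readonly Random Rng = new Random();

    // Returns true if this simulated pitch ends in a home run,
    // given the batter's home-run rate (e.g. 0.085 for 8.5%).
    static bool IsHomeRun(double homeRunRate)
    {
        return Rng.NextDouble() <= homeRunRate;
    }

    static void Main()
    {
        int ruth = 0, lesserGuy = 0;
        const int pitches = 10000;

        for (int i = 0; i < pitches; i++)
        {
            if (IsHomeRun(0.085)) ruth++;      // Babe Ruth: 8.5%
            if (IsHomeRun(0.040)) lesserGuy++; // lesser guy: 4%
        }

        // Over many simulated pitches the better hitter's rate shows through.
        Console.WriteLine("Ruth: {0}, other guy: {1}", ruth, lesserGuy);
    }
}
```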
In reality you'd have more than 1 attribute for this, depending on the kind of pitching for example. And the other player might get to choose which relief pitchers to use to try and exploit weaknesses in the batter's abilities. So the gameplay comes from the interplay between these various attributes, with you trying to maximise the chance that the attribute tests work in your favour.
PS. Apologies for any mistakes regarding baseball: I'm English so can't be expected to understand these things. ;)
As you have already figured out, the core component of such games is the match simulation engine. As Spence said, you want that simulation to "look right" rather than to "be right".
I worked on a rugby game simulation some time ago, and there's an approach that works quite well. Your match is a finite state machine: each game phase is a state, and it has an outcome which translates into a phase transition or changes in game state (score, replacements, ...).
Add in an event/listener system to handle things that are not strictly related to the structure of the game you're simulating and you have a good structure (every time something happens in your simulation, a foul for instance, fire an event; the listeners can be a commentary-generation system or an AI responsible for the teams' strategies).
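A stripped-down sketch of that state-machine-plus-events shape (the phase names, probabilities and event signature are invented for illustration and are not tied to any particular sport's rules):

```csharp
using System;

enum Phase { KickOff, OpenPlay, SetPiece, FullTime } // illustrative phases

class MatchSimulator
{
    // Listeners (commentary, team AI, stats collection) subscribe here.
    public event Action<string> MatchEvent = delegate { };

    public int HomeScore;
    readonly Random rng = new Random();
    Phase phase = Phase.KickOff;

    public void Run()
    {
        while (phase != Phase.FullTime)
            phase = Step(phase);
    }

    // Each step resolves the current phase and returns the next one.
    Phase Step(Phase current)
    {
        switch (current)
        {
            case Phase.KickOff:
                MatchEvent("Kick-off");
                return Phase.OpenPlay;

            case Phase.OpenPlay:
                if (rng.NextDouble() < 0.10) { HomeScore += 5; MatchEvent("Try scored"); }
                if (rng.NextDouble() < 0.05) { MatchEvent("Foul"); return Phase.SetPiece; }
                return rng.NextDouble() < 0.02 ? Phase.FullTime : Phase.OpenPlay;

            case Phase.SetPiece:
                MatchEvent("Set piece resolved");
                return Phase.OpenPlay;

            default:
                return Phase.FullTime;
        }
    }
}
```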
You can start with a rough simulation engine that handles things at a team level using an average of your players' stats and then move on to something more detailed that's simulating things at a player level. I think that kind of iterative approach suits a game simulation very well because you want it to look right, and as soon as an element looks right you can stop iterating on it and work on another part of your system.
Randomness is of course part of the game because, as you said, you don't want games to be too predictable. A very simple thing to do is to have virtual dice rolls against player and team statistics when they perform a particular action (throwing the ball, for instance).
Edit: I make the assumption that we're talking about management games like Hattrick, where you're managing a roster and simulating game results rather than 2D/3D graphical simulations.
Usually timing plus some randomness to make the game replayable. EDIT: To clarify, I mean in terms of when the pitch comes at you; if it were exact you could learn to play it perfectly, so you need a small amount of randomness around the exact time that you swing to keep the game challenging. AI has a big part in this if you do things like curve balls, add the ability to steal bases, etc.
Getting games "right" isn't a factor of design or maths so much as a feel. You will try something, play it, and see if it was fun. If it isn't try different algorithms or gameplay until you get it right.
A simulation is very much about modelling an imagined world: you create classes that represent all of its aspects. You need to model the players and specify the game rules and dynamics.
http://cplus.about.com/b/2008/05/31/nathans-zombie-simulator-in-c.htm
Look here for agent based model: http://www.montemagno.com/projects.html
One great thing about creating your own game is that you get to decide how the game logic is going to work. If you want the game to have a high degree of luck you can design that in. If you don't want the game to have a high degree of luck then you can design it out.
It's your game, you get to make up the rules.
Are you talking about a baseball game you play, or a game simulator? Baseball games can be arcade-like, fantasy-sports-like, or a blend.
I was at Dynamix when Front Page Sports Baseball was made. It was stats-based, meaning that you could play out games and seasons using the stats of the various players. That meant licensing Major League data. It used stats to influence outcomes.
There was a regular mode and a "fast-sim" mode that could breeze through the games faster.
I think Kylotan has the right strategy. Baseball has stats for everything. Simulate a game to the most detailed level you can manage. Combine player stats to determine a percentage chance for every outcome. Use randomness to decide the outcome.
For instance: the chance of a hit is based on batting average, the pitcher's ERA, etc. The opposing team's fielding percentage determines the chance that an out becomes an error.
Every stat you display to the 'manager' when selecting lineups should have some effect on gameplay - otherwise the manager is making decisions based on misleading information.
You ought to check out Franchise Ball; there is a browsable demo.
http://promo.franchiseball.com