How to optimize the game's sound effects - unity3d

As I added various sound effects to the game, the number of audio assets naturally grew. The more audio I add, the more I worry about optimizing the game, so I would like to ask how to deal with this problem.
I tried to reduce the number of sound effects, but each one is different, so I couldn't easily cut any of them. If I knew how to optimize them, my game would run more smoothly.

Related

Particles or Animation

Thank you so much for all the fast responses, I truly appreciate it. I have another question I've been wondering about: I am working on a mobile game and I want to keep the size as small as possible. So I wanted to know, does using Unity particle effects impact performance and game size? For example, if I wanted to make a visual effect of a GameObject breaking apart when destroyed, should I use the Unity particle system to do so, or should I make an animation of that game object breaking and use that instead?
To summarize: should I be making animations for things such as rain, confetti, and fire effects instead of making them with the particle system? Is there any drawback to using one over the other?
As TEEBQNE said, it is your choice between animations and the particle system, depending on what you need. If your team is small or you are a single developer, it is best to opt for the particle system, for the flexibility of changes and quick execution.
Optimising the particle system is not hard; just make sure you don't overdo the numbers and avoid some common mistakes. There is a list of best practices for optimization in mobile games found here.
You can also find some good-looking premade particles in the Unity Asset Store, which are already well optimized.
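If it helps, here is a minimal Unity C# sketch of what "not overdoing the numbers" can look like in practice: cap the particle count and switch off the more expensive modules. The component name and the specific values are just placeholders.

using UnityEngine;

public class CheapBurstEffect : MonoBehaviour
{
    void Awake()
    {
        ParticleSystem ps = GetComponent<ParticleSystem>();

        // Keep a hard cap so a burst can never explode into thousands of particles.
        ParticleSystem.MainModule main = ps.main;
        main.maxParticles = 50;

        // Leave expensive per-particle features off unless the effect really needs them.
        ParticleSystem.CollisionModule collision = ps.collision;
        collision.enabled = false;

        ParticleSystem.LightsModule lights = ps.lights;
        lights.enabled = false;
    }
}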

Unity best practices for unloading unused areas of a map

I'm wondering about the best way of reducing my CPU and memory overhead for areas of the game world that do not need to be updated every frame.
I have just started to consider this issue as I'm currently implementing a shadow detection system using Raycasts.
My problem is this:
I can have about 100 lights in my level that on every frame send a Raycast to nearby characters to determine if these characters are in shadows.
My game is a low-poly PC game, and I understand the overhead from raycasting isn't that drastic, so it's not a major concern. But I'm still not sure of the best approach to optimising this.
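For context, each light's per-frame check looks roughly like this (a simplified sketch rather than my exact code; the field names are illustrative):

using UnityEngine;

public class ShadowProbe : MonoBehaviour
{
    public Transform character;      // the character this light is testing
    public LayerMask occluderMask;   // geometry that can block the light

    void Update()
    {
        Vector3 toCharacter = character.position - transform.position;

        // If something solid sits between the light and the character,
        // the character is considered to be in shadow for this light.
        bool inShadow = Physics.Raycast(transform.position, toCharacter.normalized,
                                        toCharacter.magnitude, occluderMask);

        // Visualise the result in the Scene view (red = blocked / in shadow).
        Debug.DrawLine(transform.position, character.position, inShadow ? Color.red : Color.green);
    }
}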
I have been thinking of a few solutions, but am unsure if there is a "standard", per se.
1. Exit the update loop if the player is too far away
void Update() {
    // playerPos and someRadius are fields set elsewhere on this light
    if (Vector3.Distance(playerPos, transform.position) > someRadius) {
        return; // skip this light's shadow raycast while the player is far away
    }
    // ...shadow raycast happens here...
}
This is the most glaringly obvious solution, with even more obvious concerns.
This update loop will still hog CPU cycles, performing these two calculations every frame, for every light.
2. Disable the Light GameObjects when the player is too far away
This method is more efficient in terms of CPU overhead, as the per-frame work is eliminated. However, I'm still holding on to unnecessary memory.
In order to make this solution more scalable I would have to design some kind of "enabler" that keeps track of game objects that should be enabled/disabled based on the player's position (rough sketch below).
But at this point I know I'm reinventing the wheel, and feel very sure that there is an industry standard for this.
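The rough shape of the enabler I have in mind is something like this (just a sketch with made-up names; a single manager that re-checks distances on a slow timer and toggles the lights):

using System.Collections.Generic;
using UnityEngine;

public class ProximityEnabler : MonoBehaviour
{
    public Transform player;
    public List<GameObject> managedLights;
    public float activationRadius = 30f;
    public float checkInterval = 0.5f;   // seconds between checks, instead of every frame

    float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer < checkInterval) return;
        timer = 0f;

        float sqrRadius = activationRadius * activationRadius;
        foreach (GameObject obj in managedLights)
        {
            // sqrMagnitude avoids the square root that Vector3.Distance performs
            float sqrDist = (obj.transform.position - player.position).sqrMagnitude;
            obj.SetActive(sqrDist <= sqrRadius);
        }
    }
}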
Is there an alternative to enabling/disabling?
I see a lot of game developers talking about physically unloading areas of their game from memory, and writing those areas to disk, when the player is not nearby.
I wonder whether this is achieved simply by destroying and re-instantiating the objects.
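My guess at what that looks like in Unity is additive scene loading/unloading, along these lines (purely speculative on my part; the scene name is made up):

using UnityEngine;
using UnityEngine.SceneManagement;

public class AreaStreamer : MonoBehaviour
{
    public Transform player;
    public Vector3 areaCenter;
    public float loadRadius = 100f;
    public string areaSceneName = "Forest_Section_02"; // placeholder scene name

    bool loaded;

    void Update()
    {
        bool shouldBeLoaded =
            (player.position - areaCenter).sqrMagnitude < loadRadius * loadRadius;

        if (shouldBeLoaded && !loaded)
        {
            // Stream the area in alongside the currently loaded scenes.
            SceneManager.LoadSceneAsync(areaSceneName, LoadSceneMode.Additive);
            loaded = true;
        }
        else if (!shouldBeLoaded && loaded)
        {
            // Release the area's objects from memory when the player has moved away.
            SceneManager.UnloadSceneAsync(areaSceneName);
            loaded = false;
        }
    }
}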
Question
Is Unity opinionated about this?
The page here lists an example similar to my first solution, but they are talking about hundreds of thousands of updates per frame there.
Maybe I don't need to worry as much as I think
Thanks!
If you want to avoid rendering objects that are not visible to your camera, occlusion culling is exactly what you need. See this link:
https://docs.unity3d.com/Manual/OcclusionCulling.html
If you want to change how an object looks based on the distance between the camera and the object, you should use LOD (Level of Detail). Here is the documentation:
https://docs.unity3d.com/Manual/class-LODGroup.html
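If you prefer to set LODs up from code instead of the Inspector, a LODGroup can also be built at runtime. A minimal sketch, assuming the object has two child renderers (a full-detail one and a simplified one):

using UnityEngine;

public class SimpleLodSetup : MonoBehaviour
{
    public Renderer highDetail;  // assumed child renderer with the full mesh
    public Renderer lowDetail;   // assumed child renderer with a simplified mesh

    void Start()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();

        // Each LOD entry lists the renderers to show and the screen-height
        // fraction below which the group switches to the next (lower) LOD.
        LOD[] lods = new LOD[2];
        lods[0] = new LOD(0.5f, new Renderer[] { highDetail }); // used while the object is large on screen
        lods[1] = new LOD(0.1f, new Renderer[] { lowDetail });  // used when it shrinks; culled below 10%

        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}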

Is there an easy way to do Per-Pixel collision detection on iPhone/UIKit?

I'm quickly prototyping an iPad game and have been using frame-based collision detection. It really needs per-pixel collision detection. Is there an easy way to implement this, or any guides I could look at for hacking together my own? Google only brings up people in a similar predicament, which does not bode well.
Ok. We had a game where we needed this.
One solution that worked was using glReadPixels. But on the 3G, past a certain point, as we added more objects to the game, it became a bottleneck. The 3GS, iPad, and newer iPhone/iPod devices should perform a lot better. Do read up on it first, as glReadPixels is a very costly, blocking call. But experimenting won't hurt.
Later we chose to use our own collision maps, with curves and lines, to do terrain collision. Similar to Box2D.
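If glReadPixels ends up too slow for you, another common way to get per-pixel tests is to keep a precomputed alpha mask per sprite and check overlapping pixels on the CPU. A rough, platform-agnostic sketch (written in C# like the other snippets in this collection; the mask layout and names are assumptions, not any iPhone API):

// Rough sketch of CPU-side per-pixel collision using precomputed alpha masks.
// A mask is a bool[width, height] marking the opaque pixels of a sprite.
public static class PixelCollision
{
    public static bool Overlaps(bool[,] maskA, int ax, int ay,
                                bool[,] maskB, int bx, int by)
    {
        // Intersection of the two sprites' bounding rectangles (ax/ay, bx/by are sprite origins).
        int left   = System.Math.Max(ax, bx);
        int top    = System.Math.Max(ay, by);
        int right  = System.Math.Min(ax + maskA.GetLength(0), bx + maskB.GetLength(0));
        int bottom = System.Math.Min(ay + maskA.GetLength(1), by + maskB.GetLength(1));

        for (int x = left; x < right; x++)
        {
            for (int y = top; y < bottom; y++)
            {
                // A collision needs an opaque pixel from both sprites at the same spot.
                if (maskA[x - ax, y - ay] && maskB[x - bx, y - by])
                    return true;
            }
        }
        return false;
    }
}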

Ball rolling sound effect

I am working on a Labyrinth-style app for iPhone using Chipmunk and OpenAL. I have everything working except the ball-rolling sound. What I have tried is playing a small sound for each update of the ball's position, so that the overall effect sounds like the ball is rolling. Based on advice on this forum, I tried using the velocity of the ball to adjust the pitch of the sound. I have the following problems:
I can't hear the sound at all when I play it from a Chipmunk callback. I can hear it elsewhere.
Even if I got this working somehow, the sound I play has to be very, very short, as the ball doesn't take long to roll. There has to be an alternative way.
Can anybody please help? I would even pay for a simple application that does this, if the sound is included.
I recommend cheating... record (or find somewhere) some longish looping sounds of the ball rolling at different speeds. Have one of them playing, based on the speed of the ball. As the ball's speed changes, you can cross-fade from one sample to another. My guess is that that will sound more realistic than just varying the pitch of a single sample.
Of course, it may be enough just to have one longish looping sample and only vary the volume in proportion to the ball's speed. I'll have to go track down my labyrinth game and check. :)
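The cross-fade itself is only a couple of lines. Here is a sketch of the idea in Unity-style C# (the question uses OpenAL, but the gain math is the same; the names and max-speed value are made up): two looping sources, one slow and one fast, blended by the ball's speed.

using UnityEngine;

public class RollingSound : MonoBehaviour
{
    public AudioSource slowRollLoop;   // looping sample of the ball rolling slowly
    public AudioSource fastRollLoop;   // looping sample of the ball rolling quickly
    public float maxSpeed = 5f;        // speed at which the fast loop is fully dominant

    // Call this from wherever you read the ball's velocity each frame.
    public void UpdateRollingSound(float ballSpeed)
    {
        // 0 when the ball is still, 1 at (or above) maxSpeed.
        float t = Mathf.Clamp01(ballSpeed / maxSpeed);

        // Overall loudness scales with speed, so the sound dies out as the ball stops.
        float overall = t;

        // Cross-fade weight: mostly the slow loop at low speeds, the fast loop at high speeds.
        slowRollLoop.volume = overall * (1f - t);
        fastRollLoop.volume = overall * t;
    }
}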

Synthesizing a realistic bounce sound for maze tilt marble game

In considering the design of marble-in-maze games where you tilt the table to get the ball to the end of the maze without going down one of the holes, I wonder whether anyone here has considered the modelling of the sound of the ball hitting the walls...
The ball doesn't always make the same sound.
This other question covers the rolling sound:
Sound of a rolling ball
But I am more interested in the bouncing sound - I am often struck by how unrealistic it is in most people's version of the game.
What are the factors to consider to work out how to produce a realistic sound?
How must the sample or raw data then be processed or generated?
There are some good links in the Sound Modeling section of this page from a course at Carnegie Mellon: http://www-2.cs.cmu.edu/~djames/pbmis/index.html. The instructor, Doug James, is now at Cornell and does similar research there (http://www.cs.cornell.edu/projects/Sound/).
I've never tried to implement any of these methods, but I suspect that they're overkill and/or too slow for a small game. However, you might be able to generate several samples offline and choose an appropriate one at runtime.
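As a concrete example of "choose an appropriate one at runtime": bank the pre-generated samples by impact strength and vary the pitch/volume slightly per bounce. A Unity-style C# sketch (the banks, threshold, and scaling are placeholder choices):

using UnityEngine;

public class BounceSound : MonoBehaviour
{
    public AudioSource source;
    public AudioClip[] softBounces;   // pre-generated samples for light wall taps
    public AudioClip[] hardBounces;   // pre-generated samples for strong impacts
    public float hardThreshold = 2f;  // impact speed separating the two banks

    void OnCollisionEnter(Collision collision)
    {
        float impactSpeed = collision.relativeVelocity.magnitude;

        // Choose a bank by impact strength, then a random clip within it,
        // and nudge pitch/volume so repeated bounces don't sound identical.
        AudioClip[] bank = impactSpeed > hardThreshold ? hardBounces : softBounces;
        AudioClip clip = bank[Random.Range(0, bank.Length)];

        source.pitch = Random.Range(0.95f, 1.05f);
        source.PlayOneShot(clip, Mathf.Clamp01(impactSpeed / (hardThreshold * 2f)));
    }
}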
Hope that helps.