Right now in my Unity project I am drawing OpenGL lines, similar to the way DrawRay() works in Unity. I have been trying to make the lines stop redrawing themselves once they have been rendered/drawn, so that I can save on performance.
Is there a way to draw an OpenGL line only once in the OnPostRender() function, but have it stay drawn in the Unity scene as if it were a static image?
You can't do this
If you want things to persist, then they need to be drawn. You can't have your cake (visible polygons) and eat it too (not draw them).
The only way to save on performance is to batch things together into larger blocks of drawable polygons and draw them all in one go.
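For example (a rough sketch of one way to batch in Unity, not something from the original question or answer): instead of issuing GL calls in OnPostRender every frame, you can pack all the segments into a single Mesh using MeshTopology.Lines, so the whole set becomes one draw call. All class and field names below are made up for illustration, and the MeshRenderer still needs a simple unlit material assigned.

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative sketch: pack many line segments into one mesh so the whole
    // set is drawn in a single draw call instead of per-line GL calls.
    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class BatchedLines : MonoBehaviour
    {
        private readonly List<Vector3> vertices = new List<Vector3>();
        private readonly List<int> indices = new List<int>();
        private Mesh mesh;

        void Awake()
        {
            mesh = new Mesh();
            GetComponent<MeshFilter>().mesh = mesh;
        }

        // Queue up a segment; call Rebuild() once after adding all of them.
        public void AddLine(Vector3 start, Vector3 end)
        {
            indices.Add(vertices.Count);
            vertices.Add(start);
            indices.Add(vertices.Count);
            vertices.Add(end);
        }

        public void Rebuild()
        {
            mesh.Clear();
            mesh.SetVertices(vertices);
            mesh.SetIndices(indices.ToArray(), MeshTopology.Lines, 0);
        }
    }

The lines are still rendered every frame, but as one static mesh rather than as many immediate-mode calls, which is usually far cheaper.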
Related
I'm trying to rip the Among Us textures into Unity, but I'm having trouble with the spritesheet for crewmate animations like walking, venting, idle, etc.
Here's the spritesheet I'm talking about: https://github.com/Overload02/among-us-assets/blob/main/Players/Player-sharedassets0.assets-55.png
You can see it is not consistent at all, making a uniform square or rectangle cut impossible.
What I tried originally is just creating all the boxes manually, but this looks terrible.
How does one handle this elegantly?
Maybe you could use each sprite as a separate image instead of a spritesheet. Then you could edit the edges of each one separately.
In my 2D app, I create many GameObjects with UI Image components (I work with the new UI). These are simple colored circles placed in different positions on the screen. But when I play long enough, the number of these GameObjects becomes too high and it impacts performance.
I thought that since these objects are just textures (I do not need them to be animated or anything, just simple textures that stay in the same place on the screen all the time), maybe there is a way to merge them into one GameObject with an Image component that would hold all these circles in one texture?
I tried to capture the camera view with a RenderTexture and the ReadPixels method, but it captures the whole screen and all the objects in it, while I need to merge only these circles, and only within a specific rectangle on the screen.
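For what it's worth, a hypothetical sketch of that idea (none of these names come from the question, and it assumes the circles are visible to a camera, e.g. sprites or a Screen Space - Camera canvas placed on their own layer): render only that layer into a RenderTexture, bake it into a Texture2D, show it on a single RawImage, and destroy the original circle objects.

    using UnityEngine;
    using UnityEngine.UI;

    // Hypothetical sketch: bake many circle objects into one texture.
    public class CircleBaker : MonoBehaviour
    {
        public Camera bakeCamera;       // culling mask limited to the circles' layer
        public RawImage targetImage;    // single UI element that shows the baked result
        public GameObject circlesRoot;  // parent of all the circle GameObjects

        public void Bake(int width, int height)
        {
            var rt = new RenderTexture(width, height, 0);
            bakeCamera.targetTexture = rt;
            bakeCamera.Render();                              // draws only the circles

            // Copy the RenderTexture into a plain Texture2D so the RT can be released.
            RenderTexture.active = rt;
            var baked = new Texture2D(width, height, TextureFormat.RGBA32, false);
            baked.ReadPixels(new Rect(0, 0, width, height), 0, 0);
            baked.Apply();
            RenderTexture.active = null;

            bakeCamera.targetTexture = null;
            rt.Release();

            targetImage.texture = baked;                      // one object instead of many
            Destroy(circlesRoot);                             // the originals are no longer needed
        }
    }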
I am making an app which draws continuous lines like a snake, using Unity and SpriteKit (Obj-C) in Xcode (I'm making 2 versions of the same app, one in each):
http://i.stack.imgur.com/qA1zk.png
http://i.stack.imgur.com/484kj.png
http:// i.stack.imgur.com/QTEkC.png (Apologies for the image posts. I can't post an image/more than 2 links)
If you’ve ever heard of a game called Curve Fever, what I’m doing here is quite similar to it. I’m controlling the direction of the end of the line with the arrowkeys, whilst the end of the line automatically moves forward every frame creating an image like the one above.
However, from the 3 screenshots above, it is quite obvious that my program isn't very efficient - every frame, I add a circle sprite to the SKScene in the place where my moving sprite is, which is why, after a while, there are over 1000 nodes on the screen, and the energy impact/memory/CPU usage is very high… Not ideal.
So now I’m looking for better ways of drawing the line on the screen without drawing thousands of nodes.
A while ago, a friend talked to me about how he made a similar app in GameMaker (which I have no knowledge of how to use). When I asked him how he rendered the line, he said he created something called a “surface”, and when anything moved on that surface, the old position of the sprite would still stay there - which would create lines if a circle moved across the surface.
He was rather vague about this, and I tried to do some research later, but with no success. I couldn’t find anything relevant about continuous lines, surfaces and GameMaker, Xcode or Unity.
If someone could come up with a solution like the one my friend was talking about, for Xcode/Unity - preferably both (or if someone could tell me what he was talking about for GameMaker), then I'd be grateful, as this would optimise my game and reduce the severe lag I get after around 30 seconds.
Also, I’d be grateful if anyone could suggest alternative solutions to this, too.
I'm using GameMaker but I have no knowledge of Xcode or Unity. I can't help you directly but I can explain GameMaker surfaces.
Surfaces in GM are objects you can draw onto instead of drawing directly on the screen; later you can draw the surface itself to the screen. The main advantage is that you can store a surface and, for example, draw it again in another tick (while the screen is redrawn every tick), or change it over time.
Surfaces are basically just bitmaps that you draw onto. That means it wouldn't be hard to do the same in any other environment. Most other libraries/APIs call it a canvas.
In your example you would draw one circle to the bitmap in each tick and then draw the whole bitmap to the screen.
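Translated to Unity, a minimal sketch of that surface idea could look like the following (all names here are assumptions, and it assumes the RenderTexture matches the screen resolution and is displayed on a RawImage or quad): keep a RenderTexture that is never cleared and stamp a small circle texture into it at the moving object's screen position each frame.

    using UnityEngine;

    // Sketch of a GameMaker-style "surface" in Unity: a RenderTexture that is
    // never cleared, with a circle stamped into it every frame.
    public class TrailSurface : MonoBehaviour
    {
        public RenderTexture surface;   // created once, never cleared; shown on a RawImage/quad
        public Texture2D circleBrush;   // small circle image used as the pen
        public Transform snakeHead;     // object whose position leaves the trail
        public float brushSize = 16f;   // stamp size in pixels (assumed)

        void LateUpdate()
        {
            // Assumes the surface has the same resolution as the screen.
            Vector3 screenPos = Camera.main.WorldToScreenPoint(snakeHead.position);

            RenderTexture.active = surface;
            GL.PushMatrix();
            GL.LoadPixelMatrix(0, surface.width, surface.height, 0);   // pixel coords, top-left origin
            Graphics.DrawTexture(
                new Rect(screenPos.x - brushSize / 2f,
                         (surface.height - screenPos.y) - brushSize / 2f,
                         brushSize, brushSize),
                circleBrush);
            GL.PopMatrix();
            RenderTexture.active = null;
        }
    }

Only the head sprite and the surface exist as objects, so the node count stays constant no matter how long the line gets.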
A related topic is destructible terrain as it is discussed here: https://gamedev.stackexchange.com/questions/6721/implementing-a-2d-destructible-landscape-like-worms
I'm trying to build a simple game (for learning) where you have to clean objects.
That means, for example, you have a monitor with dust on it so you can't see the image. You then use the mouse or a finger to move around the monitor to make the image visible.
So what I've got so far is 2 images, the first being the monitor image and the second being the dust in front of it. I've managed to detect when the user has swiped the whole image away. But what I would like is that when the user taps the screen somewhere, the front image disappears only within a radius r of that point.
I'm guessing I should be using a layer mask, but I have no idea how to do it.
I basically managed to write my own shader that only shows the layer with the correct image, but I have no idea how to make it so that you actually get a cool effect from it.
Any ideas will be appreciated!
I would use a render texture for this. It allows you to use image files as a brush, and it's quite fast. The idea is to use the render texture as a mask: you start with a black texture, and whenever the player touches the screen you render your brush image into it. But this feature (render textures) is available only in Unity Pro.
You can still use a texture as a mask if you don't want to buy Unity Pro, but it will definitely be slower than render textures; how much slower I don't know. The options are:
Create an ordinary texture (Texture2D) and use SetPixel or SetPixels to update the mask when the player touches the screen (see the sketch after this list). If you want to go this way, I would recommend using a texture much smaller than the screen size (4x or 8x smaller, depending on the quality you want), otherwise it will be damn slow.
Create an ordinary texture (Texture2D) and use ReadPixels. This works pretty much the same way as render textures, but it's slower. This technique is explained here.
Use a render texture (requires Unity Pro).
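As a rough illustration of the first option above (the SetPixels mask, no Pro license required), something along these lines could work; the field names, the brush radius and the mapping to mask space are all assumptions, and the mask texture must be created in code or imported with Read/Write enabled:

    using UnityEngine;

    // Sketch of the SetPixels approach: a small Texture2D acts as the mask.
    // It starts black, and a white disc is punched in wherever the player touches;
    // the dust shader then samples this texture to decide what to hide.
    public class DustMask : MonoBehaviour
    {
        public Texture2D mask;     // low-resolution mask, e.g. 1/8 of the screen size
        public int radius = 6;     // brush radius in mask pixels

        void Update()
        {
            if (!Input.GetMouseButton(0)) return;

            // Convert the touch position from screen space to mask space.
            int cx = (int)(Input.mousePosition.x / Screen.width  * mask.width);
            int cy = (int)(Input.mousePosition.y / Screen.height * mask.height);

            for (int y = -radius; y <= radius; y++)
            {
                for (int x = -radius; x <= radius; x++)
                {
                    if (x * x + y * y > radius * radius) continue;   // keep the brush round
                    int px = Mathf.Clamp(cx + x, 0, mask.width  - 1);
                    int py = Mathf.Clamp(cy + y, 0, mask.height - 1);
                    mask.SetPixel(px, py, Color.white);              // white = dust removed
                }
            }
            mask.Apply();   // upload the changes so the shader sees them
        }
    }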
Ok. I've made an example: https://drive.google.com/folderview?id=0B60e_iFEZd1-RlB4LVN6NE84clU&usp=sharing
There are:
Image that player needs to clear from dust: Image.jpg
Image of dust that we draw on top of it: Dust.png
Image that we use as an erasing brush: Brush.png
Our render texture that we use as mask for dust: Mask.renderTexture
Shader that we use to update the mask: MaskConstruction.shader
Shader that we use to render dust (it combines Dust.png and render texture): Masked.shader
Script that brings everything to life: MaskCamera.cs
Main scene: Main.unity
I think this might be helpful. The main objective of this code sample is to create a dust/fog-removing effect using Unity. Using a texture masking shader, a mask construction shader, a render texture, a camera masking script and a masking camera, you can create your own effect. Read on to know how and download the source code. It's free!
http://studio.openxcell.com/remove-dustfog-object-unity-swipe.html
I've been tasked with creating a sphere that can be rotated by touch (or animated) along one axis, like a regular globe. I should also be able to draw animated lines on this sphere (e.g. draw a line between Sydney and New York). I usually do all my animations in 2D, typically using Core Animation, as I've never really had a need to do anything else. I have a feeling that this sort of problem, though, requires me to jump into OpenGL.
My question is whether it would be possible to achieve this using Core Animation (time is of the essence), or if I do need to quickly learn OpenGL. If so, is this a fairly simple problem to solve? I'm a pretty good programmer, but I have no OpenGL experience. Would a capable programmer be able to do this in, say, 2 weeks?
As a further question, supposing I do use OpenGL, if I then need to do other things in the project (e.g. show different screens, or show screens over the top of the sphere), am I able to use UIKit or does the entire project need to be in OpenGL?
Core Animation is for animating views and is basically a 2D animation layer, so it's a no-go for the 3D rotating sphere.
Drawing a textured sphere is rather easy; see this sample.
Mixing GL and regular UIViews is not a problem. You can overlay regular controls over the GL view.