Is there a better way than using a switch statement to access different properties of a variable in Swift?

I have this code that uses a for loop to iterate through every pixel of an image and then, based on a user choice, changes the value of the red, green, or blue channel.
I am using a switch statement to select the channel and then iterate through every pixel in a for loop, updating that pixel's appropriate color property. It feels very kludgy, but I can't think of a way to simplify it.
Is there another way to access the property, so I can eliminate the switch statement and just have one for loop that selects the right pixel color based on the user's choice?
func applyFilterTo(image: UIImage, forColor: String, withIntensity: Int) -> RGBAImage {
    var myRGBA = RGBAImage(image: image)!
    switch forColor {
    case "RED":
        for x in 0..<myRGBA.width {
            for y in 0..<myRGBA.height {
                let pixelIndex = y * myRGBA.width + x
                let pixel = myRGBA.pixels[pixelIndex]
                let newValue = Double(pixel.red) + (Double(withIntensity) / 100) * Double(pixel.red)
                myRGBA.pixels[pixelIndex].red = UInt8(max(0, min(255, newValue)))
            }
        }
    ...
}
The above repeats for the other options, but the only thing that changes is the property I choose (red, green, or blue). Is there a better way to do this?

A good way to do this is to replace the switch statement with multiple dispatch, such as the Visitor pattern. That said, depending on how many options end up in the switch, a plain switch statement may remain the easiest to read and understand.
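In Swift specifically, a writable key path gives you a lightweight form of this without a full Visitor. Here is a minimal sketch; it assumes Swift 4 or later and the RGBAImage/Pixel types from the question, with red, green, and blue as settable UInt8 properties:

func applyFilter(to image: UIImage, forColor: String, withIntensity intensity: Int) -> RGBAImage? {
    // Assumes RGBAImage's failable init, as in the question.
    guard var myRGBA = RGBAImage(image: image) else { return nil }

    // Map the user's choice onto a writable key path into Pixel.
    let channels: [String: WritableKeyPath<Pixel, UInt8>] = [
        "RED": \Pixel.red,
        "GREEN": \Pixel.green,
        "BLUE": \Pixel.blue
    ]
    guard let channel = channels[forColor] else { return myRGBA }

    // One loop handles every channel; only the key path varies.
    for i in myRGBA.pixels.indices {
        let old = Double(myRGBA.pixels[i][keyPath: channel])
        let newValue = old + (Double(intensity) / 100) * old
        myRGBA.pixels[i][keyPath: channel] = UInt8(max(0, min(255, newValue)))
    }
    return myRGBA
}

The switch disappears entirely, and supporting another channel is just one more dictionary entry.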

Related

my shader is ignoring my worldspace height

I'm VERY new to shaders, so bear with me. I have a mesh that I want to put a sand texture on below a world-space y position of, say, 10; otherwise it should be a grass texture. It seems to be ignoring anything I put in and only selecting the grass texture. Something IS happening, because my vert and tris count explodes with this function compared to just returning the same texture, but I see no visual change no matter what my sandStart value is.
this is in my frag function:
if (input.positionWS.y < _SandStart) {
    return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation;
} else {
    return tex2D(_SandTex, input.uv) * mainLight.shadowAttenuation;
}
Is there also a way I can easily debug some of the values?
Please note that the OP figured out that their specific problem wasn't caused by the code in the question but by an error in their geometry function. This answer only addresses the question "Is there a way to debug shader values?", as this debugging method helped the OP find the problem.
Debugging shader code can be quite a challenging task, depending on what it is you need to debug, and there are multiple approaches to it. Personally the approach I like best is using colours.
If we break it down, there are three aspects of your code that could be faulty:
the value of input.positionWS.y
the if statement (input.positionWS.y < _SandStart)
returning your texture: return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation;
Let's walk down the list and test each individually.
First, check whether input.positionWS.y actually contains the value we expect. To do this we can set any of the RGB channels to its value and return that directly:
return float4(input.positionWS.y, 0, 0, 1);
Now if input.positionWS.y isn't a normalized value (i.e. a value that ranges from 0 to 1), this is almost guaranteed to return your texture as entirely red. To normalize it, we divide the value by its maximum; let's take max = 100 for the example.
return float4(input.positionWS.y / 100, 0, 0, 1);
This should now make the texture full red at the top (where input.positionWS.y / 100 is 1), black at the bottom (where input.positionWS.y / 100 is 0), and a gradient from black to full red in between. (Note that since it's a position in world space, you may need to move the object up or down to see the colour shift.) If this doesn't happen, for example if it always stays black or full red, then your issue is most likely input.positionWS.y itself.
Second, the if statement. It could be that the condition (input.positionWS.y < _SandStart) always evaluates to true or always to false, meaning it never splits. We can test this quite easily by commenting out the current texture returns and returning flat colours instead, like so:
if (input.positionWS.y < _SandStart)
{
    return float4(1, 0, 0, 1);
}
else
{
    return float4(0, 0, 1, 1);
}
If we verified input.positionWS.y in step 1, and _SandStart is set correctly, we should see the texture divided into a red part (where the condition is true) and a blue part (where it is false). (Again, since we're basing this on world position, we might need to change the object's height a bit to see it.) If this division in colours doesn't happen, the likely cause is that _SandStart isn't set properly or holds an incorrect value. (Assuming it is a property, you can inspect its value in the material editor.)
Third, if both of the above steps yield the expected result, then return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation; is possibly the culprit. To debug this, we can return one of the textures without the if statement and without shadowAttenuation, check that it is applied, and then return the other texture by changing which line is commented out:
return tex2D(_MainTex, input.uv);
//return tex2D(_SandTex, input.uv);
If each of these textures is applied properly on its own, then it is unlikely that this was the cause, leaving either the shadowAttenuation (just add the multiplication back to the test above) or something different altogether that isn't covered by the code in your question.
Bonus round: if you have a shader property you want to debug, you can actually do this from C# as well using the material.Get<type> functions (the supported types can be found in the docs and include the array variants too, as well as both Get and Set). A small example:
Properties
{
_Foo ("Foo", Float) = 2
_Bar ("Bar", Color) = (1,1,1,1)
}
can be debugged from C# using
Material mat = GetComponent<Renderer>().material; // the material instance on this object's renderer
Debug.LogFormat("_Foo value: {0}", mat.GetFloat("_Foo")); // prints 2
Debug.LogFormat("_Bar value: {0}", mat.GetColor("_Bar")); // prints RGBA(1.000, 1.000, 1.000, 1.000)

Unity is returning material color slightly wrong

I have this mini task in my game where you need to click trophies to change the color of the wood on them. I have two arrays of colors: one contains all possible colors, and the other contains four colors (the answer), as follows:
I've double-checked that the colors are equal between the two arrays. For example, the purple in the Colors array has exactly the same r, g, b & a values as the purple in the Right Order array.
To check whether the trophies have the correct color, I just loop through them and grab their material color. Then I check that color against the Right Order array, but it's not quite working. For example, when my first trophy is purple it should be correct, but it isn't, because for some reason Unity returns a slightly different material color than expected:
Hope somebody knows why this is happening.
When you say they are exactly the same color, I assume you are referring to the RGB values in the Color inspector, which are not precise values.
I don't know what is causing the difference in color values, but you can write an extension method to compare the channels after rounding them to the closest integer.
public static class Extensions
{
public static bool CompareRGB(this Color thisColor, Color otherColor)
{
return
Mathf.RoundToInt(thisColor.r * 255) == Mathf.RoundToInt(otherColor.r * 255) &&
Mathf.RoundToInt(thisColor.b * 255) == Mathf.RoundToInt(otherColor.b * 255) &&
Mathf.RoundToInt(thisColor.g * 255) == Mathf.RoundToInt(otherColor.g * 255);
}
}
usage:
Color red = Color.red;
red.CompareRGB(Color.red); // true
red.CompareRGB(Color.green); // false
Hope this helps.
I would use a palette. This is simply an array of all the possible colors you use (it sounds like you already have this). For each "trophy", record the INDEX into this array at the same time you assign the color to the renderer; do the same for each "button".
Then you can simply compare the palette index values (plain integers) to see if the colors match, as sketched below.
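The idea is engine-agnostic; here is a minimal, self-contained sketch in Swift (the names and the string stand-ins for Color values are hypothetical; in Unity you would keep the indices alongside your trophy and button objects):

// The palette holds every colour the game can use; everything else stores indices.
let palette = ["purple", "red", "green", "blue"]   // stand-ins for actual Color values

// Recorded when each trophy's colour is assigned, instead of read back from the material later.
let trophyColorIndices = [0, 2, 3, 1]

// The answer ("Right Order"), also stored as palette indices.
let rightOrderIndices = [0, 2, 3, 1]

// Comparing plain integers sidesteps float-precision issues entirely.
let allCorrect = trophyColorIndices == rightOrderIndices
print(allCorrect) // true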

How to draw concave shape using Stencil test on Metal

This is the first time I'm trying to use the stencil test. I have seen some examples using OpenGL and a few using Metal, but those focus on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up in Metal.
I want to draw irregular shapes. For the sake of simplicity lets consider the following 2D polygon:
I want the stencil to pass where the number of overlapping triangles is odd, so that I can reach something like this, where the white area is the area to be ignored:
I'm doing the following steps in the exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = .allZeros
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: drawable.texture.width, height: drawable.texture.height, mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)
let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get looks as if there were no stencil at all. I can see some differences when changing the settings of the depth test, but nothing happens when changing the settings of the stencil.
Any clue?
Thank you in advance
You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertices and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil-only pass, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of an even or odd count. Because neither the compare function nor the operation involves the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).
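Here is a minimal sketch of those two depth-stencil states (assuming the same device as in the question; disabling colour writes during the first pass would be done via the writeMask on the render pipeline's colour attachment):

// Pass 1: stencil-only fill. Every drawn triangle toggles the low bit,
// leaving 1 where a pixel is covered an odd number of times.
let fillStencil = MTLStencilDescriptor()
fillStencil.stencilCompareFunction = .always      // no test; every fragment passes
fillStencil.depthStencilPassOperation = .invert   // toggle the low bit
fillStencil.readMask = 0x1
fillStencil.writeMask = 0x1

let fillDescriptor = MTLDepthStencilDescriptor()
fillDescriptor.depthCompareFunction = .always
fillDescriptor.isDepthWriteEnabled = false
fillDescriptor.frontFaceStencil = fillStencil
fillDescriptor.backFaceStencil = fillStencil      // same behaviour for both windings
let fillState = device.makeDepthStencilState(descriptor: fillDescriptor)

// Pass 2: colour render, kept only where the stencil ended up equal to 1.
// The stencil attachment's loadAction must be .load here, not .clear.
let drawStencil = MTLStencilDescriptor()
drawStencil.stencilCompareFunction = .equal       // compare against the reference value
drawStencil.stencilFailureOperation = .keep
drawStencil.depthFailureOperation = .keep
drawStencil.depthStencilPassOperation = .keep
drawStencil.readMask = 0x1
drawStencil.writeMask = 0x1

let drawDescriptor = MTLDepthStencilDescriptor()
drawDescriptor.depthCompareFunction = .always
drawDescriptor.isDepthWriteEnabled = false
drawDescriptor.frontFaceStencil = drawStencil
drawDescriptor.backFaceStencil = drawStencil
let drawState = device.makeDepthStencilState(descriptor: drawDescriptor)

// In the second pass's encoder:
// renderEncoder.setDepthStencilState(drawState)
// renderEncoder.setStencilReferenceValue(1)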

How to add a set of sprites in a certain pattern

I'm making a game that has a player that goes up and down if you hold the screen. This is not the important part though.
What I need is to add ENEMIES, that come toward you.
I need to know how to add the ENEMIES in a couple of different patterns.
Like this: (LOOK AT THE COINS PATTERN, HOW CAN I ACHIEVE THIS?)
You could define a 2-dimensional array to indicate where a coin should be e.g.
var coinRow = [[Int]]()
coinRow.append([0,1,1,1,1,1,1,0]) // '0' means 'No coin here'
coinRow.append([1,1,1,1,1,1,1,1]) // '1' means 'put coin here'
coinRow.append([0,1,1,1,1,1,1,0])
Then treat each coin 'area' as a 3x8 grid. Given a starting location with the bottom-left corner at (0,0), do the following:
let coinStart = CGPoint(x: 0, y: 0)
var coinPos = coinStart
for row in 0...2 {                         // Iterate over all rows
    for column in 0...7 {                  // and all columns
        if coinRow[row][column] == 1 {     // Should there be a coin here?
            putCoin(at: coinPos)           // Yes - draw one
        }
        coinPos.x += coin.width + coinHorizontalSeparation  // Next coin location
    }
    coinPos.y += coin.height + coinVerticalSeparation       // Move to the next row
    coinPos.x = coinStart.x                                 // Reset position to start of row
}
You wouldn't actually start at (0,0), so set coinStart as required. If the groups of coins appear in a regular pattern, you can calculate coinStart and turn the code that generates a block of coins into a function that you call, passing coinStart as a parameter, as sketched below.
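A minimal sketch of that function (putCoin, coin, and the separation constants are assumed from the snippet above; the call-site coordinates are arbitrary examples):

// Hypothetical wrapper: draws one block of coins with its bottom-left corner at coinStart.
func addCoinBlock(at coinStart: CGPoint, pattern: [[Int]]) {
    var coinPos = coinStart
    for row in pattern {
        for cell in row {
            if cell == 1 {
                putCoin(at: coinPos)
            }
            coinPos.x += coin.width + coinHorizontalSeparation
        }
        coinPos.y += coin.height + coinVerticalSeparation
        coinPos.x = coinStart.x   // Back to the start of the row
    }
}

// Usage: one call per group of coins, wherever the pattern should appear.
addCoinBlock(at: CGPoint(x: 200, y: 80), pattern: coinRow)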

World.QueryAABB giving incorrect results in libgdx

I'm trying to implement mouse selection for my game. When I QueryAABB, it looks like it's treating objects as much larger than they really are.
Here's what's going on in the image
The blue box is an actor containing a body that I'd like to select
The outline on the blue box is drawn by Box2DDebugRenderer
The mouse selects a region on the screen (white box), this is entirely graphical
The AABB is converted to meters and passed to QueryAABB
The callback was called for the blue box and turned it red
The green outline left behind is a separate body to check if my conversions were correct, this is not used for the actual selection process
It seems to be connected to my meter size, the larger it is, the more inaccurate the result is. At 1 meter = 1 pixel it works perfectly.
Meter conversions
val MetersToPixels = 160f
val PixelsToMeters = 1/MetersToPixels
def toMeters(n: Float) = n * PixelsToMeters
def toPixels(n: Float) = n * MetersToPixels
In the image I'm using MetersToPixels = 160f so the inaccuracy is more visible, but I really want MetersToPixels = 16f.
Relevant selection code
val x1 = selectPos.x
val y1 = selectPos.y
val x2 = getX
val y2 = getY + getHeight
val (l,r) =
if (x2 < x1)
(x2,x1)
else
(x1,x2)
val (b,t) =
if (y2 < y1)
(y2,y1)
else
(y1,y2)
world.QueryAABB(selectCallback, toMeters(l),toMeters(b), toMeters(r),toMeters(t))
This code is inside the act method of my CursorActor class. selectPos represents the initial point where the user pressed down the left mouse button, and getX and getY are Actor methods giving the current position. The next bit sorts them because they might be out of order. Then they are converted to meters, because they are all in pixel units.
selectCallback: QueryCallback
override def reportFixture(fixture: Fixture): Boolean = {
fixture.getBody.getUserData match {
case selectable: Selectable =>
selected += selectable
true
case _ => true
}
}
Selectable is a trait that sets a boolean flag internally after the query, which helps determine the color of the blue box. And selected is a mutable.HashSet[Selectable] defined inside CursorActor.
Other things possibly worth noting
I'm new to libgdx and box2d.
The camera is scaled x2
My Box2DDebugRenderer uses the camera's combined matrix multiplied by MetersToPixels
From what I was able to gather, QueryAABB is deliberately imprecise as an optimization. However, I've hit a roadblock with libgdx, because it doesn't expose any function like b2TestOverlap, and from what I understand there's no plan for one any time soon.
I think my best solution would probably be to use jbox2d and pretend that libgdx's physics implementation doesn't exist.
Or, as noone suggested, I could add it to libgdx myself.
UPDATE
I decided to go with a simple solution: gather the vertices from the fixture's shape and use com.badlogic.gdx.math.Intersector against the vertices of the selection. It works, I guess. I may stop using QueryAABB altogether if I decide to switch to using a sensor for the select box.