Get first number of type CGFloat - swift

I have the following numbers as CGFloat values:
375.0
637.0
995.0
I need to get the first digit as a CGFloat. For example, the result for #1 must be 3.0, for #2 it must be 6.0, and for #3 it must be 9.0.
I tried the following:
let width:CGFloat = 375.0
// Convert Float To String
let widthInStringFormat = String(describing: width)
// Get First Character Of The String
let firstCharacter = widthInStringFormat.first
// Convert Character To String
let firstCharacterInStringFormat = String(describing: firstCharacter)
// Convert String To CGFloat
//let firstCharacterInFloat = (firstCharacter as NSString).floatValue
//let firstCharacterInFloat = CGFloat(firstCharacter)
//let firstCharacterInFloat = NumberFormatter().number(from: firstCharacter)
Nothing seems to work here. Where am I going wrong?
Update
To answer @Martin R, find my explanation below.
I am implementing a grid view (like the Photos app) using UICollectionView. I want the cells to be resized based on screen size for iPhone/iPad, portrait and landscape. Basically I don't want fixed columns: I need more columns for larger screen sizes and fewer columns for smaller screen sizes. I figured that perhaps I can decide based on screen width. For example, if the screen width is 375.0 then display 3 columns, if somewhere around 600 then display 6 columns, if around 1000 then display 10 columns, and so on with equal width. So what I came up with is: (a) decide the number of columns based on the first digit of the screen width, and then (b) divide the actual screen width by that column count. For example, for a screen width of 375.0 I will have a cell size of CGSize(width: screenWidth / totalColumn) and so on.

You said:
For example if the screen width is 375.0 then display 3 columns, If somewhere around 600 then display 6 columns, if around 1000 then display 10 columns and so on with equal width.
So what you really want is not the first digit of the width (which would, for example, be 1 for width = 1024 instead of the desired 10), but the width divided by 100 and rounded down to the next integral value:
let numColumns = (width / 100.0).rounded(.down)
Or, as an integer value:
let numColumns = Int(width / 100.0)
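For completeness, here is a rough sketch (not part of the original answer) of how that column count could drive the cell size described in the question's update; it assumes the standard UICollectionViewDelegateFlowLayout sizing method, shown here as a standalone function:
import UIKit

// Sketch: derive the column count from the collection view's current width
// (375 -> 3 columns, 637 -> 6, 1024 -> 10) and return equal-width square cells.
func collectionView(_ collectionView: UICollectionView,
                    layout collectionViewLayout: UICollectionViewLayout,
                    sizeForItemAt indexPath: IndexPath) -> CGSize {
    let width = collectionView.bounds.width
    let numColumns = max(1, Int(width / 100.0))
    let cellWidth = width / CGFloat(numColumns)
    return CGSize(width: cellWidth, height: cellWidth)
}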

var floatNum: CGFloat = 764.34
var numberNeg = false
if floatNum < 0 {
    floatNum = -1.0 * floatNum
    numberNeg = true
}
var intNum = Int(floatNum)
print(intNum) // 764
while intNum > 9 {
    intNum /= 10
}
floatNum = CGFloat(intNum)
if numberNeg {
    floatNum = -1.0 * floatNum
}
print(intNum)   // 7
print(floatNum) // 7.0
Try this one... I hope it'll work.
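A more compact alternative (a sketch, not part of the answer above) goes through the String representation of the integer part; Character.wholeNumberValue turns the first character back into a digit:
import CoreGraphics

let width: CGFloat = 375.0
// "375" -> "3" -> 3 -> 3.0; abs() guards against negative widths
let firstDigit = String(Int(abs(width))).first?.wholeNumberValue ?? 0
let firstDigitAsCGFloat = CGFloat(firstDigit)
print(firstDigitAsCGFloat) // 3.0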


Calculating physical distance scrolled from scrollWheel NSEvent

Making my first Swift app for macOS. Learning as I go...
I'm trying to make an app that calculates the total physical distance scrolled from a scrollwheel NSEvent handler. I'm using this code to attach the event handler:
NSEvent.addGlobalMonitorForEvents(matching: NSEvent.EventTypeMask.scrollWheel, handler: self.scrollWheel);
I'm getting the absolute deltaY value in the event handler by using let deltaY = abs(event.scrollingDeltaY)
I'm guessing the deltaY is reported in points, but how do I translate it to a physical distance (in inches)?
I'm manually calculating the ppi for the user's device using this code:
let width = NSScreen.main?.frame.width ?? 0
let height = NSScreen.main?.frame.height ?? 0
let diagonal = sqrt(pow(width, 2) + pow(height, 2))
let pixelsPerInch = ((width / CGDisplayScreenSize(CGMainDisplayID()).width) * 25.4) * CGFloat(NSScreen.main?.backingScaleFactor ?? 0)
I'm using this formula to calculate the physical distance scrolled (in inches).
let distance = deltaY * 1 / pixelsPerInch * 40
The formula is hardcoded and obviously wrong. The multiplication by 40 is an approximation, as I have no idea if I'm on the right track. I'm looking for the right formula.
Any ideas?
Thanks in advance!
I believe you're close.
CGDisplayScreenSize(CGMainDisplayID()).width gives you the physical width in mm.
CGDisplayBounds(CGMainDisplayID()).width gives you the width in pixels.
And 1 inch = 25.4 mm
So:
let pixelsPerInch = CGDisplayBounds(CGMainDisplayID()).width / (CGDisplayScreenSize(CGMainDisplayID()).width / 25.4)
...and...
let distance = deltaY / pixelsPerInch
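Put together, a minimal sketch of the whole pipeline (untested, and it assumes scrolling happens on the main display):
import Cocoa

// Physical distance scrolled, per the formula above: points scrolled divided by pixels-per-inch.
func physicalScrollDistanceInInches(for event: NSEvent) -> CGFloat {
    let displayID = CGMainDisplayID()
    let widthInPixels = CGDisplayBounds(displayID).width      // display width in pixels
    let widthInMM = CGDisplayScreenSize(displayID).width      // physical display width in mm
    let pixelsPerInch = widthInPixels / (widthInMM / 25.4)    // 1 inch = 25.4 mm
    return abs(event.scrollingDeltaY) / pixelsPerInch
}

// Usage, mirroring the question's global monitor (keep a reference to the monitor object):
let monitor = NSEvent.addGlobalMonitorForEvents(matching: .scrollWheel) { event in
    print("scrolled \(physicalScrollDistanceInInches(for: event)) inches")
}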

How to make horizontal padding in iOS danielgindi Charts

I'm using the ios-charts library and I would like to add some horizontal padding to my line charts so that the line does not start immediately at the border of the graph.
This is my current chart:
but I would like the blue line to have some padding as shown below. The rest should remain as it is. The reference gray lines should still take the entire width as they currently do.
I found it. This "padding" is actually ruled by chart.xAxis.axisMinimum and chart.xAxis.axisMaximum. Those values are automatically set to the data's min x and max x.
So if I want left padding I just have to set chart.xAxis.axisMinimum.
In my case, I want around 10% of the x range as padding, so I calculate it as:
// dates is an array of Date representing my x values
if let maxX = dates.map(\.timeIntervalSince1970).max(),
   let minX = dates.map(\.timeIntervalSince1970).min() {
    let spanX = maxX - minX
    let padding = spanX * 0.1
    let axisMinimum = minX - padding
    // set the left padding
    chart.xAxis.axisMinimum = axisMinimum
}
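Since the right-hand side is ruled by chart.xAxis.axisMaximum in the same way, padding both ends is a small extension of the snippet above (a sketch, using the same dates and chart):
if let maxX = dates.map(\.timeIntervalSince1970).max(),
   let minX = dates.map(\.timeIntervalSince1970).min() {
    let padding = (maxX - minX) * 0.1
    chart.xAxis.axisMinimum = minX - padding   // left padding
    chart.xAxis.axisMaximum = maxX + padding   // right padding
}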

Manually write world file (jgw) from Leaflet.js map

I have the need to export georeferenced images from Leaflet.js on the client side. Exporting an image from Leaflet is not a problem as there are plenty of existing plugins for this, but I'd like to include a world file with the export so the resulting image can be read into GIS software. I have a working script for this, but I can't seem to nail down the correct parameters for my world file such that the resulting georeferenced image is positioned exactly correctly.
Here's my current script
// map is a Leaflet map object
let bounds = map.getBounds(); // Leaflet LatLngBounds
let topLeft = bounds.getNorthWest();
let bottomRight = bounds.getSouthEast();
let width_deg = bottomRight.lng - topLeft.lng;
let height_deg = topLeft.lat - bottomRight.lat;
let width_px = $(map._container).width() // Width of the map in px
let height_px = $(map._container).height() // Height of the map in px
let scaleX = width_deg / width_px;
let scaleY = height_deg / height_px;
let jgwText = `${scaleX}
0
0
-${scaleY}
${topLeft.lng}
${topLeft.lat}`
This seems to work well at large scales (i.e. zoomed in to city level or so), but at smaller scales there is some distortion along the y-axis. One thing I noticed is that all examples of world files I can find (and those produced by QGIS or ArcMap) have x-scale and y-scale parameters that are exactly equal in magnitude (oppositely signed). In my calculations, these terms are different unless you are sitting right on the equator.
Example world file produced from QGIS
0.08984380916303301 // x-scale (size of px in x direction)
0 // rotation parameter 1
0 // rotation parameter 2
-0.08984380916303301 // y-scale (size of px in y direction)
-130.8723208723141056 // x-coord of top left px
51.73651369984968085 // y-coord of top left px
Example world file produced from my calcs
0.021972656250000017
0
0
-0.015362443783773333
-130.91308593750003
51.781435604431195
Example of produced image using my calcs with correct state boundaries overlaid:
Does anyone have any idea what I'm doing wrong here?
Problem was solved by using EPSG:3857 for the worldfile, and ensuring the width and height of the map bounds were also measured in this coordinate system. I had tried using EPSG:3857 for the worldfile, but measured the width and height of the map bounds using Leaflet's L.map.distance() function. To solve the problem, I instead projected the corner points of the map bounds to EPSG:3857 using L.CRS.EPSG3857.project(), then simply subtracted the X,Y values.
Corrected code is shown below, where map is a Leaflet map object (L.map)
// Get map bounds and corner points in 4326
let bounds = map.getBounds();
let topLeft = bounds.getNorthWest();
let bottomRight = bounds.getSouthEast();
let topRight = bounds.getNorthEast();
// get width and height in px of the map container
let width_px = $(map._container).width()
let height_px = $(map._container).height()
// project corner points to 3857
let topLeft_3857 = L.CRS.EPSG3857.project(topLeft)
let topRight_3857 = L.CRS.EPSG3857.project(topRight)
let bottomRight_3857 = L.CRS.EPSG3857.project(bottomRight)
// calculate width and height in meters using epsg:3857
let width_m = topRight_3857.x - topLeft_3857.x
let height_m = topRight_3857.y - bottomRight_3857.y
// calculate the scale in x and y directions in meters (this is the width and height of a single pixel in the output image)
let scaleX_m = width_m / width_px
let scaleY_m = height_m / height_px
// worldfiles need the CENTRE of the top left px; what we currently have is the TOP LEFT corner of the px.
// Adjust by half a pixel: move right by half a pixel width and down by half a pixel height
let topLeftCenterPxX = topLeft_3857.x + (scaleX_m / 2)
let topLeftCenterPxY = topLeft_3857.y - (scaleY_m / 2)
// format the text of the worldfile
let jgwText = `${scaleX_m}
0
0
-${scaleY_m}
${topLeftCenterPxX}
${topLeftCenterPxY}`
For anyone else with this problem, you'll know things are correct when your scale-x and scale-y values are exactly equal (but oppositely signed)!
Thanks @IvanSanchez for pointing me in the right direction :)

CIConvolution chain leads to integer overflow

I've been doing some work with Core Image's convolution filters and I've noticed that sufficiently long chains of convolutions lead to unexpected outputs that I suspect are the result of numerical overflow on the underlying integer, float, or half float type being used to hold the pixel data. This is especially unexpected because the documentation says that every convolution's output value is "clamped to the range between 0.0 and 1.0", so ever larger values should not accumulate over successive passes of the filter but that's exactly what seems to be happening.
I've got some sample code here that demonstrates this surprising behavior. You should be able to paste it as is into just about any Xcode project, set a breakpoint at the end of it, run it on the appropriate platform (I'm using an iPhone Xs, not a simulator), and then when the break occurs use Quick Look to inspect the filter chain.
import CoreImage
import CoreImage.CIFilterBuiltins
// --------------------
// CREATE A WHITE IMAGE
// --------------------
// the desired size of the image
let size = CGSize(width: 300, height: 300)
// create a pixel buffer to use as input; every pixel is bgra(0,0,0,0) by default
var pixelBufferOut: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height), kCVPixelFormatType_32BGRA, nil, &pixelBufferOut)
let input = pixelBufferOut!
// create an image from the input
let image = CIImage(cvImageBuffer: input)
// create a color matrix filter that will turn every pixel white
// bgra(0,0,0,0) becomes bgra(1,1,1,1)
let matrixFilter = CIFilter.colorMatrix()
matrixFilter.biasVector = CIVector(string: "1 1 1 1")
// turn the image white
matrixFilter.inputImage = image
let whiteImage = matrixFilter.outputImage!
// the matrix filter sets the image's extent to infinity
// crop it back to original size so Quick Looks can display the image
let cropped = whiteImage.cropped(to: CGRect(origin: .zero, size: size))
// ------------------------------
// CONVOLVE THE IMAGE SEVEN TIMES
// ------------------------------
// create a 3x3 convolution filter with every weight set to 1
let convolutionFilter = CIFilter.convolution3X3()
convolutionFilter.weights = CIVector(string: "1 1 1 1 1 1 1 1 1")
// 1
convolutionFilter.inputImage = cropped
let convolved = convolutionFilter.outputImage!
// 2
convolutionFilter.inputImage = convolved
let convolvedx2 = convolutionFilter.outputImage!
// 3
convolutionFilter.inputImage = convolvedx2
let convolvedx3 = convolutionFilter.outputImage!
// 4
convolutionFilter.inputImage = convolvedx3
let convolvedx4 = convolutionFilter.outputImage!
// 5
convolutionFilter.inputImage = convolvedx4
let convolvedx5 = convolutionFilter.outputImage!
// 6
convolutionFilter.inputImage = convolvedx5
let convolvedx6 = convolutionFilter.outputImage!
// 7
convolutionFilter.inputImage = convolvedx6
let convolvedx7 = convolutionFilter.outputImage!
// <-- put a breakpoint here
// when you run the code you can hover over the variables
// to see what the image looks like at various stages through
// the filter chain; you will find that the image is still white
// up until the seventh convolution, at which point it turns black
Further evidence that this is an overflow issue is that if I use a CIContext to render the image to an output pixel buffer, I have the opportunity to set the actual numerical type used during the render via the CIContextOption.workingFormat option. On my platform the default value is CIFormat.RGBAh which means each color channel uses a 16 bit float. If instead I use CIFormat.RGBAf which uses full 32 bit floats this problem goes away because it takes a lot more to overflow 32 bits than it does 16.
Is my insight into what's going on here correct or am I totally off? Is the documentation about clamping wrong or is this a bug with the filters?
It seems the documentation is outdated. Maybe it comes from a time when Core Image used 8-bit unsigned byte texture formats by default on iOS, because those are clamped between 0.0 and 1.0.
With the float-typed formats, the values aren't clamped anymore and are stored as returned by the kernel. And since you started with white (1.0) and applied 7 consecutive convolutions with unnormalized weights (1 instead of 1/9), you end up with values of 9^7 = 4,782,969 per channel, which is outside the 16-bit float range.
To avoid something like that, you should normalize your convolution weights so that they sum up to 1.0.
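With the 3x3 filter from the question, normalized weights might look like this (a sketch; the rest of the chain stays the same):
import CoreImage
import CoreImage.CIFilterBuiltins

// Each of the nine weights is 1/9, so the kernel output stays in the same range
// as its input and repeated passes no longer blow up the pixel values.
let convolutionFilter = CIFilter.convolution3X3()
let normalizedWeights = [CGFloat](repeating: 1.0 / 9.0, count: 9)
convolutionFilter.weights = CIVector(values: normalizedWeights, count: 9)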
By the way: to create a white image of a certain size, simply do this:
let image = CIImage(color: .white).cropped(to: CGRect(origin: .zero, size: CGSize(width: 300, height: 300)))
🙂

Smoothen Edges of Chroma Key -CoreImage

I'm using the following code to remove a green background from an image. But the edges of the image have a green tint and some pixels are damaged. How can I smooth this and make the cutout clean?
func chromaKeyFilter(fromHue: CGFloat, toHue: CGFloat) -> CIFilter? {
    // 1
    let size = 64
    var cubeRGB = [Float]()
    // 2
    for z in 0 ..< size {
        let blue = CGFloat(z) / CGFloat(size - 1)
        for y in 0 ..< size {
            let green = CGFloat(y) / CGFloat(size - 1)
            for x in 0 ..< size {
                let red = CGFloat(x) / CGFloat(size - 1)
                // 3
                // getHue is the asker's helper (not shown) that converts RGB to a hue value
                let hue = getHue(red: red, green: green, blue: blue)
                let alpha: CGFloat = (hue >= fromHue && hue <= toHue) ? 0 : 1
                // 4
                cubeRGB.append(Float(red * alpha))
                cubeRGB.append(Float(green * alpha))
                cubeRGB.append(Float(blue * alpha))
                cubeRGB.append(Float(alpha))
            }
        }
    }
    // the post breaks off here; presumably a CIColorCube filter is built from
    // cubeRGB and returned, roughly like this:
    let data = cubeRGB.withUnsafeBufferPointer { Data(buffer: $0) }
    return CIFilter(name: "CIColorCube",
                    parameters: ["inputCubeDimension": size, "inputCubeData": data])
}
@IBAction func clicked(_ sender: Any) {
    let a = URL(fileURLWithPath: "green.png")
    let b = URL(fileURLWithPath: "back.jpg")
    let image1 = CIImage(contentsOf: a)
    let image2 = CIImage(contentsOf: b)
    let chromaCIFilter = self.chromaKeyFilter(fromHue: 0.3, toHue: 0.4)
    chromaCIFilter?.setValue(image1, forKey: kCIInputImageKey)
    let sourceCIImageWithoutBackground = chromaCIFilter?.outputImage
    /*let compositor = CIFilter(name: "CISourceOverCompositing")
    compositor?.setValue(sourceCIImageWithoutBackground, forKey: kCIInputImageKey)
    compositor?.setValue(image2, forKey: kCIInputBackgroundImageKey)
    let compositedCIImage = compositor?.outputImage*/
    let rep = NSCIImageRep(ciImage: sourceCIImageWithoutBackground!)
    let nsImage = NSImage(size: rep.size)
    nsImage.addRepresentation(rep)
    let url = URL(fileURLWithPath: "file.png")
    nsImage.pngWrite(to: url)
    super.viewDidLoad()
}
Input:
Output:
Update:
Update 2:
Update 3:
Professional tools for chroma keying usually include what's called a spill suppressor. A spill suppressor finds pixels that contain small amounts of the chroma key color and shifts the color in the opposite direction. So green pixels will move towards magenta. This reduces the green fringing you often see around keyed footage.
The pixels you call damaged are just pixels that had some level of the chroma color in them and are being picked up by your keyer function. Rather than choosing a hard 0 or 1, you might consider a function that returns a value between 0 and 1 based on the color of the pixel. For example, you could find the angular distance of the current pixel's hue to the fromHue and toHue and maybe do something like this:
// Get the distance from the edges of the range, and convert to be between 0 and 1
var distance: CGFloat
if (fromHue <= hue) && (hue <= toHue) {
    distance = min(abs(hue - fromHue), abs(hue - toHue)) / ((toHue - fromHue) / 2.0)
} else {
    distance = 0.0
}
distance = 1.0 - distance
let alpha = sin(.pi * distance - .pi / 2.0) * 0.5 + 0.5
That will give you a smooth variation from the edges of the range to the center of the range. (Note that I've left off dealing with the fact that hue wraps around at 360°. That's something you'll have to handle.) The graph of the falloff looks like this:
Another thing you can do is limit the keying to only affect pixels where the saturation is above some threshold and the value is above some threshold. For very dark and/or unsaturated colors, you probably don't want to key it out. I think that would help with the issues you're seeing with the model's jacket, for example.
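Plugged into the question's cube loop, the falloff plus the saturation/value cutoff might look roughly like this (a sketch; getHue, fromHue and toHue are the question's own names, and the 0.2 thresholds are arbitrary values to tune):
// Inside the x/y/z loops of chromaKeyFilter, instead of the hard 0-or-1 alpha:
let hue = getHue(red: red, green: green, blue: blue)
var alpha: CGFloat = 1.0
if (fromHue <= hue) && (hue <= toHue) {
    // distance from the nearest edge of the keyed hue range, scaled to 0...1
    var distance = min(abs(hue - fromHue), abs(hue - toHue)) / ((toHue - fromHue) / 2.0)
    distance = 1.0 - distance
    alpha = sin(.pi * distance - .pi / 2.0) * 0.5 + 0.5
}
// Skip keying for very dark or unsaturated colors
let value = max(red, green, blue)
let saturation = value == 0 ? 0 : (value - min(red, green, blue)) / value
if saturation < 0.2 || value < 0.2 {
    alpha = 1.0
}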
My (live) keyer works like this (with the enhancements user1118321 describes) and using its analyser I quickly noticed this is most likely not a true green screen image. It's one of many fake ones where the green screen seems to have been replaced with a saturated monochrome green. Though this may look nice, it introduces artefacts where the keyed subject (with fringes of the originally used green) meets the monochrome green.
You can see that a single green was used by looking at the histogram. Real green screens always have (in)visible shades of green. I was able to get a decent key but had to manually tweak some settings. With an NLE you can probably get much better results, but they will also be a lot more complex.
So to get back to your issue: your code probably works as it is now (update #3); you just have to use a proper real-life green screen image.