Swift: Size of UIImage when slicing enabled

When I slice an image in the asset catalog in Xcode, the size property of the UIImage returns the wrong value. Does anyone know how to get the real image size when slicing is enabled? It doesn't matter whether the size is in pixels or in points.
Or maybe there is a way to get the size of the image's rounded part, so that I could add the cap insets in code?
This is how I slice the image in Xcode:
Or, maybe there is a way to find the size of the highlighted (darker) part of the image?
I would appreciate any ideas!

When "slicing" an image in assets, you don't need to set the cap insets -- it's done automatically.
If you really need to get the "slice" values, you can read them via the .capInsets property:
guard let img = UIImage(named: "sliced") else { return }
print("image size:", img.size)
print("cap insets:", img.capInsets)
My quick test - a 200 x 60 "capsule" image with a 20-pixel corner radius, using the default slicing - outputs:
image size: (39.0, 60.0)
cap insets: UIEdgeInsets(top: 0.0, left: 19.0, bottom: 0.0, right: 19.0)
So the width is the left inset plus the right inset plus 1 (one). Because the center portion stretches to fit, the "default" width of that section is 1.
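To illustrate that arithmetic (a language-neutral sketch using the numbers from the test above, not API code): the reported width collapses the stretchable center section to a single point, so it is just the two insets plus one:

```python
# Numbers from the 200 x 60 capsule test above: default slicing
# produced left/right insets of 19 points each.
left, right = 19.0, 19.0

# UIImage.size reports the two fixed caps plus a 1-point stretchable center.
reported_width = left + right + 1  # 39.0, matching the printed size
```

The original 200-point width is not recoverable from the sliced UIImage alone; only the caps and the 1-point center survive.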

Related

Draw images on a canvas next to each other without space in flutter

I'm creating a game in Flutter. I'm using a tilesheet to render tiles. I chop the image into 50x50 sections and then render them depending on their surroundings.
The tiles are rendered at the position of their corresponding "game object". In order to keep the code clean of details about converting from game-world positions and sizes to actual screen sizes, my painting classes always receive world-space values and then scale them up to screen space.
However, after scaling up the tiles I'm sometimes left with gaps between the tiles.
I've tried:
Placing the tiles at their appropriate screen position, without scaling the canvas first, in case the scaling produces the problem.
Adding borders around the tiles in the tilesheet in case the canvas.drawImageRect samples too many pixels.
Making the board take a size divisible by the number of tiles, in case there is a rounding error.
Drawing a subsection of the tile.
I can't seem to make the gap disappear. Do you have any suggestions?
Below is the relevant drawing code, where size and position are in world space, and frameX and frameY are the coordinates to extract from the spritesheet.
void drawFrameAt(int x, int y, Canvas canvas, Offset position, Size size) {
  // Source rect: the (x, y) tile within the spritesheet.
  final frameX = x * frameWidth;
  final frameY = y * frameHeight;
  final rect = Rect.fromLTWH(frameX, frameY, frameWidth, frameHeight);
  // Destination rect at the tile's position and size.
  final destination =
      Rect.fromLTWH(position.dx, position.dy, size.width, size.height);
  canvas.drawImageRect(image, rect, destination, Paint());
}
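One common cause of such gaps (an assumption, not something confirmed in this thread) is that each destination rect is scaled and rounded independently, leaving sub-pixel seams between neighbors. A language-neutral sketch of one fix: compute each tile's edges from the grid indices and snap them to whole pixels, so adjacent tiles share exactly the same edge:

```python
def snapped_tile_rect(col, row, tile_size, scale):
    # Derive each edge from the grid index, then round to whole device
    # pixels. Because neighbors compute the shared edge from the same
    # expression, they land on the same pixel and no seam appears.
    left = round(col * tile_size * scale)
    top = round(row * tile_size * scale)
    right = round((col + 1) * tile_size * scale)
    bottom = round((row + 1) * tile_size * scale)
    return (left, top, right - left, bottom - top)

# With an awkward scale factor, tile widths differ by a pixel, but the
# tiles still meet exactly: tile 0's right edge equals tile 1's left edge.
r0 = snapped_tile_rect(0, 0, 50, 1.37)
r1 = snapped_tile_rect(1, 0, 50, 1.37)
```

The per-tile width varies by at most one pixel, which is invisible, whereas a recurring one-pixel gap is not.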

How to scale SCNNodes to fit in a box?

I have multiple Collada files with objects (humans) of various sizes, created from different 3D program sources. I want to scale the objects so they fit inside a frame or box. From my reading, I can't use the bounding box to scale the node, so what feature do you use to scale the nodes relative to each other?
// humanNode = { ...get node, which is some unknown size }
let (minBound, maxBound) = humanNode.boundingBox
let blockNode = SCNNode(geometry: SCNBox(width: 10, height: 10, length: 10, chamferRadius: 0))
// Calculate a scale factor so the humanNode fits inside the box
// without knowing its size beforehand.
let s = { ...some method to calculate the scale to fit the humanNode into the box }
humanNode.scale = SCNVector3Make(s, s, s)
How do I get its size relative to the literal box I want to put it in, and scale it accordingly?
Is it possible to draw the node off screen to measure its size?
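For what it's worth, a common approach (a sketch, not an answer confirmed in this thread) is to derive a uniform scale from the bounding-box extents returned by boundingBox, dividing the box side by the largest extent so the node fits in every dimension:

```python
def fit_scale(min_bound, max_bound, box_side):
    # min_bound/max_bound are the two corners of the node's axis-aligned
    # bounding box, as (x, y, z) tuples.
    extents = [mx - mn for mn, mx in zip(min_bound, max_bound)]
    # Uniform scale so the largest extent exactly fits the box side;
    # the other two axes then fit with room to spare.
    return box_side / max(extents)

# Illustrative bounds: a 2 x 4 x 1 "human" into a 10-unit box.
s = fit_scale((-1.0, 0.0, -0.5), (1.0, 4.0, 0.5), 10.0)
```

In SceneKit terms, s would then be applied as SCNVector3Make(s, s, s), as in the snippet above. No off-screen rendering is needed; boundingBox is available without drawing.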

Resize an image for Histogram of Gradient

I have an image that is 150 pixels high and 188 pixels wide. I'm going to calculate HOG on this image. As this descriptor needs the detector window (image) size to be 64x128, should I resize my image to 64x128 and then use this descriptor? Like this:
Image<Gray, Byte> GrayFrame = new Image<Gray, Byte>(_TrainImg.Size);
GrayFrame = GrayFrame.Resize(64, 128, INTER.CV_INTER_LINEAR);
My concern is that resizing may change the original gradient orientations, depending on how the image is resized, since we are ignoring the image's aspect ratio.
By the way, the image is cropped and I can't crop it any further. That is, this is the size of the image after cropping, and this is my final bounding box.
Unfortunately, the OpenCV HOGDescriptor documentation is missing.
In OpenCV you can change the values for the detection window, cell size, block stride and block size.
cv::HOGDescriptor hogDesc;
hogDesc.blockSize = cv::Size(2*binSize, 2*binSize);
hogDesc.blockStride = cv::Size(binSize, binSize);
hogDesc.cellSize = cv::Size(binSize, binSize);
hogDesc.winSize = cv::Size(imgWidth, imgHeight);
Then extract features using
std::vector<float> descriptors;
std::vector<cv::Point> locations;
hogDesc.compute(img, descriptors, cv::Size(0,0), cv::Size(0,0), locations);
Note:
I guess that the winSize has to be divisible by the blockSize, and the blockSize by the cellSize.
The size of the feature vector depends on all of these variables, so make sure to use images of the same size and keep the settings unchanged, to avoid trouble.

Changing this to a set pixel width

This is probably super easy, but I can't get it to work right. I'm trying to make a simple update to my iOS app by changing an image-resize option to a set pixel size. The code below is what I'm working with, and how it currently functions:
selectedWidth = (30 * selectedWidth)/100;
selectedHeight = (30 * selectedHeight)/100;
I assume this takes 30% of the original size? I just need the selected width to be 200px, and the height to be automatic. Can somebody please help me? I'm a newbie with this type of coding; my developer is away and I thought this would be an easy change, haha.
So if I understand you right, I think this might help
selectedHeight = (selectedHeight / selectedWidth) * 200;
selectedWidth = 200;
This should calculate the correct height for your 200px width. Note that the height must be computed before the width is overwritten, and if these are integer variables the division should be done in floating point (or multiply before dividing) to avoid truncation.
If you want to change the fixed width from time to time, you can define a constant in the .m file below the imports:
#define kWidth 200
and use it instead of the hard-coded 200 in the code above.
Example:
old width: 100px
old height: 50px
new width: 200px
new height: (50 / 100) * 200 = 100px
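The same aspect-ratio arithmetic as a language-neutral sketch (function and names are illustrative, not from the app):

```python
def resize_to_width(width, height, target_width=200):
    # Preserve the aspect ratio: scale the height by the same factor
    # that takes the width to target_width.
    new_height = round(height / width * target_width)
    return target_width, new_height

# Matches the worked example: 100x50 becomes 200x100.
result = resize_to_width(100, 50)
```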

Trim alpha from CGImage

I need to get a trimmed CGImage. I have an image with empty space (alpha = 0) around some colors, and I need to trim it to get the size of only the visible pixels.
Thanks.
There are three ways of doing this:
1) Use Photoshop (or your image editor of choice) to edit the image - I assume you can't do this; it's too obvious an answer!
2) Ignore it - why not just draw the image at its full size? It's transparent, so the user will never notice.
3) Write some code that goes through each pixel in the image until it gets to one with an alpha value > 0. This gives you the number of rows to trim from the top. However, this will slow down your UI, so you might want to do it on a background thread.
e.g.
// To get the number of fully transparent rows at the top of the image,
// assuming 32-bit RGBA pixels with alpha in the low byte.
// (Check bounds before dereferencing, and parenthesize the mask:
// & binds more loosely than ==.)
const uint32_t *p = start_of_image;
while (p < end_of_image_data && (*p & 0x000000ff) == 0) ++p;
uint number_of_transparent_rows_at_top = (p - start_of_image) / width_of_image;
When you know the amount of transparent space around the image, you can draw it with a UIImageView, set its contentMode to center, and let it do the trimming for you :)
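To trim all four sides rather than just the top, the same per-pixel scan can compute a full bounding box. A language-neutral sketch (the flat RGBA layout here is an assumption; real CGImage data needs its bytes-per-row and byte order taken into account):

```python
def opaque_bounding_box(pixels, width, height):
    # pixels: row-major list of (r, g, b, a) tuples.
    # Returns (min_x, min_y, max_x, max_y) covering all pixels with
    # alpha > 0, or None if the image is fully transparent.
    box = None
    for y in range(height):
        for x in range(width):
            if pixels[y * width + x][3] > 0:
                if box is None:
                    box = [x, y, x, y]
                else:
                    box[0] = min(box[0], x)
                    box[1] = min(box[1], y)
                    box[2] = max(box[2], x)
                    box[3] = max(box[3], y)
    return tuple(box) if box else None

# 3x3 fully transparent image with one opaque pixel in the center.
img = [(0, 0, 0, 0)] * 9
img[4] = (255, 0, 0, 255)
```

The resulting rect can then be handed to CGImageCreateWithImageInRect (or cropping(to:) in Swift) to produce the trimmed CGImage.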