Resizing an image generated by the PaintCode app - Swift

I have imported a vector image into the PaintCode app and then exported it to Swift code. I want to use this vector image in a small view (30x30), but since I want it to work on different devices, it needs to be size-independent.
The original size of the vector image is 512x512. When I add its class to a UIView, only a very small part of the vector image is visible.
I need to somehow resize the drawing so that it fits a frame of any size. I read somewhere that I have to draw a frame around the image in PaintCode; I did that, but nothing changed.

Start by selecting the "Frame" option from the toolbar.
Apply the frame to your canvas...
NB: if you mess up the frame, DELETE IT and start again; modifying the frame can change the underlying vector, which is annoying.
Apply the desired resize options. This can be confusing the first time.
I group all the elements into a single group. Select the group and, in the "box" next to the group's coordinates, change all the straight lines to "wiggly" lines. This gives PaintCode the greatest amount of flexibility when resizing the image...
Finally, change the export options. I tend to use both "Drawing" and "Image", as this provides the greatest amount of flexibility during development.
You should also look at Resizing Constraints, Resizing Drawing Methods and PaintCode Power User: Frames for more details.
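Once exported this way, the generated drawing method takes a target frame and a resizing behaviour, and you can wrap it in a small UIView subclass. A minimal sketch, assuming placeholder names (MyStyleKit, drawVectorIcon and the .aspectFit resizing value stand in for whatever your own export actually generates):

```swift
import UIKit

// Thin wrapper around the PaintCode-generated drawing code.
// "MyStyleKit" and "drawVectorIcon(frame:resizing:)" are assumptions:
// substitute the class and method names your own export produces.
class VectorIconView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .clear
        contentMode = .redraw           // redraw whenever the view is resized
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        backgroundColor = .clear
        contentMode = .redraw
    }

    override func draw(_ rect: CGRect) {
        // Passing the view's bounds (rather than the original 512x512 canvas)
        // lets the generated code scale the vector to the view's actual size.
        MyStyleKit.drawVectorIcon(frame: bounds, resizing: .aspectFit)
    }
}
```

With something like this in place, a 30x30 VectorIconView draws the whole vector scaled down to fit, instead of showing a 30x30 corner of the 512x512 canvas.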

Related

How to scale up an image while keeping the objects' sizes the same?

Suppose you have a file that includes objects, for example EE components such as transistors and resistors, and you group them into one and then drag a corner to enlarge the figure.
How can I make sure that these components are not zoomed in, and only the wiring changes?
The problem is that I have about 30 images with different sizes and I'm placing them in a table with many images side by side. However, if I keep the same scale, some images look small compared to others. So I tried to scale them to the same size. However, this means the components' sizes are also scaled up, each by a different factor.
Here is an example of a circuit using the built-in shapes in Visio. As you can see, the components' sizes got bigger when I scaled up the object. This is usually desired. However, in my specific case I want to keep the components' sizes the same.
Here is the Visio file, or I think you can also use any available components in Visio.
https://file.io/VRUCR8yVgYxs

Visio shape drop size

I have custom stencils, and the shapes are all sized relative to each other. I want to know if there is a way to change the size at which they drop onto the page. So if I set the multiplier to 1.5, every shape will drop onto the page 1.5x larger than the master shape specifies. I don't want to have to resize later.
You could use the EventDrop (on-drop) cell of the ShapeSheet.
SETF(GetRef(Width), Width*ThePage!Prop.Scale) + SETF(GetRef(Height), Height*ThePage!Prop.Scale)
I wonder, however, if you wouldn't do better by modifying the scale of your page directly?

MATLAB: How do I resize (connected) components in a 3D binary image sequence without changing the dimensions of the sequence?

I'd like to resize the components contained in a 3D binary image sequence without changing any of the dimensions of the sequence itself.
I'm not sure if I need to do it on a component-by-component basis; if so, how do I create a transform such that the resized components are re-positioned 'correctly' in the image sequence? By 'correctly', I mean with the same centre of mass as the original, unprocessed components.
(If that last paragraph doesn't make sense then please ignore)
A 2D example: suppose I wanted to enlarge the white blobs in the following [295x445] image by 10%.
How would you do this without making the image itself larger?
You could use the imdilate function to dilate the regions of interest. The examples in its documentation show how to use this function.

Creating a stroke/outline of a .PNG with CALayer?

I would like to apply a "stroke" or outline to a PNG, just as Photoshop does it. I have a feeling this can be done with CALayer, but after some tinkering it is not immediately obvious. setBorderWidth + setBorderColor is almost what I want, except that it only adds a border around the entire bounds of the image, rather than around the outline of the shape in the PNG itself.
Once the stroke is applied, I'd also like to knock out the fill of the PNG, leaving only an outlined border of the initial shape.
There is no automatic way to do what you're asking. You have to know the path of the shape within your PNG that you want to "knock out". Once you've defined that, you can create a CAShapeLayer, which accepts a CGPathRef containing your points. You can stroke and fill the path layer with whatever colors you choose and then add it to the layer hierarchy of the displaying view, or use it to define a mask for one of the layers in your view.
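For example, something along these lines: a minimal sketch, assuming you already have a CGPath (the shapePath parameter below) that traces the shape inside your PNG; the colors and line width are arbitrary:

```swift
import UIKit

// Sketch only: `shapePath` is assumed to be a pre-built CGPath tracing the
// outline of the shape inside the PNG; building that path is up to you.
func makeOutlineLayer(shapePath: CGPath, lineWidth: CGFloat = 4) -> CAShapeLayer {
    let outline = CAShapeLayer()
    outline.path = shapePath
    outline.strokeColor = UIColor.red.cgColor   // the "stroke" color
    outline.fillColor = UIColor.clear.cgColor   // clear fill = knocked-out interior
    outline.lineWidth = lineWidth
    return outline
}

// Usage: place the outline above the image (or hide the image entirely
// if you only want the outlined shape).
// let outline = makeOutlineLayer(shapePath: shapePath)
// imageView.layer.addSublayer(outline)
```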

iPhone: How to Determine Average Light/Dark of an Area of a UIImage

I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.
I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of the area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values.
Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this?
I've thought of two possible approaches:
Perform some "folding" operations, i.e. combining pixels from one half of the area with the other half, then repeating until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?
Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.
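For illustration, here is a rough sketch of the second idea that lets Core Graphics do the combining: draw the region of interest into a 1x1 bitmap, so the downsampling averages every pixel into one, then weight the result. The helper name and the Rec. 601 luminance weights below are my own choices, not anything established in this question:

```swift
import UIKit

// Rough average color/luminosity of `region` (given in the image's pixel
// coordinates). A sketch only: error handling is minimal.
func averageLuminosity(of image: UIImage, in region: CGRect) -> CGFloat? {
    guard let cropped = image.cgImage?.cropping(to: region) else { return nil }

    // Draw the cropped region into a 1x1 RGBA bitmap; the downsampling
    // effectively averages all of its pixels into a single value.
    var pixel: [UInt8] = [0, 0, 0, 0]
    let ok = pixel.withUnsafeMutableBytes { buffer -> Bool in
        guard let ctx = CGContext(data: buffer.baseAddress, width: 1, height: 1,
                                  bitsPerComponent: 8, bytesPerRow: 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.interpolationQuality = .medium
        ctx.draw(cropped, in: CGRect(x: 0, y: 0, width: 1, height: 1))
        return true
    }
    guard ok else { return nil }

    // Perceived luminosity using the common Rec. 601 weights, scaled to 0...1.
    // pixel[3] holds the averaged alpha if you also want to factor that in.
    let r = CGFloat(pixel[0]) / 255, g = CGFloat(pixel[1]) / 255, b = CGFloat(pixel[2]) / 255
    return 0.299 * r + 0.587 * g + 0.114 * b
}
```

This only gives a rough measure (premultiplied alpha and the interpolation choice both skew it slightly), but it is cheap enough to run while a view loads.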
I think this problem comes up a lot these days with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle this and then share it around.
What about simply outlining your text in a way that shows up on both dark and light backgrounds?
This is how it is handled in other situations where text must be displayed over a background with unknown content (for example, films with subtitles).
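If you go that route with a plain UILabel, here is a minimal sketch (colors and stroke width are arbitrary); the trick is that a negative strokeWidth makes the text stroke and fill at the same time:

```swift
import UIKit

// A label whose text is filled and outlined at once, so it stays readable
// over both light and dark image content. Colors/width are arbitrary choices.
func makeOutlinedLabel(text: String) -> UILabel {
    let label = UILabel()
    label.backgroundColor = .clear
    label.attributedText = NSAttributedString(
        string: text,
        attributes: [
            .font: UIFont.boldSystemFont(ofSize: 17),
            .foregroundColor: UIColor.white,   // fill color
            .strokeColor: UIColor.black,       // outline color
            .strokeWidth: -3.0                 // negative = stroke AND fill
        ]
    )
    return label
}
```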