Get size of repeated pattern from UIColor? (iPhone)

I can query whether a UIColor is a pattern by inspecting the CGColor instance it wraps; the CGColorGetPattern() function returns the pattern if it exists, or NULL if it is not a pattern color.
The CGPatternCreate() function requires a bounds parameter when creating a pattern; this value defines the size of the pattern tile (known as a cell in Quartz parlance).
How would I go about retrieving this pattern size from a UIColor, or from the backing CGPattern, once it has been created?

If your application is intended for internal distribution only, then you can use a private API. If you look at the functions defined in the CoreGraphics framework, you will see that there are quite a few private functions, among them one called CGPatternGetBounds:
otool -tV /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS4.3.sdk/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics | egrep "^_CGPattern"
You just have to look the function up in the framework at runtime and call it through a function pointer.
The header to include:
#include <dlfcn.h>
The function pointer:
typedef CGRect (*CGPatternGetBounds)(CGPatternRef pattern);
The code to retrieve the function:
void *handle = dlopen("/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics", RTLD_NOW);
CGPatternGetBounds getBounds = (CGPatternGetBounds) dlsym(handle, "CGPatternGetBounds");
The code to retrieve the bounds:
UIColor *uicolor = [UIColor groupTableViewBackgroundColor]; // Select a pattern color
CGColorRef color = [uicolor CGColor];
CGPatternRef pattern = CGColorGetPattern(color);
CGRect bounds = getBounds (pattern); // This result is a CGRect(0, 0, 84, 1)

I don't think it's possible to get the bounds from the CGPatternRef, if that's what you're asking.

There does not appear to be any way to directly retrieve any information from the CGPatternRef.
If you must do this, probably the only way (besides poking at the private contents of the CGPattern struct, which probably counts as "using private APIs") is to render the pattern into a sufficiently large image buffer and then detect the repeating subunit. The question "Finding repeating patterns/images in images" may be a good starting point for that.
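As a rough illustration of that brute-force idea, here is a sketch (Swift; the function name, the 256-point sample size, and the brute-force search are my own choices, not an established API). It fills a bitmap with the pattern color through UIKit and then looks for the smallest horizontal and vertical period, assuming the tile fits inside the sampled area and repeats exactly:

import UIKit

// Sketch only: estimate the tile size of a pattern-backed UIColor by rendering
// it and measuring the repeat period. Brute force, so keep sampleSize small.
func estimatedPatternTileSize(of color: UIColor, sampleSize: Int = 256) -> CGSize? {
    guard color.cgColor.pattern != nil else { return nil }   // not a pattern color

    let bytesPerRow = sampleSize * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * sampleSize)

    return pixels.withUnsafeMutableBytes { raw -> CGSize? in
        guard let ctx = CGContext(data: raw.baseAddress,
                                  width: sampleSize, height: sampleSize,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }

        // Let UIKit do the pattern fill (UIColor.setFill handles pattern colors).
        UIGraphicsPushContext(ctx)
        color.setFill()
        UIRectFill(CGRect(x: 0, y: 0, width: sampleSize, height: sampleSize))
        UIGraphicsPopContext()

        let p = raw.bindMemory(to: UInt8.self)

        // Does column x match column `other` in every row and channel?
        func columnsEqual(_ x: Int, _ other: Int) -> Bool {
            (0..<sampleSize).allSatisfy { y in
                (0..<4).allSatisfy { c in
                    p[y * bytesPerRow + x * 4 + c] == p[y * bytesPerRow + other * 4 + c]
                }
            }
        }
        // Does row y match row `other` byte for byte?
        func rowsEqual(_ y: Int, _ other: Int) -> Bool {
            (0..<bytesPerRow).allSatisfy { i in
                p[y * bytesPerRow + i] == p[other * bytesPerRow + i]
            }
        }
        // Smallest shift after which the sampled image matches itself.
        func smallestPeriod(_ length: Int, _ equal: (Int, Int) -> Bool) -> Int {
            for period in 1..<length
                where (period..<length).allSatisfy({ equal($0, $0 - period) }) {
                return period
            }
            return length
        }

        return CGSize(width: smallestPeriod(sampleSize, columnsEqual),
                      height: smallestPeriod(sampleSize, rowsEqual))
    }
}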

Making your own color class that stores the bounds and lets you access them through a property might be a viable solution.
Extracting the pattern bounds from UIColor itself doesn't seem to be possible.
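If you control where the pattern colors are created, a minimal sketch of that idea could look like this (Swift; the PatternColor name and its properties are invented for illustration):

import UIKit

// Keep the tile size next to the pattern color instead of trying to read it
// back out of Core Graphics later.
struct PatternColor {
    let color: UIColor
    let tileSize: CGSize   // the bounds you would otherwise lose

    init(patternImage: UIImage) {
        self.color = UIColor(patternImage: patternImage)
        self.tileSize = patternImage.size
    }
}

// Usage:
// let stripes = PatternColor(patternImage: UIImage(named: "stripes")!)
// view.backgroundColor = stripes.color
// let size = stripes.tileSize   // known without touching CGPattern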

Related

AnyLogic: is it possible to embed code to define a color?

In the Java source code of my class, I saw that the color is set through a single method call that takes the color as a literal argument. So it seems that you can't embed any code of your own, because the color is simply set with shape.setFillColor().
Is there a way around this? Otherwise more complicated coloring would not be possible.
I have attached a screenshot showing how I imagine it; however, I get some errors because semicolons are missing and I have not declared the variables. So my problem is that I don't know how, or whether, I can use if-else queries and embed a switch-case. How can I write various functions for setting a color?
Can someone help me?
Example code for a color change of a rectangle
You could define a function that returns Color type, e.g. chooseColor(<your parameters>) and you could call shape.setFillColor(chooseColor(<your parameters>)). The body of the chooseColor() function can be based on the example code you posted.
Or you could call the chooseColor() function directly in the Fill color property of your shape, if it is not too computation heavy; otherwise it could slow down the simulation.
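A minimal sketch of such a function (Java, meant as the body of an AnyLogic Function element named chooseColor with return type Color; someThing, someCondition and the case values are placeholders for your own parameters):

// Body of an AnyLogic function chooseColor(int someThing, boolean someCondition)
// returning Color. yellowGreen, blue and black are AnyLogic's predefined color constants.
switch (someThing) {
    case 1:
        return yellowGreen;
    case 2:
        return someCondition
            ? new Color(12, 222, 45, 122)   // custom RGBA color
            : blue;
    default:
        return black;
}

// Then wherever you need it (e.g. in an event or on startup):
// shape.setFillColor(chooseColor(someThing, someCondition));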
Not quite sure what you need, tbh. You can make coloring as complex as you like. But once you decide on a color, you must call shape.setFillColor(theColorYouChose) to actually do it...
You can, however, create any possible color using Color myNewColor = new Color(x,y,z,a), where x, y and z are RGB values (0-255) and a is the alpha (opacity) you want (also 0-255).
To use this with switch/if statements, you can create a local Color variable and use it at the end for the shape:
Color myColor = new Color(0, 0, 0, 0);
switch (someThing) {
    case myCase:
        myColor = yellowGreen;
        break;
    case myOtherCase:
        if (someCondition) {
            myColor = new Color(12, 222, 45, 122);
        } else {
            myColor = blue;
        }
        break;
}
shape.setFillColor(myColor);

Is there a more elegant way for multiple attributes of a UIImage?

Is there a way to create something like an array of attributes which I want a UIImage to have? I often have to repeat this pattern (I would like to get rid of the 'userImage.layer' part if possible):
override func awakeFromNib() {
    super.awakeFromNib()
    userImage.contentMode = .scaleAspectFill
    userImage.layer.borderWidth = 1
    userImage.layer.masksToBounds = false
    userImage.layer.borderColor = UIColor.red.cgColor
    userImage.layer.cornerRadius = self.frame.height / 2
    self.clipsToBounds = true
    // Initialization code
}
The short answer is: there is no officially supported way of grouping those attributes in UIKit*.
Many people end up building a thin "styling layer" on top of UIKit. Some approaches I've seen, from most to least popular:
Create some kind of mapping of "styles" to attributes, and manually apply the style to the image view at some point in the view/view controller lifecycle (awakeFromNib is a good spot); a sketch follows at the end of this answer.
Subclass the class you want to style and create some kind of custom initializer that takes in a "style". You'd only end up with 1 subclass. Not every UIKit class can be safely subclassed, so YMMV.
Subclass the classes you want to style and have each subclass map to different styles. You'd end up with many subclasses. Not every UIKit class can be safely subclassed, so YMMV.
* There's UIAppearance, but not every attribute or class supports it. You can usually look at the header files to figure that out.
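As an example of the first approach, a "style" can be as small as an enum with an apply method. This is a minimal sketch with invented names and values, not an existing API:

import UIKit

// One place that knows what an "avatar" image view looks like.
enum ImageViewStyle {
    case avatar(borderColor: UIColor, cornerRadius: CGFloat)

    func apply(to imageView: UIImageView) {
        switch self {
        case let .avatar(borderColor, cornerRadius):
            imageView.contentMode = .scaleAspectFill
            imageView.layer.borderWidth = 1
            imageView.layer.borderColor = borderColor.cgColor
            imageView.layer.cornerRadius = cornerRadius
            imageView.layer.masksToBounds = true
        }
    }
}

// In awakeFromNib:
// ImageViewStyle.avatar(borderColor: .red, cornerRadius: frame.height / 2)
//     .apply(to: userImage)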

Hand-drawn Lines for Highcharts

How would you extend Highcharts to accomplish a "hand-drawn" effect (example: https://www.amcharts.com/demos/column-and-line-mix/?theme=chalk ).
Or can it be done using a library?
Implementing that in Highcharts requires analyzing some of the core code and wrapping/overriding SVGRenderer methods.
Example
Column points are SVG rect shapes:
https://github.com/highcharts/highcharts/blob/master/js/parts/ColumnSeries.js
translate function:
// Register shape type and arguments to be used in drawPoints
point.shapeType = 'rect';
drawPoints function:
point.graphic = graphic =
renderer[point.shapeType](shapeArgs)
.add(point.group || series.group);
SVGRenderer (https://github.com/highcharts/highcharts/blob/master/js/parts/SvgRenderer.js) contains a rect method that can be wrapped/overridden so that it returns a complex path (hand-drawn effect) instead of a simple rect tag.
Docs about wrapping/overwriting: https://www.highcharts.com/docs/extending-highcharts/extending-highcharts
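For illustration, a minimal wrap could look like this (JavaScript, written against the pre-v9 array path syntax; the 4-pixel jitter and the decision to jitter every rect are arbitrary choices, and animation/updating of the shapes is not handled here):

// Turn every rectangle the renderer draws into a slightly jittered path.
(function (H) {
    H.wrap(H.SVGRenderer.prototype, 'rect', function (proceed, x, y, width, height) {
        // rect() may also be called with a single attribute object.
        var a = (typeof x === 'object') ? x : { x: x, y: y, width: width, height: height };
        if (a.x === undefined || a.width === undefined) {
            // Not enough information to fake it: fall back to the original rect.
            return proceed.apply(this, Array.prototype.slice.call(arguments, 1));
        }
        var j = function () { return (Math.random() - 0.5) * 4; };  // hand-drawn wobble
        return this.path([
            'M', a.x + j(), a.y + j(),
            'L', a.x + a.width + j(), a.y + j(),
            'L', a.x + a.width + j(), a.y + a.height + j(),
            'L', a.x + j(), a.y + a.height + j(),
            'Z'
        ]);
    });
}(Highcharts));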

vImageAlphaBlend crashes

I'm trying to alpha blend some layers: [CGImageRef] in the drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!) routine of my custom NSView. Until now I used CGContextDrawImage() for drawing those layers into the drawLayer context. While profiling, I noticed that CGContextDrawImage() takes 70% of the CPU time, so I decided to try the Accelerate framework. I changed the code, but it just crashes and I have no clue what the reason could be.
I'm creating those layers like this:
func addLayer() {
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    var layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height), 8, UInt(canvasSize.width * 4), colorSpace, bitmapInfo)
    var newLayer = CGBitmapContextCreateImage(layerContext)
    layers.append(newLayer)
}
My drawLayers routine looks like this:
override func drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!)
{
    var ctxImageBuffer = vImage_Buffer(data: CGBitmapContextGetData(ctx),
                                       height: CGBitmapContextGetHeight(ctx),
                                       width: CGBitmapContextGetWidth(ctx),
                                       rowBytes: CGBitmapContextGetBytesPerRow(ctx))
    for imageLayer in layers
    {
        //CGContextDrawImage(ctx, CGRect(origin: frameOffset, size: canvasSize), imageLayer)
        var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
        var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
        var buffer: vImage_Buffer = vImage_Buffer(data: &inBitmapData,
                                                  height: CGImageGetHeight(imageLayer),
                                                  width: CGImageGetWidth(imageLayer),
                                                  rowBytes: CGImageGetBytesPerRow(imageLayer))
        vImageAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0)
    }
}
The canvasSize is always the same and all the layers have the same size, so I don't understand why the last line crashes.
Also I don't see how to use the new convenience functions to create vImageBuffers directly from CGLayerRefs. That's why I do it the complicated way.
Any help appreciated.
EDIT
inBitmapData indeed holds pixel data that reflects the background color I set. However, the debugger cannot po &inBitmapData and fails with this message:
error: reference to 'CFData' not used to initialize a inout parameter &inBitmapData
So I looked for a way to get the pointer to inBitmapData. That is what I came up with:
var bitmapPtr: UnsafeMutablePointer<CFDataRef> = UnsafeMutablePointer<CFDataRef>.alloc(1)
bitmapPtr.initialize(inBitmapData)
I also had to change the way I point at my data for both buffers that I need as input for the alpha blend. Now it's not crashing anymore, and the speed boost shows up in the profiler (vImageAlphaBlend only takes about a third of the time of CGContextDrawImage), but unfortunately the result is a transparent image with pixel artifacts instead of the white image background.
So I don't get any runtime errors anymore, but since the result is not as expected I fear that I still don't use the alpha blend function correctly.
vImage_Buffer.data should point to the CFData data (pixel data), not the CFDataRef.
Also, not all images store their data as four channel, 8-bit per channel data. If it turns out to be three channel or RGBA or monochrome, you may get more crashing or funny colors. Also, you have assumed that the raw image data is not premultiplied, which may not be a safe assumption.
You are better off using vImageBuffer_InitWithCGImage so that you can guarantee the format and colorspace of the raw image data. A more specific question about that function might help us resolve your confusion about it.
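A sketch of what that could look like in current Swift (it assumes the iOS 13+/macOS 10.15+ failable vImage_CGImageFormat initializer; the function name is made up, and error handling is minimal on purpose):

import Accelerate

// Ask vImage to convert the CGImage into a known format (8-bit premultiplied ARGB)
// instead of reading the CGDataProvider bytes directly.
func makeARGB8888Buffer(from image: CGImage) -> vImage_Buffer? {
    guard var format = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue))
    else { return nil }

    var buffer = vImage_Buffer()
    let error = vImageBuffer_InitWithCGImage(&buffer, &format, nil, image,
                                             vImage_Flags(kvImageNoFlags))
    guard error == kvImageNoError else { return nil }
    return buffer   // caller must eventually free(buffer.data)
}

// With two buffers in the same premultiplied format you can then blend, e.g.:
// vImagePremultipliedAlphaBlend_ARGB8888(&layerBuffer, &canvasBuffer, &canvasBuffer,
//                                        vImage_Flags(kvImageNoFlags))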
Some CG calls fall back on vImage to do the work. Rewriting your code in this way might be unprofitable in such cases. Usually the right thing to do first is to look carefully at the backtraces in the CG call to try to understand why you are causing so much work for it. Often the answer is colorspace conversion. I would look carefully at the CGBitmapInfo and colorspace of the drawing surface and your images and see if there wasn't something I could do to get those to match up a bit better.
IIRC, CALayerRefs usually have their data in non-cacheable storage for better GPU access. That could cause problems for the CPU. If the data is in a CALayerRef, I would use CA to do the compositing. Also, I thought that CALayers are nearly always BGRA 8-bit premultiplied. If you are not going to use CA to do the compositing, then the right vImage function is probably vImagePremultipliedAlphaBlend_RGBA/BGRA8888.

CoreGraphics prevents frame modifications on other objects?

Just getting started with iOS development, so please forgive my ignorance here. I've also searched for a while without success on this topic, but I'm sure I'm just not searching the right terms.
If I comment out the only line in the first for loop, the next for loop seems to function exactly as I expect. If I leave them both in, then I only see the CG stuff happening and the other objects sit still.
What does the transformation on the object currentGear have to do with the frame being changed on another object within the same view? Why would performing the transformation invalidate the frame change after it?
for (UIImageView *currentGear in self.imageGearCollection)
{
    currentGear.transform = CGAffineTransformRotate(currentGear.transform, (90*M_PI)/180);
}

for (UIButton *currentCrate in self.buttonCrateCollection)
{
    CGRect rectFrame = currentCrate.frame;
    rectFrame.origin.x += 10;
    currentCrate.frame = rectFrame;
}
Your question doesn't really have anything to do with Core Graphics.
The UIView Class Reference says this about frame:
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
So what you're doing is not really allowed.
Since you're just trying to move the view, not change its size, you can do that by modifying its center property instead:
CGPoint center = currentCrate.center;
center.x += 10;
currentCrate.center = center;