ABAddressBook poor performance since iOS 6

I have an app that imports roughly 12,000 numbers into 3 contacts and 1 group. Since iOS 6 this process takes about 5 times longer than it did on iOS 5.1. In the simulator it isn't as slow as on a device.
I did some profiling and found that most of the time is spent in ABAddressBookSave, at 28% of execution time. I also have the feeling that everything involving the address book is slower than before. On iOS 5 it was about 3% of total execution time. Memory and CPU are fine.
Has anyone had similar problems, found out why this happens, or discovered a fix?
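For illustration, an import along these lines might look like the following minimal sketch (hypothetical data and names, not the actual code from the app); the relevant point is that all records are added first and ABAddressBookSave is called only once at the end:

#import <AddressBook/AddressBook.h>

// Minimal sketch of a batched import (hypothetical data, error handling omitted).
void ImportNumbers(NSArray *numbers)
{
    CFErrorRef error = NULL;
    ABAddressBookRef addressBook = ABAddressBookCreate();

    ABRecordRef person = ABPersonCreate();
    ABMutableMultiValueRef phones = ABMultiValueCreateMutable(kABMultiStringPropertyType);
    for (NSString *number in numbers) {
        ABMultiValueAddValueAndLabel(phones, (__bridge CFStringRef)number,
                                     kABPersonPhoneMobileLabel, NULL);
    }
    ABRecordSetValue(person, kABPersonPhoneProperty, phones, &error);
    ABAddressBookAddRecord(addressBook, person, &error);

    // A single save for the whole batch -- this is the call that dominates the profile.
    ABAddressBookSave(addressBook, &error);

    CFRelease(phones);
    CFRelease(person);
    CFRelease(addressBook);
}

Even with batching like this, the save itself is where the time goes, as the trace below shows.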
Here is a stack trace. Why is it so slow on iOS 6?
Running Time Self Symbol Name
3212.0ms 41.6% 0.0 ABAddressBookSave
3212.0ms 41.6% 0.0 ABCSave
3212.0ms 41.6% 0.0 ABCAddressBookSaveWithConflictPolicy
3198.0ms 41.4% 0.0 CPRecordStoreSaveWithPreAndPostCallbacksAndTransactionType
3134.0ms 40.6% 0.0 CFDictionaryApplyFunction
3134.0ms 40.6% 0.0 CFBasicHashApply
3134.0ms 40.6% 0.0 __CFDictionaryApplyFunction_block_invoke_0
3134.0ms 40.6% 0.0 CPRecordStoreUpdateRecord
2971.0ms 38.5% 0.0 _didSave
2971.0ms 38.5% 0.0 ABCDContextUpdateSearchIndexForPersonAndProperties
2773.0ms 35.9% 0.0 CPSqliteStatementPerform
2773.0ms 35.9% 0.0 sqlite3_step
2773.0ms 35.9% 0.0 sqlite3VdbeExec
2772.0ms 35.9% 0.0 fts3UpdateMethod
2765.0ms 35.8% 0.0 fts3PendingTermsAdd
2734.0ms 35.4% 0.0 ABCFFTSTokenizerOpen
2734.0ms 35.4% 0.0 ABTokenListPopulateFromString
2631.0ms 34.1% 1.0 CFStringGetBytes
2630.0ms 34.1% 2624.0 __CFStringEncodeByteStream
6.0ms 0.0% 0.0 CFStringEncodingIsValidEncoding
And here is the same method on iOS 5:
Running Time Self Symbol Name
245.0ms 12.9% 0.0 ABAddressBookSave
245.0ms 12.9% 0.0 ABCSave
245.0ms 12.9% 0.0 ABCAddressBookSaveWithConflictPolicy
234.0ms 12.3% 0.0 CPRecordStoreSaveWithPreAndPostCallbacksAndTransactionType
167.0ms 8.8% 0.0 CFDictionaryApplyFunction
167.0ms 8.8% 0.0 CFBasicHashApply
167.0ms 8.8% 0.0 __CFDictionaryApplyFunction_block_invoke_0
167.0ms 8.8% 0.0 CPRecordStoreUpdateRecord
162.0ms 8.5% 0.0 CFDictionaryApplyFunction
162.0ms 8.5% 0.0 CFBasicHashApply
162.0ms 8.5% 0.0 __CFDictionaryApplyFunction_block_invoke_0
162.0ms 8.5% 0.0 CPRecordStoreSaveProperty
158.0ms 8.3% 0.0 ABCMultiValueSave
158.0ms 8.3% 1.0 ABCDBContextSaveMultiValue
143.0ms 7.5% 0.0 CPSqliteConnectionAddRecord
143.0ms 7.5% 1.0 CPSqliteConnectionAddRecordWithRowid
85.0ms 4.5% 0.0 CPSqliteStatementPerform
16.0ms 0.8% 2.0 CFRelease


In Android text size is in dp but in Flutter it is in pixels - how to match them?

In Android we define text size in dp, but in Flutter text size is in pixels. How can I match the two in Flutter?
Any help is appreciated!
From the Android Developer Documentation:
px
> Pixels - corresponds to actual pixels on the screen.
in
> Inches - based on the physical size of the screen. 1 inch = 2.54 centimeters.
mm
> Millimeters - based on the physical size of the screen.
pt
> Points - 1/72 of an inch based on the physical size of the screen.
dp or dip
> Density-independent Pixels - an abstract unit that is based on the physical density of the screen. These units are relative to a 160 dpi screen, so one dp is one pixel on a 160 dpi screen. The ratio of dp-to-pixel will change with the screen density, but not necessarily in direct proportion. Note: The compiler accepts both "dip" and "dp", though "dp" is more consistent with "sp".
sp
> Scaleable Pixels OR scale-independent pixels - this is like the dp unit, but it is also scaled by the user's font size preference. It is recommended you use this unit when specifying font sizes, so they will be adjusted for both the screen density and the user's preference. Note: the Android documentation is inconsistent on what sp actually stands for; one doc says "scale-independent pixels", the other says "scaleable pixels".
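The dp-to-px relationship described above is just a linear scale against the 160 dpi baseline. As a small sketch (hypothetical helper name, plain C math):

// Convert dp to physical pixels for a screen of the given density (dpi).
// 160 dpi is the mdpi baseline, so 1 dp == 1 px on an mdpi screen.
float DpToPx(float dp, float densityDpi)
{
    return dp * (densityDpi / 160.0f);
}

// Example: 16 dp on an xxhdpi (480 dpi) screen -> 16 * (480/160) = 48 px.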
From Understanding Density Independence In Android:
| Density Bucket | Screen Density | Physical Size | Pixel Size |
|---|---|---|---|
| ldpi | 120 dpi | 0.5 x 0.5 in | 0.5 in * 120 dpi = 60x60 px |
| mdpi | 160 dpi | 0.5 x 0.5 in | 0.5 in * 160 dpi = 80x80 px |
| hdpi | 240 dpi | 0.5 x 0.5 in | 0.5 in * 240 dpi = 120x120 px |
| xhdpi | 320 dpi | 0.5 x 0.5 in | 0.5 in * 320 dpi = 160x160 px |
| xxhdpi | 480 dpi | 0.5 x 0.5 in | 0.5 in * 480 dpi = 240x240 px |
| xxxhdpi | 640 dpi | 0.5 x 0.5 in | 0.5 in * 640 dpi = 320x320 px |

| Unit | Description | Units Per Physical Inch | Density Independent? | Same Physical Size On Every Screen? |
|---|---|---|---|---|
| px | Pixels | Varies | No | No |
| in | Inches | 1 | Yes | Yes |
| mm | Millimeters | 25.4 | Yes | Yes |
| pt | Points | 72 | Yes | Yes |
| dp | Density Independent Pixels | ~160 | Yes | No |
| sp | Scale Independent Pixels | ~160 | Yes | No |
More info can also be found in the Google Design Documentation.

position of a point in a circle's arc in MATLAB

I have a circle and a point on it in MATLAB:
center = [Xc1 Yc1];
circle = [center 150];
point = [54.8355 116.6433];
I want to partition this circle into 8 arcs and find out which arc this point is in. How can I do this in MATLAB?
(I used this code to draw the circle:
http://www.mathworks.com/matlabcentral/fileexchange/7844-geom2d/content/geom2d/geom2d/intersectLineCircle.m)
Dividing a circle into 8 arcs can be stated another way: cutting a pie into 8 pieces. These pie pieces each have an angle of 360/8 = 45 degrees. You can then think of the circle being broken up into these angle ranges (in degrees):
[0,45)
[45,90)
[90,135)
[135,180)
[180,225)
[225,270)
[270,315)
[315,360)
You'll then have to calculate the angle between the x-axis and the line connecting your point to the center of the circle. That angle tells you which 'angle bin' the point belongs to, as the sketch below shows.
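As a sketch of that last step (shown here in C-style math, but MATLAB's atan2 works the same way): compute the angle with atan2, normalize it to [0, 360), and integer-divide by 45 to get the bin:

#include <math.h>

// Returns which of the 8 arcs (0..7) the point (px, py) lies in, measured
// around the circle's center (cx, cy). Bin 0 covers [0,45) degrees, bin 1
// covers [45,90), and so on counter-clockwise from the positive x-axis.
int ArcIndex(double px, double py, double cx, double cy)
{
    double angle = atan2(py - cy, px - cx) * 180.0 / M_PI; // (-180, 180]
    if (angle < 0.0) angle += 360.0;                       // -> [0, 360)
    return (int)(angle / 45.0);                            // 8 bins of 45 degrees
}

In MATLAB the equivalent one-liner would use atan2d and mod, e.g. floor(mod(atan2d(y - Yc1, x - Xc1), 360) / 45) + 1 for a 1-based bin number.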

point projection onto an x-rotated plane

I want to simulate depth in a 2D space. If I have a point P1, I suppose I need to project it onto a plane rotated "theta" radians clockwise around the x axis, to get P1'.
It seems that P1'.x has to be the same as P1.x, and P1'.y has to be shorter than P1.y. In a 3D world:
cosa = cos(theta)
sina = sin(theta)
P1'.x = P1.x
P1'.y = P1.y * cosa - P1.z * sina
P1'.z = P1.y * sina + P1.z * cosa
Is my P1.z = 0? I tried it, and P1'.y = P1.y * cosa doesn't give the expected result.
Any response would be appreciated. Thanks!
EDIT: What I want: for now I rotate the camera and apply a translation matrix.
EDIT 2: An example of a single line with a start1 point and an end1 point (it's a horizontal line; the expected result is a line falling toward the "floor" as the tilt angle increases).
I think it's a sign error, or an offset is needed (in Java canvas drawing, (0,0) is at the top-left), because my new line with a tilt of 0 is below all the others, and with a value of 90° the new line and the original one match.
The calculation you are performing is correct if you want a clockwise rotation around the x axis. If you think of your line as a sheet of paper, a rotation of 0 degrees is you looking directly at the line.
For the example you have given, the line is horizontal to the x axis. This will not change on rotation around the x axis (the line and the axis around which it is rotating are parallel to one another). As you rotate between 0 and 90 degrees, the y coordinates of the line will decrease as P1.y * cos(theta), down to 0 at 90 degrees. Think of the piece of paper rotating around its bottom edge (the x axis): at 90 degrees the paper lies flat, the y axis is perpendicular to the page, and both edges of the page have the same y coordinate, so both the side on the x axis and the opposite parallel side have y = 0.
Thus as you can see for your example this has worked correctly.
EDIT: The reason that rotating by 90 degrees does not give an exactly zero answer is simply floating-point rounding.
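To make the sign question concrete, here is a minimal sketch of the rotation applied to a point, plus the flip that a top-left-origin canvas needs (assumption: mathematical y grows upward, screen y grows downward):

#include <math.h>

typedef struct { double x, y, z; } Point3;

// Rotate p around the x axis by theta (radians). With p.z == 0 this
// reduces to y' = y * cos(theta), exactly as described above.
Point3 RotateAroundX(Point3 p, double theta)
{
    Point3 r;
    r.x = p.x;
    r.y = p.y * cos(theta) - p.z * sin(theta);
    r.z = p.y * sin(theta) + p.z * cos(theta);
    return r;
}

// When plotting on a canvas whose (0,0) is the top-left corner, flip y
// before drawing: screenY = canvasHeight - r.y;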

what is the resolution of a photo taken by the iPhone 4 camera?

In the specs:
iPhone 4 screen resolution & pixel density
> iPhone 4 has a screen resolution of 960×640 pixels, which is twice that of the prior iPhone models.
As we know, when we write code like this:
CGImageRef screenImage = UIGetScreenImage();
CGRect fullRect = [[UIScreen mainScreen] applicationFrame];
CGImageRef saveCGImage = CGImageCreateWithImageInRect(screenImage, fullRect);
the saveCGImage will have a size of (320, 480). My question is: what about the iPhone 4? Is it (640, 960)?
Another question is about black images in the thumbnail view when you open the Photos app, when coding like this:
CGImageRef screenImage = UIGetScreenImage();
CGImageRef saveCGImage = CGImageCreateWithImageInRect(screenImage, CGRectMake(0, 0, 320, 460)); // please note, I used 460 instead of 480
The problem is that when I open the Photos app, those images appear black in the thumbnail view; when tapping one to see the details, it is okay.
Is there any solution for this issue?
Thanks for your time.
Updated question:
When you invoke UIGetScreenImage() to capture the screen on an iPhone 4, is the result also 320x480?
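For reference, UIKit geometry is measured in points, and on a Retina device each point is 2x2 pixels. A small sketch using [UIScreen mainScreen].scale to compute the screen size in pixels (this does not by itself establish what the private UIGetScreenImage() returns):

#import <UIKit/UIKit.h>

// Sketch: screen size in pixels = point size * scale factor.
// On an iPhone 4, bounds are 320x480 points and scale is 2.0 -> 640x960 px.
CGSize ScreenSizeInPixels(void)
{
    CGRect bounds = [[UIScreen mainScreen] bounds]; // points
    CGFloat scale = [[UIScreen mainScreen] scale];  // 2.0 on Retina displays
    return CGSizeMake(bounds.size.width * scale, bounds.size.height * scale);
}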
From Falk Lumo:
iPhone 4 main camera:
5.0 Mpixels (2592 x 1936)
1/3.2" back-illuminated CMOS sensor
4:3 aspect ratio
35 mm film camera crop factor: 7.64
Low ISO 80 (or better)
3.85 mm lens focal length
f/2.8 lens aperture
Autofocus: tap to focus
Equivalent 35mm film camera and lens:
30 mm f/22
Photo resolutions of iOS devices:
iPhone 6/6+, iPhone 5/5S, iPhone 4S (8 MP) - 3264 x 2448 pixels
iPhone 4, iPad 3, iPod Touch (5 MP) - 2592 x 1936 pixels
iPhone 3GS (3.2 MP) - 2048 x 1536 pixels
iPhone 2G/3G (2 MP) - 1600 x 1200 pixels

programmatically trimming (automatically cropping out transparent boundaries) an image in Objective-C / Cocoa

Does anyone know how to trim an image (UIImage or CGImage)? By trim I mean programmatically cropping to the non-transparent bounds of an image. So if I have the image below:
00111000
00010000
01011110
00000000
it would yield:
011100
001000
101111
Sum all rows and all columns of your image. You'll get two arrays, which in your example look like this:
3 1 5 0
0 1 1 3 2 1 1 0
Then the first non-zero entry from the left and the last non-zero entry from the right mark where you have to crop, in each direction. A sketch of this approach follows.
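A hedged sketch of that approach in Objective-C (assumes an RGBA bitmap; it redraws the image into a known byte layout, finds the first and last rows and columns with non-zero alpha -- equivalent to the row/column sums above -- and crops with CGImageCreateWithImageInRect; TrimmedImage is a hypothetical helper name):

#import <UIKit/UIKit.h>

// Sketch: crop a UIImage to its non-transparent bounds.
UIImage *TrimmedImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Redraw into a known RGBA layout so the alpha byte offset is predictable.
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
        width * 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // Find the first/last rows and columns containing any non-zero alpha.
    size_t minX = width, maxX = 0, minY = height, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (pixels[(y * width + x) * 4 + 3] != 0) { // alpha byte
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }

    UIImage *result = image; // fully transparent image -> return unchanged
    if (minX <= maxX && minY <= maxY) {
        CGRect box = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
        CGImageRef cropped = CGImageCreateWithImageInRect(cgImage, box);
        result = [UIImage imageWithCGImage:cropped];
        CGImageRelease(cropped);
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return result;
}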