Accessing the full-size (851x315) timeline cover photo via the Graph API with offset crop - Facebook

This is a follow-up question to https://stackoverflow.com/a/21074207/3269910
The solution there seems fine: c0.0.851.315
but the c0.0 values point to the 0x0 position of the image.
What I want is the image with offset_y applied. If offset_y is 20,
i.e., I want the same size and part of the image that I see on my Page timeline, via an API call.
Is it possible to do this?
Note: if I change the zero value in c0.0 to something else, it points to pixels, but offset_y is a percentage, I think.
Thanks for your help.

I had a discussion with the Facebook Platform Team;
they are not ready to share the manual calculation with us.
But yes, it is possible to calculate the c0.XXXX value manually using the offset_y value from the API.
Here is my PHP code:
$fb_page_cover_json = json_decode(file_get_contents('https://graph.facebook.com/11111111?fields=cover'));
$fb_cover_image_url = $fb_page_cover_json->cover->source;
$image_info = getimagesize($fb_cover_image_url);
$image_width = $image_info[0];
$image_height = $image_info[1];
echo getTop($fb_page_cover_json->cover->offset_y, $image_width, $image_height);

function getTop($offset_y, $image_width, $image_height) {
    $cover_w = 851; // standard Facebook cover dimensions
    $cover_h = 315;
    // Height of the image scaled to the cover width, minus the visible cover height:
    // the number of pixels hidden outside the crop window.
    $real_img_h = ($cover_w * $image_height / $image_width) - $cover_h;
    $result = ($real_img_h * $offset_y / 100 * -1);
    return floor($result);
}
The getTop method returns the c0.XXX value.
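For readers not using PHP, here is a direct Python port of the same calculation (the function name is my own; it assumes the standard 851x315 cover size):

```python
import math

def get_top(offset_y, image_width, image_height):
    """Convert the Graph API's percentage offset_y into the pixel value for c0.XXX."""
    cover_w, cover_h = 851, 315  # standard Facebook cover dimensions
    # Height of the image once scaled to the cover width, minus the visible
    # cover height: the number of pixels hidden outside the crop window.
    hidden_h = (cover_w * image_height / image_width) - cover_h
    # offset_y is a percentage; the sign is flipped exactly as in the PHP version.
    return math.floor(hidden_h * offset_y / 100 * -1)
```

For example, an 851x500 source image with offset_y = 20 gives -37, matching the PHP floor() result.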
Hope this helps someone.

First time using cairo in AwesomeWM

This is for anyone who's having trouble getting started with cairo.
The documentation doesn't give a good, complete example; that's why I wanted to share this.
I created a concrete example which you can put in your rc.lua and play with.
local wibox = require("wibox")
local cairo = require("lgi").cairo

local surface = cairo.ImageSurface(cairo.Format.RGB24, 20, 20)
local cr = cairo.Context(surface)

my_wbox = wibox()
my_wbox.visible = true
my_wbox:set_bg("#ff0000")

cairo_widget = wibox.widget.base.make_widget()
cairo_widget.fit = function(context, width, height)
    return 100, 100
end
cairo_widget.draw = function(self, my_wbox, cr, width, height)
    cr:translate(100, 100)
    cr:set_source_rgb(0, 0, 0)
    cr:rectangle(0, 0, 100, 100)
    cr:fill()
end

my_wbox:set_widget(cairo_widget)
my_wbox:geometry({x = 50, y = 50, width = 500, height = 500})
There is https://awesomewm.org/apidoc/documentation/04-new-widgets.md.html, but that page does not have a "here is everything in one copyable chunk" example at the end.

I have trouble getting depth information from the DEPTH16 format with the Camera2 API using ToF on a P30 Pro

I am currently testing options for depth measurement with a smartphone and wanted to create a depth image for initial testing. I am using the Camera2Basic sample as a basis (https://github.com/android/camera-samples/tree/main/Camera2Basic). Using DEPTH16 I get a relatively sharp "depth image" back, but the millimetre values are not correct. They range from roughly 3600 mm to 5000 mm for an object like a wall that is only about 500-800 mm away from the camera.
What puzzles me most is that the image does not contain any information in the dark. If Android were really targeting the ToF sensor for DEPTH16, it shouldn't be affected by darkness, should it? Or do I have to use ARCore or Huawei's HMS Core to get a real ToF image?
I am using a Huawei P30 Pro, and the code for extracting the depth information looks like this. (Yes, performance-wise it is terrible, but it is only for testing purposes.)
private Map<String, PixelData> parseDepth16IntoDistanceMap(Image image) {
    Map<String, PixelData> map = new HashMap<>();
    Image.Plane plane = image.getPlanes()[0];
    // Using asShortBuffer() like in the documentation leads to a wrong format
    // (for me) but does not help getting better values.
    ByteBuffer depthBuffer = plane.getBuffer().order(ByteOrder.nativeOrder());
    int stride = plane.getRowStride();
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            // Each sample is 2 bytes: index rows by the byte row stride and
            // columns by 2 bytes per pixel.
            short depthSample = depthBuffer.getShort(y * stride + x * 2);
            short depthRange = (short) (depthSample & 0x1FFF);           // low 13 bits: millimetres
            short depthConfidence = (short) ((depthSample >> 13) & 0x7); // top 3 bits: confidence
            float depthPercentage = depthConfidence == 0 ? 1.f : (depthConfidence - 1) / 7.f;
            if (depthRange > maxz) {
                maxz = depthRange;
            }
            sum = sum + depthRange;
            numPoints++;
            listOfRanges.add((float) depthRange);
            if (depthRange < minz && depthRange > 0) {
                minz = depthRange;
            }
            map.put(x + "_" + y, new PixelData(x, y, depthRange, depthPercentage));
        }
    }
    return map;
}
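Independent of the Java buffer-indexing details, the DEPTH16 bit layout itself can be checked in isolation. A minimal Python sketch of the unpacking, using the same masks as above (low 13 bits are the range in millimetres, top 3 bits the confidence, where 0 means full confidence per the Android docs):

```python
def decode_depth16(sample):
    """Unpack one 16-bit DEPTH16 sample into (millimetres, confidence in [0, 1])."""
    depth_mm = sample & 0x1FFF          # low 13 bits: range in millimetres
    confidence = (sample >> 13) & 0x7   # top 3 bits: confidence code
    # Per the DEPTH16 format docs, code 0 means "100% confidence";
    # codes 1..7 map linearly onto 0..6/7.
    percentage = 1.0 if confidence == 0 else (confidence - 1) / 7.0
    return depth_mm, percentage
```

Feeding a few hand-built samples through this is a quick way to verify the masks before debugging the buffer indexing.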
In any case, it would help a lot to know whether you can get the data this way at all, so I know if I'm doing something fundamentally wrong. Otherwise I will switch to one of the AR systems. Either way, many thanks for your efforts.
If you want to extract a depth map where you can see the distance to an object, you might use the ARCore Depth API:
https://developers.google.com/ar/develop/java/depth/overview
Or you can follow the codelab, which shows you how to get the data in millimetres:
https://codelabs.developers.google.com/codelabs/arcore-depth#0

AS3 AIR for iOS: get device screen resolution

I am trying to set my MovieClip to the exact screen size on an iPhone using AIR in AS3.
I have tried:
import flash.display.Screen;
import flash.geom.Rectangle;
var mainScreen:Screen = Screen.mainScreen;
var screenBounds:Rectangle = mainScreen.bounds;
/// Does not work - comes out 320 by 480
w.text = ""+screenBounds.width+"";
h.text = ""+screenBounds.height+"";
BG.height = screenBounds.height;
BG.width = screenBounds.width;
/// Does not work - comes out 320 by 480
w.text = ""+Capabilities.screenResolutionX+"";
h.text = ""+Capabilities.screenResolutionY+"";
BG.height = Capabilities.screenResolutionY;
BG.width = Capabilities.screenResolutionX;
/// Does not work - comes out 320 by 480
w.text = ""+stage.fullScreenwidth+"";
h.text = ""+stage.fullScreenHeight+"";
BG.height = stage.fullScreenHeight;
BG.width = stage.fullScreenWidth;
/// Works BUT... it does NOT stretch to fit the stage.
/// I am starting with android screen size and it shrinks it proportionally
/// which means there is space on top or bottom or left side.
w.text = ""+stage.stageWidth+"";
h.text = ""+stage.stageHeight+"";
BG.height = stage.stageHeight;
BG.width= stage.stageWidth;
You can try playing around with stage.scaleMode:
stage.scaleMode = StageScaleMode.SHOW_ALL;
or
stage.scaleMode = StageScaleMode.EXACT_FIT;
If it is coming out 320x480 and you are saying that is wrong, can I assume you are using an iPhone 4S? If so, go into the app descriptor XML file. Toward the bottom (generally one of the last items in the file), there should be this line:
<requestedDisplayResolution>standard</requestedDisplayResolution>
If it is commented out or set to "standard", this would be your issue. Change it to say "high" and that will enable Retina support. "standard" scales the non-retina display (320x480 on iPhone, 1024x768 on iPad) up. "high" will give you the correct display resolution.
Here is a detailed explanation:
http://www.adobe.com/devnet/air/articles/multiple-screen-sizes.html
If someone else is having issues getting the screen resolution on iOS: what is happening is that there is not enough time between the SWF finishing loading and the moment you query the screen size.
For my app I need the SWF with no scaling, because I set all my clips' sizes and positions relative to the screen size, in percent.
If my stage is 800x1280 and I have a clip with height = 100, width = 100,
then that is the same as saying clip.height = int(stage.stageWidth * 0.125),
since 100 * 100 / 800 = 12.5; 100 is 12.5% of 800.
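That proportional-sizing rule can be sketched language-independently; a small Python illustration (the names are hypothetical, using the 800-wide design stage and 100-wide clip from the numbers above):

```python
# Hypothetical design resolution and clip size, matching the numbers in the answer.
design_stage_w = 800
clip_w = 100

# The clip occupies clip_w / design_stage_w of the design width (here 12.5%).
fraction = clip_w / design_stage_w  # 0.125

def scaled_width(actual_stage_w):
    """Width the clip should get on a device whose stage is actual_stage_w wide."""
    return int(actual_stage_w * fraction)
```

On the original 800-wide stage this returns 100; on a 1280-wide stage it returns 160, keeping the same 12.5% share.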
The trick is, in your first frame (if you are using Flash),
just set the stage scale and wait for the SWF to fully load, and nothing else:
stage.align = StageAlign.TOP_LEFT;
stage.scaleMode = StageScaleMode.NO_SCALE;

var loadTimer:Timer = new Timer(10);
loadTimer.addEventListener(TimerEvent.TIMER, loading);
loadTimer.start();

function loading(evt:TimerEvent):void {
    if (root.loaderInfo.bytesLoaded == root.loaderInfo.bytesTotal) {
        loadTimer.stop();
        loadTimer.removeEventListener(TimerEvent.TIMER, loading);
        gotoAndPlay(2);
    }
}
Then in frame 2 you can read stage.stageWidth and it will give you the real size, as you have given the app enough time to get that value:
var stageW:int = stage.stageWidth;
var stageH:int = stage.stageHeight;
I hope this helps, thanks.

PID controller in C# Micro Framework issues

I have built a tricopter from scratch based on a .NET Micro Framework board from TinyCLR.com. I used the FEZ Mini which runs at 72 MHz. Read more about my project at: http://bit.ly/TriRot.
So after a pre-flight check where I initialise and test each component (calibrating the IMU, spinning each motor, checking that I get receiver data, etc.), it enters a permanent loop which calls the flight controller method on each iteration.
I'm now trying to tune my PID controller using the Ziegler-Nichols method, but I am always getting a progressively larger overshoot. I was eventually able to get a [mostly] stable oscillation using proportional control only (setting Ki and Kd to 0); timing the oscillation period Tu with a stopwatch averaged out to 3.198 seconds.
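For reference, the classic Ziegler-Nichols table turns the ultimate gain Ku (the P-only gain at which the sustained oscillation appears) and the measured oscillation period Tu into PID gains. A sketch in Python, assuming the parallel PID form (Kp, Ki, Kd applied to error, integral, and derivative respectively):

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols PID tuning from ultimate gain ku and period tu (seconds)."""
    kp = 0.6 * ku
    ti = tu / 2.0  # integral time constant
    td = tu / 8.0  # derivative time constant
    ki = kp / ti   # convert time constants to parallel-form gains
    kd = kp * td
    return kp, ki, kd
```

With the 3.198 s period measured above, the remaining unknown is the proportional gain at which that oscillation occurred; plug that in as ku.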
I came across the answer (by Rex Logan) on a similar question by chris12892.
I was initially using the "Duration" variable in milliseconds, which made my copter highly aggressive, obviously because I was multiplying the running integrator error by thousands on each loop. I then divided it by another thousand to bring it to seconds, but I'm still battling...
What I don't understand from Rex's answer is:
Why does he ignore the time variable in the integral and differential parts of the equations? Is that right, or is it a typo?
What does he mean by the remark:
In a normal sampled system the delta term would be one...
One what? Should this be one second under normal circumstances? What if this value fluctuates?
My flight controller method is below:
private static Single[] FlightController(Single[] imuData, Single[] ReceiverData)
{
    Int64 TicksPerMillisecond = TimeSpan.TicksPerMillisecond;
    Int64 CurrentTicks = DateTime.Now.Ticks;
    Int64 TickCount = CurrentTicks - PreviousTicks;
    PreviousTicks = CurrentTicks;
    // Cast before dividing so sub-millisecond precision is not truncated away.
    Single Duration = (TickCount / (Single)TicksPerMillisecond) / 1000F; // seconds

    const Single Kp = 0.117F;       // Proportional gain (instantaneous offset)
    const Single Ki = 0.073170732F; // Integral gain (permanent offset)
    const Single Kd = 0.001070122F; // Differential gain (change in offset)

    Single RollE = 0;
    Single RollPout = 0;
    Single RollIout = 0;
    Single RollDout = 0;
    Single RollOut = 0;
    Single PitchE = 0;
    Single PitchPout = 0;
    Single PitchIout = 0;
    Single PitchDout = 0;
    Single PitchOut = 0;

    Single rxThrottle = ReceiverData[(int)Channel.Throttle];
    Single rxRoll = ReceiverData[(int)Channel.Roll];
    Single rxPitch = ReceiverData[(int)Channel.Pitch];
    Single rxYaw = ReceiverData[(int)Channel.Yaw];
    Single[] TargetMotorSpeed = new Single[] { rxThrottle, rxThrottle, rxThrottle };
    Single ServoAngle = 0;

    if (!FirstRun)
    {
        Single imuRoll = imuData[1] + 7;
        Single imuPitch = imuData[0];

        //Roll ----- Start
        RollE = rxRoll - imuRoll;
        //Proportional
        RollPout = Kp * RollE;
        //Integral
        Single InstanceRollIntegrator = RollE * Duration;
        RollIntegrator += InstanceRollIntegrator;
        RollIout = RollIntegrator * Ki;
        //Differential
        RollDout = ((RollE - PreviousRollE) / Duration) * Kd;
        //Sum
        RollOut = RollPout + RollIout + RollDout;
        //Roll ----- End

        //Pitch ---- Start
        PitchE = rxPitch - imuPitch;
        //Proportional
        PitchPout = Kp * PitchE;
        //Integral
        Single InstancePitchIntegrator = PitchE * Duration;
        PitchIntegrator += InstancePitchIntegrator;
        PitchIout = PitchIntegrator * Ki;
        //Differential
        PitchDout = ((PitchE - PreviousPitchE) / Duration) * Kd;
        //Sum
        PitchOut = PitchPout + PitchIout + PitchDout;
        //Pitch ---- End

        TargetMotorSpeed[(int)Motors.Motor.Left] += RollOut;
        TargetMotorSpeed[(int)Motors.Motor.Right] -= RollOut;
        TargetMotorSpeed[(int)Motors.Motor.Left] += PitchOut;  // / 2;
        TargetMotorSpeed[(int)Motors.Motor.Right] += PitchOut; // / 2;
        TargetMotorSpeed[(int)Motors.Motor.Rear] -= PitchOut;

        ServoAngle = rxYaw + 15;

        // Store the previous *errors*, not the raw IMU values (see the edit below).
        PreviousRollE = RollE;
        PreviousPitchE = PitchE;
    }
    FirstRun = false;

    return new Single[] {
        (Single)TargetMotorSpeed[(int)TriRot.LeftMotor],
        (Single)TargetMotorSpeed[(int)TriRot.RightMotor],
        (Single)TargetMotorSpeed[(int)TriRot.RearMotor],
        (Single)ServoAngle
    };
}
Edit: I found that I had two bugs in my code above (fixed now). I was integrating and differentiating with the last IMU values instead of the last error values. That got rid of the runaway situation completely. The only problem now is that it seems a bit slow: when I perturb the system, it responds very quickly and stops the perturbation from continuing, but it takes a long time to get back to the setpoint (0), about 10 seconds or more. Is this now just down to tuning the PID? I'll give the suggestions below a go and let you know if any of them make a difference.
One question I have is:
being a .NET board, I don't want to bank on any kind of accurate timing, so instead of trying to work out what frequency I am executing that method at, surely if I measure the actual elapsed time and factor it into the equations, it should be better, or am I misunderstanding something?
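Measuring the actual elapsed time is indeed the usual approach on a platform without guaranteed loop timing. A minimal dt-aware PID update can be sketched in Python (illustrative only; it mirrors the structure of the C# method above, including storing the previous error rather than the raw sensor value):

```python
class PID:
    """Minimal parallel-form PID that uses the measured time step dt (seconds)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # On the very first call there is no previous error, so skip the derivative term.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error  # store the error, not the raw sensor value
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Because dt multiplies the integral and divides the derivative, a loop that occasionally runs slow or fast still accumulates and differentiates consistently, which is exactly the point of measuring real elapsed time.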

Preserving colors during CMYK to RGB transformation in PIL

I'm using PIL to process uploaded images. Unfortunately, I'm having trouble with color conversion from CMYK to RGB, as the resulting image's tone and contrast change.
I suspect it's only doing direct number transformations. Does PIL, or anything built on top of it, have an Adobe-style dummy-proof "consume embedded profile, convert to destination, preserve numbers" tool I can use for conversion?
In all my healthy ignorance and inexperience, this sort of jumped at me and it's got me in a pinch. I'd really like to get this done without engaging any intricacies of color spaces, transformations and the necessary math for both at this point.
Though I've never used it before, I'm also open to using ImageMagick for this processing step if anyone has experience that it can perform it gracefully.
It didn't take me long to run into other people mentioning Little CMS, the most popular open-source solution for color management. I ended up snooping around for Python bindings, found the old pyCMS, and some ostensible notions about PIL supporting Little CMS.
Indeed, there is support for Little CMS; it's mentioned in a whole whopping one-liner:
CMS support: littleCMS (1.1.5 or later is recommended).
The documentation contains no references, no topical guides, Google didn't crawl out anything, their mailing list is closed... but digging through the source there's a PIL.ImageCms module that's well documented and gets the job done. Hope this saves someone from a messy internet excavation.
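For anyone landing here with a modern Pillow, the PIL.ImageCms module can do the profile-aware conversion directly. A sketch (the fallback-profile handling and function name are my own; profileToProfile, createProfile and ImageCmsProfile are the actual ImageCms APIs):

```python
from io import BytesIO

from PIL import Image, ImageCms

def convert_to_rgb(im, fallback_profile=None):
    """Convert an image to sRGB through ICC profiles, honouring an embedded profile.

    fallback_profile: profile to assume when none is embedded (a path or a
    CmsProfile). For CMYK scans this would typically be a CMYK press profile
    such as Coated FOGRA39 -- a file you must supply yourself.
    """
    icc_bytes = im.info.get("icc_profile")
    if icc_bytes:
        # Use the profile embedded in the file itself.
        src = ImageCms.ImageCmsProfile(BytesIO(icc_bytes))
    elif fallback_profile is not None:
        src = fallback_profile
    else:
        # Last resort: assume the numbers are already sRGB.
        src = ImageCms.createProfile("sRGB")
    dst = ImageCms.createProfile("sRGB")
    return ImageCms.profileToProfile(im, src, dst, outputMode="RGB")
```

This is the "consume embedded profile, convert to destination" behaviour asked about in the question, without leaving PIL.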
Goes off getting himself a cookie...
It's 2019 and things have changed. Your problem is significantly more complex than it may appear at first sight. CMYK to RGB and RGB to CMYK is not a simple there-and-back. If you open an image in Photoshop and convert it there, the conversion has two additional parameters: source color profile and destination color profile. These change things greatly! For a typical use case, you would assume Adobe RGB 1998 on the RGB side and, say, Coated FOGRA39 on the CMYK side. These two additional pieces of information tell the converter how to deal with the colors on input and output. What you need next is a transformation mechanism; Little CMS is indeed a great tool for this. It is MIT licensed, and (after looking for solutions myself for a considerable time) I would recommend the following setup if you indeed need a Python way to transform colors:
Python 3.X (necessary because of littlecms)
pip install littlecms
pip install Pillow
In littlecms' /tests folder you will find a great set of examples. I allowed myself a particular adaptation of one test. Before you get to the code, a word about color profiles. On Windows, as in my case, you will find a set of files with an .icc extension in C:\Windows\System32\spool\drivers\color, where Windows stores its color profiles. You can download other profiles from sites like https://www.adobe.com/support/downloads/iccprofiles/iccprofiles_win.html and install them on Windows simply by double-clicking the corresponding .icc file. The example I provide depends on such profile files, which Little CMS uses to do those magic color transforms. I work as a semi-professional graphic designer and needed to convert colors from CMYK to RGB and vice versa for scripts that manipulate objects in InDesign. My setup is RGB: Adobe RGB 1998 and CMYK: Coated FOGRA39 (settings recommended by most book printers I get my books printed at). These color profiles produced results very similar to the same transforms in Photoshop and InDesign. Still, be warned: the colors are slightly (by around 1%) off compared to what PS and Id give you for the same inputs. I am still trying to figure out why...
The little program:
import littlecms as lc
from PIL import Image

def rgb2cmykColor(rgb, psrc='C:\\Windows\\System32\\spool\\drivers\\color\\AdobeRGB1998.icc',
                  pdst='C:\\Windows\\System32\\spool\\drivers\\color\\CoatedFOGRA39.icc'):
    ctxt = lc.cmsCreateContext(None, None)
    white = lc.cmsD50_xyY()  # Set white point for D50
    dst_profile = lc.cmsOpenProfileFromFile(pdst, 'r')
    src_profile = lc.cmsOpenProfileFromFile(psrc, 'r')  # cmsCreate_sRGBProfile()
    transform = lc.cmsCreateTransform(src_profile, lc.TYPE_RGB_8, dst_profile, lc.TYPE_CMYK_8,
                                      lc.INTENT_RELATIVE_COLORIMETRIC, lc.cmsFLAGS_NOCACHE)
    n_pixels = 1
    in_comps = 3
    out_comps = 4
    rgb_in = lc.uint8Array(in_comps * n_pixels)
    cmyk_out = lc.uint8Array(out_comps * n_pixels)
    for i in range(in_comps):
        rgb_in[i] = rgb[i]
    lc.cmsDoTransform(transform, rgb_in, cmyk_out, n_pixels)
    cmyk = tuple(cmyk_out[i] for i in range(out_comps * n_pixels))
    return cmyk

def cmyk2rgbColor(cmyk, psrc='C:\\Windows\\System32\\spool\\drivers\\color\\CoatedFOGRA39.icc',
                  pdst='C:\\Windows\\System32\\spool\\drivers\\color\\AdobeRGB1998.icc'):
    ctxt = lc.cmsCreateContext(None, None)
    white = lc.cmsD50_xyY()  # Set white point for D50
    dst_profile = lc.cmsOpenProfileFromFile(pdst, 'r')
    src_profile = lc.cmsOpenProfileFromFile(psrc, 'r')  # cmsCreate_sRGBProfile()
    transform = lc.cmsCreateTransform(src_profile, lc.TYPE_CMYK_8, dst_profile, lc.TYPE_RGB_8,
                                      lc.INTENT_RELATIVE_COLORIMETRIC, lc.cmsFLAGS_NOCACHE)
    n_pixels = 1
    in_comps = 4
    out_comps = 3
    cmyk_in = lc.uint8Array(in_comps * n_pixels)
    rgb_out = lc.uint8Array(out_comps * n_pixels)
    for i in range(in_comps):
        cmyk_in[i] = cmyk[i]
    lc.cmsDoTransform(transform, cmyk_in, rgb_out, n_pixels)
    rgb = tuple(rgb_out[i] for i in range(out_comps * n_pixels))
    return rgb

def rgb2cmykImage(PILImage, psrc='C:\\Windows\\System32\\spool\\drivers\\color\\AdobeRGB1998.icc',
                  pdst='C:\\Windows\\System32\\spool\\drivers\\color\\CoatedFOGRA39.icc'):
    ctxt = lc.cmsCreateContext(None, None)
    white = lc.cmsD50_xyY()  # Set white point for D50
    dst_profile = lc.cmsOpenProfileFromFile(pdst, 'r')
    src_profile = lc.cmsOpenProfileFromFile(psrc, 'r')
    transform = lc.cmsCreateTransform(src_profile, lc.TYPE_RGB_8, dst_profile, lc.TYPE_CMYK_8,
                                      lc.INTENT_RELATIVE_COLORIMETRIC, lc.cmsFLAGS_NOCACHE)
    n_pixels = PILImage.size[0]
    in_comps = 3
    out_comps = 4
    n_rows = 16  # process the image in bands of 16 rows
    rgb_in = lc.uint8Array(in_comps * n_pixels * n_rows)
    cmyk_out = lc.uint8Array(out_comps * n_pixels * n_rows)
    outImage = Image.new('CMYK', PILImage.size, 'white')
    in_row = Image.new('RGB', (PILImage.size[0], n_rows), 'white')
    out_b = bytearray(n_pixels * n_rows * out_comps)
    row = 0
    while row < PILImage.size[1]:
        in_row.paste(PILImage, (0, -row))
        data_in = in_row.tobytes('raw')
        for i in range(in_comps * n_pixels * n_rows):
            rgb_in[i] = data_in[i]
        lc.cmsDoTransform(transform, rgb_in, cmyk_out, n_pixels * n_rows)
        # Copy the transformed band byte by byte; iterating over the uint8Array
        # itself would yield pixel values, not indices.
        for j in range(out_comps * n_pixels * n_rows):
            out_b[j] = cmyk_out[j]
        out_row = Image.frombytes('CMYK', in_row.size, bytes(out_b))
        outImage.paste(out_row, (0, row))
        row += n_rows
    return outImage

def cmyk2rgbImage(PILImage, psrc='C:\\Windows\\System32\\spool\\drivers\\color\\CoatedFOGRA39.icc',
                  pdst='C:\\Windows\\System32\\spool\\drivers\\color\\AdobeRGB1998.icc'):
    ctxt = lc.cmsCreateContext(None, None)
    white = lc.cmsD50_xyY()  # Set white point for D50
    dst_profile = lc.cmsOpenProfileFromFile(pdst, 'r')
    src_profile = lc.cmsOpenProfileFromFile(psrc, 'r')
    transform = lc.cmsCreateTransform(src_profile, lc.TYPE_CMYK_8, dst_profile, lc.TYPE_RGB_8,
                                      lc.INTENT_RELATIVE_COLORIMETRIC, lc.cmsFLAGS_NOCACHE)
    n_pixels = PILImage.size[0]
    in_comps = 4
    out_comps = 3
    n_rows = 16  # process the image in bands of 16 rows
    cmyk_in = lc.uint8Array(in_comps * n_pixels * n_rows)
    rgb_out = lc.uint8Array(out_comps * n_pixels * n_rows)
    outImage = Image.new('RGB', PILImage.size, 'white')
    in_row = Image.new('CMYK', (PILImage.size[0], n_rows), 'white')
    out_b = bytearray(n_pixels * n_rows * out_comps)
    row = 0
    while row < PILImage.size[1]:
        in_row.paste(PILImage, (0, -row))
        data_in = in_row.tobytes('raw')
        for i in range(in_comps * n_pixels * n_rows):
            cmyk_in[i] = data_in[i]
        lc.cmsDoTransform(transform, cmyk_in, rgb_out, n_pixels * n_rows)
        # Same byte-by-byte copy as above.
        for j in range(out_comps * n_pixels * n_rows):
            out_b[j] = rgb_out[j]
        out_row = Image.frombytes('RGB', in_row.size, bytes(out_b))
        outImage.paste(out_row, (0, row))
        row += n_rows
    return outImage
Something to note for anyone implementing this: you probably want to take the uint8 CMYK values (0-255) and scale them into the range 0-100 to better match most color pickers and uses of these values. See my code here: https://gist.github.com/mattdesl/ecf305c2f2b20672d682153a7ed0f133
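A tiny helper for that conversion (hypothetical name; note it scales rather than merely rounds):

```python
def cmyk255_to_percent(cmyk):
    """Scale Little CMS's 8-bit CMYK channel values (0-255) to percentages (0-100)."""
    return tuple(round(c * 100 / 255) for c in cmyk)
```

So full ink coverage (255) becomes 100%, half coverage (128) becomes 50%, matching what most CMYK color pickers display.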