How to add method to Cairo.Context? - cairo

I've created the following function for drawing boxes with rounded corners in Cairo:
void square (Context cr, int x, int y, int sizex, int sizey, int radius)
{
    cr.move_to (x + radius, y);
    cr.arc (x + sizex - radius, y + radius, radius, 1.5 * PI, 0);
    cr.arc (x + sizex - radius, y + sizey - radius, radius, 0, 0.5 * PI);
    cr.arc (x + radius, y + sizey - radius, radius, 0.5 * PI, PI);
    cr.arc (x + radius, y + radius, radius, PI, 1.5 * PI);
}
This is a very C-like way of doing it. I would prefer a more object-oriented approach, such as implementing the function as a method of Cairo.Context.

You can't add methods to an existing class without modifying that class's definition (cairo.vapi in this case). What you can do, however, is subclass Cairo.Context and use the subclass instead of Cairo.Context. Something like this should do the trick:
[Compact]
public class Context : Cairo.Context {
    public void square (int x, int y, int sizex, int sizey, int radius) {
        this.move_to (x + radius, y);
        this.arc (x + sizex - radius, y + radius, radius, 1.5 * Math.PI, 0);
        this.arc (x + sizex - radius, y + sizey - radius, radius, 0, 0.5 * Math.PI);
        this.arc (x + radius, y + sizey - radius, radius, 0.5 * Math.PI, Math.PI);
        this.arc (x + radius, y + radius, radius, Math.PI, 1.5 * Math.PI);
    }

    public Context (Cairo.Surface target) {
        base (target);
    }
}
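For comparison, in Swift you could attach the same helper to CGContext with an extension instead of subclassing. This is a rough Core Graphics analogue of the same four-corner construction (an analogy only, not the Cairo API; the addRoundedRect name is mine):
import CoreGraphics

extension CGContext {
    /// Adds a rounded-rectangle path using an arc-to-tangent call at each corner.
    func addRoundedRect(x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat, radius: CGFloat) {
        move(to: CGPoint(x: x + radius, y: y))
        addArc(tangent1End: CGPoint(x: x + width, y: y),
               tangent2End: CGPoint(x: x + width, y: y + height), radius: radius)
        addArc(tangent1End: CGPoint(x: x + width, y: y + height),
               tangent2End: CGPoint(x: x, y: y + height), radius: radius)
        addArc(tangent1End: CGPoint(x: x, y: y + height),
               tangent2End: CGPoint(x: x, y: y), radius: radius)
        addArc(tangent1End: CGPoint(x: x, y: y),
               tangent2End: CGPoint(x: x + radius, y: y), radius: radius)
        closePath()
    }
}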

Related

Custom bottom Navigation Design

I am trying to implement a custom curved shape like the one in the left image below.
Below is the code I have so far, using quadraticBezierTo:
class CustomNotchedShape extends NotchedShape {
  final BuildContext context;
  const CustomNotchedShape(this.context);

  @override
  Path getOuterPath(Rect host, Rect? guest) {
    const radius = 110.0;
    const lx = 40.0;
    const ly = 20;
    const bx = 10.0;
    const by = 50.0;
    var x = (MediaQuery.of(context).size.width - radius) / 2 - lx;
    return Path()
      ..moveTo(host.left, host.top)
      ..lineTo(x, host.top)
      ..quadraticBezierTo(x + bx, host.top, x += lx, host.top - ly)
      ..quadraticBezierTo(
          x + radius / 2, host.top - by, x += radius, host.top - ly)
      ..quadraticBezierTo((x += lx) - bx, host.top, x, host.top)
      ..lineTo(host.right, host.top)
      ..lineTo(host.right, host.bottom)
      ..lineTo(host.left, host.bottom);
  }
}

iOS OpenGL ES 2.0 Quaternion Rotation Slerp to XYZ Position

I am following the quaternion tutorial: http://www.raywenderlich.com/12667/how-to-rotate-a-3d-object-using-touches-with-opengl and am trying to rotate a globe to some XYZ location. I have an initial quaternion and generate a random XYZ location on the surface of the globe. I pass that XYZ location into the following function. The idea was to generate a lookAt vector with GLKMatrix4MakeLookAt and define the end Quaternion for the slerp step from the lookAt matrix.
- (void)rotateToLocationX:(float)x andY:(float)y andZ:(float)z {
    // Turn on the interpolation for smooth rotation
    _slerping = YES; // Begin auto rotating to this location
    _slerpCur = 0;
    _slerpMax = 1.0;
    _slerpStart = _quat;
    // The eye location is defined by the look at location multiplied by this modifier
    float modifier = 1.0;
    // Create a look at vector for which we will create a GLK4Matrix from
    float xEye = x;
    float yEye = y;
    float zEye = z;
    //NSLog(@"%f %f %f %f %f %f", xEye, yEye, zEye, x, y, z);
    _currentSatelliteLocation = GLKMatrix4MakeLookAt(xEye, yEye, zEye, 0, 0, 0, 0, 1, 0);
    _currentSatelliteLocation = GLKMatrix4Multiply(_currentSatelliteLocation, self.effect.transform.modelviewMatrix);
    // Turn our 4x4 matrix into a quat and use it to mark the end point of our interpolation
    //_currentSatelliteLocation = GLKMatrix4Translate(_currentSatelliteLocation, 0.0f, 0.0f, GLOBAL_EARTH_Z_LOCATION);
    _slerpEnd = GLKQuaternionMakeWithMatrix4(_currentSatelliteLocation);
    // Print info on the quat
    GLKVector3 vec = GLKQuaternionAxis(_slerpEnd);
    float angle = GLKQuaternionAngle(_slerpEnd);
    //NSLog(@"%f %f %f %f", vec.x, vec.y, vec.z, angle);
    NSLog(@"Quat end:");
    [self printMatrix:_currentSatelliteLocation];
    //[self printMatrix:self.effect.transform.modelviewMatrix];
}
The interpolation works, I get a smooth rotation, however the ending location is never the XYZ I input - I know this because my globe is a sphere and I am calculating XYZ from Lat Lon. I want to look directly down the 'lookAt' vector toward the center of the earth from that lat/lon location on the surface of the globe after the rotation. I think it may have something to do with the up vector but I've tried everything that made sense.
What am I doing wrong - How can I define a final quaternion that when I finish rotating, looks down a vector to the XYZ on the surface of the globe? Thanks!
Is the following what you mean?
Your globe center is (0, 0, 0), the radius is R, the start position is (0, 0, R) and the final position is (0, R, 0), so you rotate the globe 90 degrees around the X-axis?
If so, just set the lookAt eye position to your final position and the look-at target to the globe center.
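In GLKit terms, that amounts to something like this (a rough sketch with an assumed unit-radius globe and a hypothetical surface point):
import GLKit

// Sketch: eye at an assumed surface point, looking back at the globe's center (0, 0, 0).
// (If the surface point is parallel to the up vector, pick a different up vector.)
let surfacePoint = GLKVector3Make(0.0, 0.6, 0.8)   // hypothetical point on a unit sphere
let lookAt = GLKMatrix4MakeLookAt(surfacePoint.x, surfacePoint.y, surfacePoint.z,   // eye
                                  0, 0, 0,                                          // center
                                  0, 1, 0)                                          // up
let endQuaternion = GLKQuaternionMakeWithMatrix4(lookAt)   // end point for the slerp
The hand-rolled camera below does the equivalent with explicit target/up/right vectors: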
// Initial camera basis vectors
m_target.x = 0.0f;
m_target.y = 0.0f;
m_target.z = 1.0f;
m_right.x = 1.0f;
m_right.y = 0.0f;
m_right.z = 0.0f;
m_up.x = 0.0f;
m_up.y = 1.0f;
m_up.z = 0.0f;

// Note: cos(PI / 2 - a) == sin(a) and cos(PI / 2 + a) == -sin(a),
// so each Rotate* below is a standard 2D rotation applied to two basis vectors.
void CCamera::RotateX( float amount )
{
    Point3D target = m_target;
    Point3D up = m_up;
    amount = amount / 180 * PI;
    m_target.x = (cos(PI / 2 - amount) * up.x) + (cos(amount) * target.x);
    m_target.y = (cos(PI / 2 - amount) * up.y) + (cos(amount) * target.y);
    m_target.z = (cos(PI / 2 - amount) * up.z) + (cos(amount) * target.z);
    m_up.x = (cos(amount) * up.x) + (cos(PI / 2 + amount) * target.x);
    m_up.y = (cos(amount) * up.y) + (cos(PI / 2 + amount) * target.y);
    m_up.z = (cos(amount) * up.z) + (cos(PI / 2 + amount) * target.z);
    Normalize(m_target);
    Normalize(m_up);
}

void CCamera::RotateY( float amount )
{
    Point3D target = m_target;
    Point3D right = m_right;
    amount = amount / 180 * PI;
    m_target.x = (cos(PI / 2 + amount) * right.x) + (cos(amount) * target.x);
    m_target.y = (cos(PI / 2 + amount) * right.y) + (cos(amount) * target.y);
    m_target.z = (cos(PI / 2 + amount) * right.z) + (cos(amount) * target.z);
    m_right.x = (cos(amount) * right.x) + (cos(PI / 2 - amount) * target.x);
    m_right.y = (cos(amount) * right.y) + (cos(PI / 2 - amount) * target.y);
    m_right.z = (cos(amount) * right.z) + (cos(PI / 2 - amount) * target.z);
    Normalize(m_target);
    Normalize(m_right);
}

void CCamera::RotateZ( float amount )
{
    Point3D right = m_right;
    Point3D up = m_up;
    amount = amount / 180 * PI;
    m_up.x = (cos(amount) * up.x) + (cos(PI / 2 - amount) * right.x);
    m_up.y = (cos(amount) * up.y) + (cos(PI / 2 - amount) * right.y);
    m_up.z = (cos(amount) * up.z) + (cos(PI / 2 - amount) * right.z);
    m_right.x = (cos(PI / 2 + amount) * up.x) + (cos(amount) * right.x);
    m_right.y = (cos(PI / 2 + amount) * up.y) + (cos(amount) * right.y);
    m_right.z = (cos(PI / 2 + amount) * up.z) + (cos(amount) * right.z);
    Normalize(m_right);
    Normalize(m_up);
}

void CCamera::Normalize( Point3D &p )
{
    float length = sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    if (1 == length || 0 == length)
    {
        return;
    }
    float scaleFactor = 1.0 / length;
    p.x *= scaleFactor;
    p.y *= scaleFactor;
    p.z *= scaleFactor;
}
The answer to this question is a combination of the following rotateTo function and a change to the code from Ray's tutorial (http://www.raywenderlich.com/12667/how-to-rotate-a-3d-object-using-touches-with-opengl). As one of the comments on that article says, there is an arbitrary factor of 2.0 being multiplied in GLKQuaternion Q_rot = GLKQuaternionMakeWithAngleAndVector3Axis(angle * 2.0, axis);. Remove that "2" and use the following function to create _slerpEnd; after that the globe will rotate smoothly to the XYZ specified.
// Rotate the globe using Slerp interpolation to an XYZ coordinate
- (void)rotateToLocationX:(float)x andY:(float)y andZ:(float)z {
    // Turn on the interpolation for smooth rotation
    _slerping = YES; // Begin auto rotating to this location
    _slerpCur = 0;
    _slerpMax = 1.0;
    _slerpStart = _quat;
    // Create a look at vector for which we will create a GLK4Matrix from
    float xEye = x;
    float yEye = y;
    float zEye = z;
    _currentSatelliteLocation = GLKMatrix4MakeLookAt(xEye, yEye, zEye, 0, 0, 0, 0, 1, 0);
    // Turn our 4x4 matrix into a quat and use it to mark the end point of our interpolation
    _slerpEnd = GLKQuaternionMakeWithMatrix4(_currentSatelliteLocation);
}
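The per-frame interpolation itself comes from Ray's tutorial; roughly, it looks like this (a sketch in Swift, treating the _slerp* state from the code above as properties; names and timing are assumptions):
import GLKit

final class GlobeRotator {
    var slerping = false
    var slerpCur: Float = 0
    var slerpMax: Float = 1
    var slerpStart = GLKQuaternionIdentity
    var slerpEnd = GLKQuaternionIdentity
    var quat = GLKQuaternionIdentity

    // Called once per frame with the elapsed time.
    func update(deltaTime: Float) {
        guard slerping else { return }
        slerpCur = min(slerpCur + deltaTime, slerpMax)
        let t = slerpCur / slerpMax
        quat = GLKQuaternionSlerp(slerpStart, slerpEnd, t)   // interpolate start -> end
        if slerpCur >= slerpMax { slerping = false }         // arrived at the target orientation
    }

    // The interpolated quaternion becomes the globe's rotation matrix.
    var rotationMatrix: GLKMatrix4 { GLKMatrix4MakeWithQuaternion(quat) }
}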

OpenGL ES: Translate a matrix to a particular point

I am working on an app which requires the use of OpenGL ES, and I have some questions. My task at hand is to rotate a matrix about an arbitrary point, say (0, 0, 0). I did some research on Google and the most common approach is:
translate the matrix to (0,0,0)
Rotate the matrix
Translate the matrix back to its original position
Effectively
glTranslatef(centerX, centerY, centerZ);
glRotatef(angle, 0, 0, 1);
glTranslatef(-centerX, -centerY, -centerZ);
However, my problem is that I am using OpenGL ES 2.0. The function translatef does not exist in OpenGL ES 2.0. I have a function called translateBy, but I am unable to figure out how to use it to translate my matrix to a certain point.
Thanks, any help would be appreciated.
In OpenGL ES 2.0 you have to use a vertex shader and update the modelview matrix every frame using
GLint modelviewmatrix = glGetUniformLocation(m_simpleProgram, "ModelviewMatrix");
matrx4 modelviewMatrix = rotation * translation;
glUniformMatrix4fv(modelviewmatrix, 1, 0, modelviewMatrix.Pointer());
assuming matrx4 is a 4x4 matrix class, and rotation and translation are the 4x4 matrix objects for rotation and translation.
Just make your own translate and rotate functions.
Translatef(x, y, z) is equivalent to
Matrx4 Translate( float x, float y, float z )
{
    Matrx4 m;
    m = { 1, 0, 0, 0,
          0, 1, 0, 0,
          0, 0, 1, 0,
          x, y, z, 1 };
    return m;
}
and Rotatef(degree, vector3 axis) is equivalent to
Matrx4 Rotate( float degree, vector3 axis )
{
    float radians = degree * 3.14159f / 180.0f;
    float s = std::sin(radians);
    float c = std::cos(radians);
    Matrx4 m = Identity(); // load identity matrix
    m[0]  = c + (1 - c) * axis.x * axis.x;
    m[1]  = (1 - c) * axis.x * axis.y - axis.z * s;
    m[2]  = (1 - c) * axis.x * axis.z + axis.y * s;
    m[4]  = (1 - c) * axis.x * axis.y + axis.z * s;
    m[5]  = c + (1 - c) * axis.y * axis.y;
    m[6]  = (1 - c) * axis.y * axis.z - axis.x * s;
    m[8]  = (1 - c) * axis.x * axis.z - axis.y * s;
    m[9]  = (1 - c) * axis.y * axis.z + axis.x * s;
    m[10] = c + (1 - c) * axis.z * axis.z;
    return m;
}
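With these helpers, the rotate-about-a-point recipe from the question is just Translate(center) * Rotate(angle) * Translate(-center). For completeness, here is the same composition using GLKit on iOS (a sketch; the pivot and angle values are hypothetical):
import GLKit

// Hypothetical pivot point and angle; rotate about (centerX, centerY, centerZ) around the Z axis.
let centerX: Float = 1.0, centerY: Float = 2.0, centerZ: Float = 0.0
let angle = GLKMathDegreesToRadians(45.0)

// Equivalent of glTranslatef(c); glRotatef(angle, 0, 0, 1); glTranslatef(-c);
var model = GLKMatrix4MakeTranslation(centerX, centerY, centerZ)
model = GLKMatrix4Rotate(model, angle, 0, 0, 1)
model = GLKMatrix4Translate(model, -centerX, -centerY, -centerZ)
// `model` (optionally combined with a view matrix) is then uploaded with glUniformMatrix4fv
// as shown in the answer above.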

What is kAccelerometerMinStep?

I have been looking at the AccelerometerGraph example in the iOS Developer Library and I have a question about one of the variables it uses...
#define kAccelerometerMinStep 0.02
What is the accelerometer min step, and what role does it have?
Here is how it is being used in the Low Pass Filter...
- (void)addAcceleration:(UIAcceleration *)accel
{
    double alpha = filterConstant;
    if (adaptive)
    {
        double d = Clamp(fabs(Norm(x, y, z) - Norm(accel.x, accel.y, accel.z)) / kAccelerometerMinStep - 1.0, 0.0, 1.0);
        alpha = (1.0 - d) * filterConstant / kAccelerometerNoiseAttenuation + d * filterConstant;
    }
    x = accel.x * alpha + x * (1.0 - alpha);
    y = accel.y * alpha + y * (1.0 - alpha);
    z = accel.z * alpha + z * (1.0 - alpha);
}
And here is how it is being used in the High Pass Filter...
- (void)addAcceleration:(UIAcceleration *)accel
{
    double alpha = filterConstant;
    if (adaptive)
    {
        double d = Clamp(fabs(Norm(x, y, z) - Norm(accel.x, accel.y, accel.z)) / kAccelerometerMinStep - 1.0, 0.0, 1.0);
        alpha = d * filterConstant / kAccelerometerNoiseAttenuation + (1.0 - d) * filterConstant;
    }
    x = alpha * (x + accel.x - lastX);
    y = alpha * (y + accel.y - lastY);
    z = alpha * (z + accel.z - lastZ);
    lastX = accel.x;
    lastY = accel.y;
    lastZ = accel.z;
}
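For reference, the adaptive branch in both filters boils down to the following (a sketch in Swift; the constants other than kAccelerometerMinStep are placeholders, and the reading of the min step as the smallest change in magnitude treated as real movement rather than noise is an interpretation of the code above, not an official definition):
import Foundation

// Placeholder constants (the real values come from the sample project).
let kAccelerometerMinStep = 0.02
let kAccelerometerNoiseAttenuation = 3.0
let filterConstant = 0.1

// d == 0 while the change in magnitude is <= kAccelerometerMinStep,
// and ramps up to 1 once the change reaches 2 * kAccelerometerMinStep.
func adaptiveLowPassAlpha(previousNorm: Double, currentNorm: Double) -> Double {
    let d = min(max(abs(currentNorm - previousNorm) / kAccelerometerMinStep - 1.0, 0.0), 1.0)
    // small changes (d near 0): alpha is divided by the noise attenuation -> heavier smoothing
    // large changes (d near 1): alpha stays at filterConstant -> faster response
    return (1.0 - d) * filterConstant / kAccelerometerNoiseAttenuation + d * filterConstant
}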
If someone could tell me what the min step is responsible for, I would be very grateful.
I would like to capture accelerations ranging in magnitude from 0.05 to 2.00 g with a frequency response of 0.25-2.50 Hz.
Thanks!

Drawing triangle/arrow on a line with CGContext

I am using the route-me framework for working with locations.
In this code the path between two markers (points) is drawn as a line.
My question: what code should I add to draw an arrow in the middle (or at the tip) of the line, so that it points in the direction of travel?
Thanks
- (void)drawInContext:(CGContextRef)theContext
{
    renderedScale = [contents metersPerPixel];
    float scale = 1.0f / [contents metersPerPixel];
    float scaledLineWidth = lineWidth;
    if (!scaleLineWidth) {
        scaledLineWidth *= renderedScale;
    }
    //NSLog(@"line width = %f, content scale = %f", scaledLineWidth, renderedScale);
    CGContextScaleCTM(theContext, scale, scale);
    CGContextBeginPath(theContext);
    CGContextAddPath(theContext, path);
    CGContextSetLineWidth(theContext, scaledLineWidth);
    CGContextSetStrokeColorWithColor(theContext, [lineColor CGColor]);
    CGContextSetFillColorWithColor(theContext, [fillColor CGColor]);
    // according to Apple's documentation, DrawPath closes the path if it's a filled style, so a call to ClosePath isn't necessary
    CGContextDrawPath(theContext, drawingMode);
}
- (void)drawLine:(CGContextRef)context from:(CGPoint)from to:(CGPoint)to
{
    double slopy, cosy, siny;
    // Arrow size
    double length = 10.0;
    double width = 5.0;
    slopy = atan2((from.y - to.y), (from.x - to.x));
    cosy = cos(slopy);
    siny = sin(slopy);
    // draw a line between the 2 endpoints
    CGContextMoveToPoint(context, from.x - length * cosy, from.y - length * siny);
    CGContextAddLineToPoint(context, to.x + length * cosy, to.y + length * siny);
    // paints a line along the current path
    CGContextStrokePath(context);
    // here is the tough part - actually drawing the arrows
    // a total of 6 lines drawn to make the arrow shape
    CGContextMoveToPoint(context, from.x, from.y);
    CGContextAddLineToPoint(context,
                            from.x + (-length * cosy - (width / 2.0 * siny)),
                            from.y + (-length * siny + (width / 2.0 * cosy)));
    CGContextAddLineToPoint(context,
                            from.x + (-length * cosy + (width / 2.0 * siny)),
                            from.y - (width / 2.0 * cosy + length * siny));
    CGContextClosePath(context);
    CGContextStrokePath(context);
    // similarly for the other end
    CGContextMoveToPoint(context, to.x, to.y);
    CGContextAddLineToPoint(context,
                            to.x + (length * cosy - (width / 2.0 * siny)),
                            to.y + (length * siny + (width / 2.0 * cosy)));
    CGContextAddLineToPoint(context,
                            to.x + (length * cosy + width / 2.0 * siny),
                            to.y - (width / 2.0 * cosy - length * siny));
    CGContextClosePath(context);
    CGContextStrokePath(context);
}
The drawing of the actual triangle/arrow is easy once you have two points on your path.
CGContextMoveToPoint( context , ax , ay );
CGContextAddLineToPoint( context , bx , by );
CGContextAddLineToPoint( context , cx , cy );
CGContextClosePath( context ); // for triangle
Getting the points is a little more tricky. You said path was a line, as opposed to a curve or series of curves. That makes it easier.
Use CGPathApply to pick two points on the path. Probably, this is the last two points, one of which may be kCGPathElementMoveToPoint and the other will be kCGPathElementAddLineToPoint. Let mx,my be the first point and nx,ny be the second, so the arrow will point from m towards n.
Assuming you want the arrow at the tip of the line, bx,by from above will equal nx,ny on the line. Choose a point dx,dy between mx,my and nx,ny to calculate the other points.
Now calculate ax,ay and cx,cy such that they are on a line with dx,dy and equidistant from path. The following should be close, although I probably got some signs wrong:
r = atan2( ny - my , nx - mx );
bx = nx;
by = ny;
dx = bx + sin( r ) * length;
dy = by + cos( r ) * length;
r += M_PI_2; // perpendicular to path
ax = dx + sin( r ) * width;
ay = dy + cos( r ) * width;
cx = dx - sin( r ) * width;
cy = dy - cos( r ) * width;
Length is the distance from the tip of the arrow to the base, and width is distance from the shaft to the barbs, or half the breadth of the arrow head.
If path is a curve, then instead of finding mx,my as the previous point or move, it will be the final control point of the final curve. Each control point is on a line tangent to the curve and passing through the adjacent point.
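For the CGPathApply step above, a minimal Swift sketch (using CGPath.applyWithBlock and assuming a path built only from move/line elements; curve handling is left out):
import CoreGraphics

// Returns the last two points of a line-only path: m (second to last) and n (last),
// so the arrow points from m towards n as described above.
func lastTwoPoints(of path: CGPath) -> (m: CGPoint, n: CGPoint)? {
    var previous: CGPoint?
    var current: CGPoint?
    path.applyWithBlock { element in
        switch element.pointee.type {
        case .moveToPoint, .addLineToPoint:
            previous = current
            current = element.pointee.points[0]
        default:
            break   // a curve's control points would need handling here
        }
    }
    guard let m = previous, let n = current else { return nil }
    return (m, n)
}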
I found this question as I had the same one. I took drawnonward's example and it was so close... But by flipping cos and sin, I was able to get it to work:
r = atan2( ny - my , nx - mx );
r += M_PI;
bx = nx;
by = ny;
dx = bx + cos( r ) * length;
dy = by + sin( r ) * length;
r += M_PI_2; // perpendicular to path
ax = dx + cos( r ) * width;
ay = dy + sin( r ) * width;
cx = dx - cos( r ) * width;
cy = dy - sin( r ) * width;
Once I did that, my arrows pointed exactly the wrong way, so I added that second line (r += M_PI;).
Thanks go to drawnonward!
And here is a Swift 4+ version of Friedhelm Brügge's answer (I'll draw it on an image):
func drawArrow(image: UIImage, ptSrc: CGPoint, ptDest: CGPoint) {
    // create context with image size
    UIGraphicsBeginImageContext(image.size)
    let context = UIGraphicsGetCurrentContext()
    // draw current image to the context
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    var slopY: CGFloat, cosY: CGFloat, sinY: CGFloat
    // Arrow size
    let length: CGFloat = 35.0
    let width: CGFloat = 35.0
    slopY = atan2((ptSrc.y - ptDest.y), (ptSrc.x - ptDest.x))
    cosY = cos(slopY)
    sinY = sin(slopY)
    // here is the tough part - actually drawing the arrows
    // a total of 6 lines drawn to make the arrow shape
    context?.setFillColor(UIColor.white.cgColor)
    // arrowhead at the source end
    context?.move(to: CGPoint(x: ptSrc.x, y: ptSrc.y))
    context?.addLine(to: CGPoint(x: ptSrc.x + (-length * cosY - (width / 2.0 * sinY)),
                                 y: ptSrc.y + (-length * sinY + (width / 2.0 * cosY))))
    context?.addLine(to: CGPoint(x: ptSrc.x + (-length * cosY + (width / 2.0 * sinY)),
                                 y: ptSrc.y - (width / 2.0 * cosY + length * sinY)))
    context?.closePath()
    context?.fillPath()
    // arrowhead at the destination end
    context?.move(to: CGPoint(x: ptDest.x, y: ptDest.y))
    context?.addLine(to: CGPoint(x: ptDest.x + (length * cosY - (width / 2.0 * sinY)),
                                 y: ptDest.y + (length * sinY + (width / 2.0 * cosY))))
    context?.addLine(to: CGPoint(x: ptDest.x + (length * cosY + width / 2.0 * sinY),
                                 y: ptDest.y - (width / 2.0 * cosY - length * sinY)))
    context?.closePath()
    context?.fillPath()
    // draw current context to the image view (imgView is an existing UIImageView)
    imgView.image = UIGraphicsGetImageFromCurrentImageContext()
    // close context
    UIGraphicsEndImageContext()
}
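A hypothetical call site (image name and points are placeholders; imgView is assumed to be an existing UIImageView outlet):
// Draws a filled arrowhead at each end of the segment from ptSrc to ptDest on top of "map.png".
if let mapImage = UIImage(named: "map") {
    drawArrow(image: mapImage,
              ptSrc: CGPoint(x: 20, y: 200),
              ptDest: CGPoint(x: 300, y: 80))
}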