I have a scrolled window with a viewport that is some 4000 pixels tall. Is there a way to figure out the mouse position in viewport coordinates? At the moment, what I get is the position within the segment I actually see: if I scroll down to the bottom, I still get something like 600 pixels for the vertical position (the height of my window) instead of the roughly 4000 I expect. If there is no easy way, how can I determine by how much the scrolled window is scrolled? I believe I could then put the pieces together.
Thanks,
v923z
You are looking for the get_vadjustment method of the scrolled window.
For instance, if you connect button_press_event to the scrolled window:
def button_press_event(self, widget, event):
    # widget is the gtk.ScrolledWindow; the vadjustment value is how far it is scrolled
    v_scroll_pos = widget.get_vadjustment().get_value()
    print "y pos + scroll pos = %f" % (event.y + v_scroll_pos,)
    return gtk.TRUE
This would print the y position plus the amount the window is scrolled in the y direction.
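If you also need the horizontal offset, the same idea works with get_hadjustment(); a minimal, untested sketch (the handler and variable names are just placeholders, and it assumes the handler is connected directly to the scrolled window as above):
def button_press_event(self, widget, event):
    # Scroll offsets of the visible segment within the full content
    h_scroll_pos = widget.get_hadjustment().get_value()
    v_scroll_pos = widget.get_vadjustment().get_value()
    # Click position relative to the full (e.g. 4000-pixel-tall) content
    content_x = event.x + h_scroll_pos
    content_y = event.y + v_scroll_pos
    print "content position = (%f, %f)" % (content_x, content_y)
    return True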
I have a MATLAB script that creates a Model block for each element found in a text file.
The problem is that all the Model blocks are created on top of each other in the window, so I'm trying to make a loop like:
for each element in text file
I add a Model block
I place it to the right of the previous one
end
So it can look like this:
As you can see in my screenshot, all the models on the left are stacked on top of each other, and I would like to place them like the one on the right.
I tried this:
m = mdlrefCountBlocks(diagrammeName)+500;
add_block('simulink/Ports & Subsystems/Model',[diagrammeName '/' component_NameValue]);
set_param(sprintf('%s/%s',diagrammeName,component_NameValue), 'ModelFile',component_NameValue);
size_blk = get_param(sprintf('%s/%s',diagrammeName,component_NameValue),'Position');
X = size_blk(1,1);
Y = size_blk(1,2);
Width = size_blk(1,3);
Height = size_blk(1,4);
set_param(sprintf('%s/%s',diagrammeName,component_NameValue),'Position',[X+m Y X+Width Y+Height]);
inside the loop, but it returns the error "Invalid definition of rectangle. Width and height should be positive."
Thanks for helping!
The Position property of a block does not actually contain its width and height, but the positions of its corners on the canvas (see Common Block Properties):
vector of coordinates, in pixels: [left top right bottom]
The origin is the upper-left corner of the Simulink Editor canvas before any canvas resizing. Supported coordinates are between -1073740824 and 1073740823, inclusive. Positive values are to the right of and down from the origin. Negative values are to the left of and up from the origin.
So to move the block to the right, add the offset to both the left and the right coordinate, e.g.:
size_blk = get_param(sprintf('%s/%s',diagrammeName,component_NameValue),'Position');
set_param(sprintf('%s/%s',diagrammeName,component_NameValue),'Position', size_blk + [m 0 m 0]);
Right now it's just bouncing back and forth across the screen:
def moveTriangleTwo {
  triTwo.translateX(triTwoDX)
  if (triTwo.getX < 0.0) {
    // It hit the left wall - go other direction
    triTwo.setX(0.0) // Place it on left wall
    triTwoDX = -triTwoDX // Move in opposite direction
  } else if (triTwo.getX > -1) {
    // It hit the right wall - go other direction
    triTwo.setX(-1.0) // Place it on right wall
    triTwoDX = -triTwoDX // Move in opposite direction
  }
}
Perhaps you should read about vectors and coordinate systems.
The short answer is: on a computer screen, Y is the vertical axis, starting at 0 at the top and increasing downward. X is the horizontal axis, starting at 0 on the left and increasing to the right.
For horizontal movement you change X, for vertical movement you change Y, and for a diagonal you change both at once.
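As an illustration only, here is a minimal sketch of the usual wall-bounce logic, written in Python rather than your framework (x, dx, width and screen_width are placeholder names): the position is kept between 0 and the screen width, and the same pattern with y, dy and the screen height gives vertical bouncing instead.
def step(x, dx, width, screen_width):
    # Advance one frame, then bounce off whichever wall was crossed.
    x += dx
    if x < 0:
        x, dx = 0, -dx                      # hit the left wall, reverse direction
    elif x + width > screen_width:
        x, dx = screen_width - width, -dx   # hit the right wall, reverse direction
    return x, dx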
I'm attempting to detect if there's a sprite node immediately to the left or right of the current sprite node.
This seems straightforward, but I'm seeing an odd behaviour.
I've created a thin rect (width = 1 point) that's the same height as the current node and has the same origin as the current node.
e.g.:
// Create a thin rect that's aligned to the left edge of 'block'
CGRect adjacentFrame;
adjacentFrame = CGRectMake(block.frame.origin.x,
block.frame.origin.y,
1,
block.frame.size.height);
// Shift the rect left a few points to position it to the left of 'block'
adjacentFrame.origin.x -= 10;
Then I test to see if that rect (adjacentFrame) is intersecting a node:
SKPhysicsBody* obstructingBody;
obstructingBody = [self.physicsWorld bodyInRect:adjacentFrame];
Now, the weird thing is, obstructingBody contains 'block' itself!
I've even added code to add a SpriteNode to the scene with a frame of adjacentFrame so I can check the rect's positioning. It's clearly displaying a few points left of 'block' and is clearly not touching it!
Any ideas what could be going on here?
Thanks,
Chris
bodyInRect needs scene coordinates. You provide coordinates in the coordinate space of block.parent. Unless block.parent is the scene itself, you need to convert origin with:
CGRect blockFrame = block.frame;
blockFrame.origin = [self convertPoint:blockFrame.origin toNode:self.scene];
Also, block's width must be less than 10, otherwise your -10 offset isn't enough to move the frame outside the block's frame.
I have a map app where the user can place waypoints manually. I would like them to be able to press the waypoint button and have a waypoint placed at the center of the currently visible part of the content view.
I'm afraid you'd have to calculate it yourself. contentSize returns the size of the scrolled content, and contentOffset gives you the origin of the visible area inside that content. Then, with scrollView.bounds.size, you can find the center of the visible view.
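For illustration, the arithmetic would look roughly like this (a rough sketch in plain Python rather than iOS code; content_offset, bounds_size and zoom_scale stand in for the scroll view's contentOffset, bounds.size and zoomScale):
def visible_center_in_map_coords(content_offset, bounds_size, zoom_scale):
    # Center of the visible region, in the (zoomed) content coordinate space
    center_x = content_offset[0] + bounds_size[0] / 2.0
    center_y = content_offset[1] + bounds_size[1] / 2.0
    # contentOffset is in zoomed coordinates, so divide by the zoom factor
    # to map the point back onto the full-size map image
    return (center_x / zoom_scale, center_y / zoom_scale)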
Haven't tested this, but maybe you could convert scrollView.center to your scrolled map like this:
CGPoint viewportCenterInMapCoords =
[scrollView.superview convertPoint:scrollView.center
toView:mapViewInsideScrollView];
I needed to account for how zoomed in it is, then convert the content offset to the scale of the full image and add half the visible size.
/// this is the full size of the map image
CGSize fullSize = CGSizeMake(13900, 8400);
/// determines how the current content size compares to the full size
float zoomFactor = fullSize.width/self.contentSize.width;
/// apply the zoom factor to the content offset; this basically upscales
/// the content offset to apply to the dimensions of the full size map
float newContentOffsetX = self.contentOffset.x*zoomFactor + (self.bounds.size.width/2) *zoomFactor-300;
float newContentOffsetY = self.contentOffset.y*zoomFactor + (self.bounds.size.height/2) * zoomFactor-300;
/// not sure why I needed to subtract the 300, but the formula wasn't putting
/// the point in the exact center; subtracting 300 put it there in all situations though
CGPoint point = CGPointMake(newContentOffsetX, newContentOffsetY);
I have a custom UIView which is drawn using its -[drawRect:] method.
The problem is that the anti-aliasing acts very weird: black horizontal or vertical lines are drawn very blurry.
If I disable anti-aliasing with CGContextSetAllowsAntialiasing, everything is drawn as expected.
Anti-Aliasing: http://dustlab.com/stuff/antialias.png
No Anti-Aliasing (which looks like the expected result with AA): http://dustlab.com/stuff/no_antialias.png
The line width is exactly 1, and all coordinates are integral values.
The same happens if I draw a rectangle using CGContextStrokeRect, but not if I draw exactly the same CGRect with UIRectStroke.
Since a stroke expands an equal amount to both sides of the path, a line of one pixel width must not be placed on integer coordinates, but at a 0.5 pixel offset.
Calculate correct coordinates for stroked lines like this:
CGPoint aligned = CGPointMake(floorf(pos.x) + 0.5f, floorf(pos.y) + 0.5f);
BTW: Don't cast your values to int and back to float to get rid of the decimal part. There's a function for this in C called floor.
In your view frames, you probably have float values that are not integers. While the frames are precise enough to represent fractions of a pixel (float), you will get blurriness unless you cast to an int:
CGRect frame = CGRectMake((int)self.frame.bounds..., (int)...., (int)...., (int)....);