How to change constraint size in pymunk?

I found out how to change the colour of constraints:
draw_options = pymunk.pygame_util.DrawOptions(screen)
draw_options.constraint_color = 200,200,200
But when drawing small objects, the size of the constraint appears to be too large and makes it look bad.
Is there a way to reduce the size of those pin joints? Instead of a radius of 5 pixels, I'd prefer 1 or 2 pixel radius joints/constraints.
An alternative was to make it partly transparent, but adding an alpha component to the colour doesn't seem to work.
draw_options.constraint_color = 200,200,200,50

Unfortunately the debug draw color for constraints doesn't work: https://github.com/viblo/pymunk/issues/160
But in general, if you want special drawing, it's probably easiest to do it yourself. Debug draw is mainly meant for debugging and quick prototyping, so if you need more than what's included, try drawing it yourself instead. There are some examples that do custom drawing and do not depend on the debug draw code.
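For example, here is a minimal sketch of drawing PinJoints yourself with pygame, assuming you have a pymunk space and a pygame screen; the 1-pixel line and 2-pixel anchor circles are the sizes asked about, and pymunk-to-pygame coordinate flipping is ignored for brevity:

import pygame
import pymunk

def draw_constraints(screen, space):
    # Draw each PinJoint as a thin line with small anchor circles,
    # instead of relying on pymunk's debug draw.
    for constraint in space.constraints:
        if isinstance(constraint, pymunk.PinJoint):
            # Anchors are in body-local coordinates; convert to world.
            p1 = constraint.a.local_to_world(constraint.anchor_a)
            p2 = constraint.b.local_to_world(constraint.anchor_b)
            pygame.draw.line(screen, (200, 200, 200), p1, p2, 1)
            pygame.draw.circle(screen, (200, 200, 200), (int(p1.x), int(p1.y)), 2)
            pygame.draw.circle(screen, (200, 200, 200), (int(p2.x), int(p2.y)), 2)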


Is it possible to fill transparency with white in a texture in code?

I have some textures containing transparent parts (a donut, for example, which would show a transparent center). What I want to do is fill the middle of the donut (or anything else) with plain white, in code (I don't want to keep a duplicate of every asset that needs this tweak in one part of my game).
Is there a way to do this? Or do I really have to have 2 of each of my assets?
First, it is certainly possible to change a transparent texture to a non-transparent one; if it weren't, graphics editors would be in trouble.
Solution 1 - Easy but takes repetitive editing by hand
The question you should be asking yourself is whether you can afford the conversion at run time, or whether having two sets of textures would be more efficient; from experience, I find that the latter tends to be more efficient.
Solution 2 - Extremely hard
You will need a shader that supports transparency and marks the sections that have to be shaded white; that is, it keeps track of which areas will later be filled with white. Since your "donut" is already transparent in places, it presumably already uses a texture with an alpha channel, but you will have to write your own shader mask and be able to distinguish which parts are okay to fill with white and which are not (a fun problem). What you need to do is find the condition under which that alpha no longer needs to be alpha and has to become white; once the condition is met, you can change the alpha via the color's alpha property. The only way I see you being able to do this is if there is a pattern to the objects, so that you can apply some mathematical model to them and use it to find which area gets filled. If the objects are very different, then making two sets of textures starts to look more appealing.
Solution 3 - Medium with high re-use value
You could edit the textures to use two marker colors, say pink and green: green is the area that gets turned white, and pink is always transparent. When the green area should not be white, it is rendered transparent instead. You would have to edit your textures by hand as well.
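As an illustration of the idea only (the question doesn't name an engine, so this uses Pillow, and the exact marker colors are assumptions), the per-pixel mapping could look like this:

from PIL import Image

def apply_color_key(path, green_becomes_white=True):
    # Green marks the fillable area; pink is always transparent.
    img = Image.open(path).convert("RGBA")
    pixels = img.load()
    for y in range(img.height):
        for x in range(img.width):
            r, g, b, a = pixels[x, y]
            if (r, g, b) == (0, 255, 0):
                pixels[x, y] = (255, 255, 255, 255) if green_becomes_white else (0, 0, 0, 0)
            elif (r, g, b) == (255, 0, 255):
                pixels[x, y] = (0, 0, 0, 0)
    return img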

How to draw variable-width UIBezierPath?

I'm wondering how I should go about drawing a UIBezierPath where the stroke width peaks at the center of the arc. Here's an example of what I mean:
Either I have to go through each point when drawing, and set the stroke width accordingly, or there's an easier way. Can anyone point me in the right direction?
Thanks
You can just draw the two outer paths with no stroke, join them, and then fill in the space between them.
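A minimal Objective-C sketch of that idea, with arbitrary control points: the inner curve meets the outer one at the ends and sits further away in the middle, which is what produces the width peak at the center.

UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(20, 100)];
[path addQuadCurveToPoint:CGPointMake(220, 100)
             controlPoint:CGPointMake(120, 20)];   // outer edge
[path addQuadCurveToPoint:CGPointMake(20, 100)
             controlPoint:CGPointMake(120, 60)];   // inner edge, traced back
[path closePath];

[[UIColor blackColor] setFill];
[path fill];   // no stroke needed; the fill is the variable-width line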
Another way to try this if you're interested:
I ended up getting this to work by creating a loop to draw a couple hundred line segments, and changing the line width accordingly during the draw loop.
To adjust the line width I used the following function: MAX_WIDTH * sinf(M_PI * ((float)i / NUMBER_OF_SEGMENTS)) (the cast matters, since integer division would truncate to zero).
Looks great and no performance issues as far as I can tell. Worked out particularly well because I already had a list of the points to use on the curve. For other cases I'm guessing it would be better to use sosborn's method.
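For reference, a sketch of what that loop could look like, assuming points holds NUMBER_OF_SEGMENTS + 1 precomputed CGPoints along the curve (the names and constants are illustrative):

#define NUMBER_OF_SEGMENTS 200
#define MAX_WIDTH 10.0f

for (int i = 0; i < NUMBER_OF_SEGMENTS; i++) {
    UIBezierPath *segment = [UIBezierPath bezierPath];
    [segment moveToPoint:points[i]];
    [segment addLineToPoint:points[i + 1]];
    // Width follows a half sine wave: 0 at the ends, MAX_WIDTH at the middle.
    segment.lineWidth = MAX_WIDTH * sinf(M_PI * ((float)i / NUMBER_OF_SEGMENTS));
    segment.lineCapStyle = kCGLineCapRound;   // hides the seams between segments
    [segment stroke];
}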

Composite Chart + Objective C

I want to implement the chart-like structure shown below.
Explanation:
1. Each block should be clickable.
2. If a block is selected, it will be highlighted (i.e. the red block in the figure).
I initially googled for this but was unable to find anything. What should the "drawing logic" for this be, including animation? Thanks in advance.
I think you need to use MCSegmentedControl.
You can get it from here.
Generally speaking, I'd have an image for the outline with a transparent middle, then dynamically create colored blocks behind it of the appropriate colors, with dynamic labels. The highlighting is a little tricky, but could be done with a set of image overlays. One could also try to shrink and expand fixed images for the bars/highlighting, but iPhone scales images poorly.
(Will it always be 4 blocks? There are a couple of other ways to manage it using fixed-size images overlaying each other.)
Maybe you should look into using CALayer for this?
You need to implement this type of logic using buttons; just scale each button's width according to its percentage.
To get the rounded-rect appearance, use the code below, and don't forget to import the QuartzCore framework in the class file.
And to round only the outer ends, the first and last buttons need some overlap with the neighbouring buttons.
#import <QuartzCore/QuartzCore.h>   // needed for the layer properties below

btn.layer.cornerRadius = 8.0;   // rounds the button's corners
btn.layer.borderWidth = 0.5;    // thin outline around each block
btn.layer.borderColor = [[UIColor blackColor] CGColor];
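For illustration, here is a sketch of the whole button-per-block idea; the values, sizes, and selector name are placeholders:

- (void)buildBarWithValues:(NSArray *)values totalWidth:(CGFloat)totalWidth {
    CGFloat x = 0;
    for (NSUInteger i = 0; i < values.count; i++) {
        CGFloat fraction = [values[i] floatValue];   // e.g. @0.25 for 25%
        UIButton *btn = [UIButton buttonWithType:UIButtonTypeCustom];
        btn.frame = CGRectMake(x, 0, totalWidth * fraction, 30);
        btn.backgroundColor = [UIColor lightGrayColor];
        btn.tag = i;   // identifies the block when tapped
        // Apply the layer styling shown above (cornerRadius, border) here.
        [btn addTarget:self action:@selector(blockTapped:)
      forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:btn];
        x += totalWidth * fraction;
    }
}

- (void)blockTapped:(UIButton *)sender {
    sender.backgroundColor = [UIColor redColor];   // highlight the selection
}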

iPad UIColor Saturation Issues

I am trying to draw a UIColor on the screen of a view-based app, and I am trying to do so using HSB. It is absolutely necessary for me to use HSB in this case. I can create a UIColor object with any S value from 0.0f to 0.75f, but past that the numerical changes have no effect on the actual saturation displayed. I need it to be 1.0f, but it is still using 0.75f. Any ideas on why it is doing that, and how I can make it work?
Because of how it works, + (UIColor *)colorWithHue:(CGFloat)hue saturation:(CGFloat)saturation brightness:(CGFloat)brightness alpha:(CGFloat)alpha actually does not use HSBA values internally; it is simply a wrapper around the device RGB color space.
I think that in extreme cases there could well be a chance that a constant H/B/A plus an S between 0.75 and 1 yields colors that differ so slightly the difference becomes imperceptible, even though the color components are tracked internally as very precise floats. As saturation drops, the number of "available" colors decreases (the display can only show so many colors, and lowering the saturation compresses the usable range), so the chance of a collision simply rises.
Given that your scenario uses colors with H between 0 and 1, B = 1 and A = 1, which nearly invalidates my assumption, I was curious and made a test project; the colors, however, worked correctly. I'm on the iOS 4 SDK GM, so it may help if we know which SDK you're working against.
After doing some experimentation, I've discovered what my issue was.
I was using a for loop to draw single-pixel lines across a view, each with a hue value greater than the previous one. I was doing this to create a color spectrum to be used for a color picker.
My issue arose because I was using CGContext paths, not rects, to do the drawing. By default, a stroked path "straddles" the path line with pixels on either side. Because I was setting the width to one, Core Graphics was forced to average between adjacent pixels, creating a desaturated effect. Setting the width to two fixed the saturation, but the gradient of the spectrum was no longer smooth.
My fix for this issue was to use rects instead of paths. They did not blend between pixels, and the saturation issue was fixed.
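For illustration, a sketch of the fixed version: filling one-pixel-wide rects, one per hue step, inside drawRect: of a UIView subclass (the names are assumptions):

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat width = self.bounds.size.width;
    CGFloat height = self.bounds.size.height;
    for (int i = 0; i < (int)width; i++) {
        UIColor *color = [UIColor colorWithHue:(CGFloat)i / width
                                    saturation:1.0f
                                    brightness:1.0f
                                         alpha:1.0f];
        CGContextSetFillColorWithColor(ctx, color.CGColor);
        // Rect fills land on whole pixels; a one-pixel stroked path would
        // straddle the line and blend adjacent hue columns together.
        CGContextFillRect(ctx, CGRectMake(i, 0, 1, height));
    }
}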

iPhone: How to Determine Average Light/Dark of an Area of an UIImage

I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.
I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values.
Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this?
I've thought of two possible approaches:
Perform some "folding" operations i.e. combining pixels from one half of the area to the other half. Then repeat until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?
Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.
I think this problem comes up a lot these days, with people being so fond of customizing backgrounds. It seems like something that would be worth my time to bang out a category or class to handle, and then share around.
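For what the second approach might look like: a minimal sketch that renders the region of interest into a 1x1 bitmap so Core Graphics does the averaging (the method name and the Rec. 601 luma weights are my choices, not from the question):

- (CGFloat)averageLuminanceOfImage:(UIImage *)image inRect:(CGRect)rect {
    uint8_t pixel[4] = {0};
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(space);
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);

    // Crop to the label's frame, then draw it into the 1x1 context;
    // downscaling averages the pixels for us.
    CGImageRef region = CGImageCreateWithImageInRect(image.CGImage, rect);
    CGContextDrawImage(ctx, CGRectMake(0, 0, 1, 1), region);
    CGImageRelease(region);
    CGContextRelease(ctx);

    // Perceived luminance (Rec. 601 weights); alpha is already premultiplied
    // into the color channels by the bitmap format chosen above.
    return (0.299f * pixel[0] + 0.587f * pixel[1] + 0.114f * pixel[2]) / 255.0f;
}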
What about simply outlining your text in a way that it will show on both dark and light backgrounds?
This is how it is handled in other situations where text must be displayed over a background with unknown content (for example, films with subtitles).
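A minimal sketch of the outlining idea, assuming a UILabel; the negative stroke width is what makes the text both stroked and filled:

label.attributedText = [[NSAttributedString alloc]
    initWithString:@"Caption"
        attributes:@{
            NSForegroundColorAttributeName: [UIColor whiteColor],
            NSStrokeColorAttributeName:     [UIColor blackColor],
            NSStrokeWidthAttributeName:     @(-3.0)   // negative = stroke + fill
        }];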