How to draw drop shadows in iOS - iPhone

A CAShapeLayer uses a CGPathRef to draw its content. So I have a star path, and I want a smooth drop shadow with a radius of about 15 units. There is probably some nice functionality in newer iPhone OS versions, but I need to do it myself for the old 3.0 version (which most people still use).
I tried to do some REALLY nasty stuff:
I created a for-loop that sequentially created about 15 of those paths, transform-scaling them step by step to become bigger, then assigned each to a newly created CAShapeLayer and decreased its alpha a little on every iteration. Not only is this scaling mathematically incorrect (it should happen relative to the outline!), the shadow is also not rounded and looks really ugly. That's why nice soft shadows have a radius.
The tips of a star shouldn't still appear totally sharp after a shadow size of 15 units. They should be soft like cream. But in my ugly solution they're just as sharp as the star itself, since all I do is scale the star 15 times and decrease its alpha 15 times. Ugly.
I wonder how the big guys do it. If you had an arbitrary path, and that path had to cast a shadow, how would the algorithm for that work? Probably the path would have to be expanded some 30 times, point by point relative to the tangent of the outline away from the filled part, and only by 0.5 units each step to get a nice blend.
Before I re-invent the wheel, maybe someone has a handy example or link?

A shadow is a translucent grayscale mask of the object's shape, blurred and offset.
CGContextSetShadowWithColor and CGContextSetShadow are how this is done on the iPhone. You set the shadow, then draw something, and a shadow is applied as well.
A CAShapeLayer does not have an easy option to apply a shadow. You will have to create a custom view or layer and set the shadow before drawing your shape.
I have not tried it, but the following might work:
@interface ShadowShapeLayer : CAShapeLayer
@end

@implementation ShadowShapeLayer

- (void)drawInContext:(CGContextRef)context {
    CGContextSaveGState(context);
    // 5x5 point offset, 15-unit blur radius
    CGContextSetShadow(context, CGSizeMake(5, 5), 15);
    [super drawInContext:context];
    CGContextRestoreGState(context);
}

@end
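If that works, using the layer is just a matter of handing it your star path. A minimal, untested usage sketch (the method name and the path argument are placeholders, not something from the question):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

- (void)addStarWithShadowToView:(UIView *)view path:(CGPathRef)starPath
{
    ShadowShapeLayer *starLayer = [ShadowShapeLayer layer];
    starLayer.frame = view.bounds;
    starLayer.path = starPath;                        // CAShapeLayer's path property
    starLayer.fillColor = [UIColor yellowColor].CGColor;
    [view.layer addSublayer:starLayer];
    [starLayer setNeedsDisplay];                      // forces -drawInContext: to run
}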
Edit: Thanks Miser.

I asked myself the same question. I'm not an expert on this topic at all, but I had the following thought: Physically, one point of a drawing should result in a circular (or elliptical), semi-transparent shadow. So an entire drawing, which consists of multiple points, should result in the combination of a lot of such circular shadows.
So I painted a little shadow in Photoshop (brush tool, size 7, opacity 33%, color #3b3b3b). It's hardly visible:
Then I wrote a small HTML page with JavaScript just to try it out and see what it looks like (definitely not the ideal technique :-):
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<title>Title</title>
<script type="text/javascript" language="javascript">
function pageload() {
var drawingContainerElem = document.getElementById("drawing-container");
var shadowContainerElem = document.getElementById("shadow-container");
this.drawDot = function(x, y) {
var imgElem = document.createElement("img");
imgElem.style.left = x + "px";
imgElem.style.top = y + "px";
imgElem.src = "blue-dot.png";
drawingContainerElem.appendChild(imgElem);
}
this.drawShadow = function(x, y) {
var imgElem = document.createElement("img");
imgElem.style.left = x + "px";
imgElem.style.top = y + "px";
imgElem.src = "soft-shadow.png";
shadowContainerElem.appendChild(imgElem);
}
this.drawDotAndShadow = function(x, y) {
drawShadow(x - 5, y - 1);
drawDot(x, y);
}
for (var x = 50; x < 70; x ++) {
for (var y = 50; y < 58; y ++) {
drawDotAndShadow(x, y);
}
}
for (var x = 0; x < 15; x ++) {
for (var y = 0; y < x; y ++) {
drawDotAndShadow(69 + 15 - x, 54 + y);
drawDotAndShadow(69 + 15 - x, 54 - y);
}
}
}
</script>
<style type="text/css">
#drawing-container {
position: absolute;
left: 2em;
top: 2em;
width: 400px;
height: 400px;
z-index: 2;
}
#shadow-container {
position: absolute;
left: 2em;
top: 2em;
width: 400px;
height: 400px;
z-index: 1;
}
#drawing-container img {
position: absolute;
}
#shadow-container img {
position: absolute;
}
</style>
</head>
<body onload="javascript:pageload()">
<div id="drawing-container"></div>
<div id="shadow-container"></div>
</body>
</html>
This is the result:
There's probably a lot of room for optimization, and of course you wouldn't really render it with JavaScript in this way... Maybe you can find a way to render this efficiently on the iPhone? If so, let me know!
Possible improvements:
Make the center of the shadow circle darker (more opacity), and the rest lighter (less opacity) to achieve a core shadow.
Scale the shadow: Make it a bit smaller, to achieve a depth effect.
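Translating that idea to Core Graphics on the iPhone, each such "dot" could be a small radial gradient that fades from semi-transparent grey in the centre to fully transparent at the rim, which also gives you the darker core from the first improvement. A sketch of just that building block (the helper name and colour values are invented), which you would then stamp repeatedly along the outline:

#include <CoreGraphics/CoreGraphics.h>

// One soft "shadow dot": roughly #3b3b3b at 33% opacity in the centre,
// fading to fully transparent at the edge.
static void DrawSoftShadowDot(CGContextRef ctx, CGPoint center, CGFloat radius)
{
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGFloat components[8] = { 0.23, 0.23, 0.23, 0.33,
                              0.23, 0.23, 0.23, 0.00 };
    CGFloat locations[2] = { 0.0, 1.0 };
    CGGradientRef gradient = CGGradientCreateWithColorComponents(space, components, locations, 2);

    CGContextDrawRadialGradient(ctx, gradient, center, 0.0, center, radius, 0);

    CGGradientRelease(gradient);
    CGColorSpaceRelease(space);
}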

Related

Game of Life cells as subpixels on an HTML canvas

My Game of Life implementation displays cells on a canvas, using Leaflet to allow zooming and panning. At the highest zoom level, one cell corresponds to one pixel. But after zooming out once, 4 cells with coordinates (x,y); (x+1,y); (x,y+1); (x+1,y+1) correspond to the same pixel. If one of these cells is born,
ctx.fillRect(x, y, 1, 1);
makes the pixel black, if the cell later dies,
ctx.clearRect(x, y, 1, 1);
makes the pixel grey, not white again. This is also demonstrated by the following HTML code:
<!DOCTYPE html><html>
<head>
<script type="text/javascript">
function draw() {
var ctx = document.querySelector("canvas").getContext("2d");
ctx.setTransform(0.5, 0, 0, 0.5, 0, 0);
ctx.fillRect(0, 0, 1, 1);
ctx.clearRect(0, 0, 1, 1);
}
</script>
</head>
<body onload="draw()">
<canvas width="1" height="1"></canvas>
</body>
</html>
Filling a rectangle that covers less than one pixel and then clearing it again leaves the pixel grey.
In order to issue clearRect commands that cover a whole pixel, I would have to consider the status of all 2^(2n) cells at zoom level n when one of them dies. Are there better alternatives? My data structure is such that I look at only three x coordinates in parallel at any time.
(My code currently contains a clearPixel option that clears the pixel already if any of the 2^(2n) cells dies, because this looks somewhat better to me than the alternative.)

Leaflet.js (or other solution) zoom to magnified pixels without blur

I've been using Leaflet to display raster images lately.
What I would like to do for a particular project is be able to zoom in to an image so the pixels become magnified on the screen in a sharply delineated way, such as you would get when zooming in to an image in Photoshop or the like. I would also like to retain, at some zoom level before maximum, a 1:1 correspondence between image pixel and screen pixel.
I tried going beyond maxNativeZoom as described here and here, which works but the interpolation results in pixel blurring.
I thought of an alternative which is to make the source image much larger using 'nearest neighbour' interpolation to expand each pixel into a larger square: when zoomed to maxNativeZoom the squares then look like sharply magnified pixels even though they aren't.
Problems with this are:
image size and tile count get out of hand quickly (original image is 4096 x 4096)
you never get the 'pop' of a 1:1 correspondence between image pixel and screen pixel
I have thought about using two tile sets: the first from the original image up to its maxNativeZoom, and then the larger 'nearest neighbour' interpolated image past that, following something like this.
But, this is more complex, doesn't avoid the problem of large tile count, and just seems inelegant.
So:
Can Leaflet do what I need it to, and if so, how?
If not, can you point me in the right direction to something that can (for example, it would be interesting to know how this is achieved)?
Many thanks
One approach is to leverage the image-rendering CSS property. This can hint the browser to use nearest-neighbour interpolation on <img> elements, such as Leaflet map tiles.
e.g.:
img.leaflet-tile {
image-rendering: pixelated;
}
See a working demo. Beware of incomplete browser support.
A more complicated approach (but one that works across more browsers) is to leverage WebGL; in particular Leaflet.TileLayer.GL.
This involves some internal changes to Leaflet.TileLayer.GL to support a per-tile uniform, most critically setting the uniform value to the tile coordinate in each tile render...
gl.uniform3f(this._uTileCoordsPosition, coords.x, coords.y, coords.z);
...having a L.TileLayer that "displays" a non-overzoomed tile for overzoomed tile coordinates (instead of just skipping the non-existent tiles)...
var hackishTilelayer = new L.TileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
'attribution': 'Map data © OpenStreetMap contributors',
maxNonPixelatedZoom: 3
});
hackishTilelayer.getTileUrl = function(coords) {
if (coords.z > this.options.maxNonPixelatedZoom) {
return this.getTileUrl({
x: Math.floor(coords.x / 2),
y: Math.floor(coords.y / 2),
z: coords.z - 1
});
}
// Skip L.TileLayer.prototype.getTileUrl.call(this, coords), instead
// apply the URL template directly to avoid maxNativeZoom shenanigans
var data = {
r: L.Browser.retina ? '@2x' : '',
s: this._getSubdomain(coords),
x: coords.x,
y: coords.y,
z: coords.z // *not* this._getZoomForUrl() !
};
var url = L.Util.template(this._url, L.Util.extend(data, this.options));
return url;
}
... plus a fragment shader that rounds down texel coordinates prior to texel fetches (plus a tile-coordinate-modulo-dependent offset), to actually perform the nearest-neighbour oversampling...
var fragmentShader = `
highp float factor = max(1., pow(2., uTileCoords.z - uPixelatedZoomLevel));
vec2 subtileOffset = mod(uTileCoords.xy, factor);
void main(void) {
vec2 texelCoord = floor(vTextureCoords.st * uTileSize / factor ) / uTileSize;
texelCoord.xy += subtileOffset / factor;
vec4 texelColour = texture2D(uTexture0, texelCoord);
// This would output the image colours "as is"
gl_FragColor = texelColour;
}
`;
...all tied together in an instance of L.TileLayer.GL (which syncs some numbers for the uniforms around):
var pixelated = L.tileLayer.gl({
fragmentShader: fragmentShader,
tileLayers: [hackishTilelayer],
uniforms: {
// The shader will need the zoom level as a uniform...
uPixelatedZoomLevel: hackishTilelayer.options.maxNonPixelatedZoom,
// ...as well as the tile size in pixels.
uTileSize: [hackishTilelayer.getTileSize().x, hackishTilelayer.getTileSize().y]
}
}).addTo(map);
You can see everything working together in this demo.

EaselJS shape x,y properties confusion

I generate a 4x4 grid of squares with the code below. They all draw in the correct position, in rows and columns, on the canvas on stage.update(). But on inspection the x,y coordinates of all sixteen of them are 0,0. Why? Does each shape have its own x,y coordinate system? If so, when I get a handle to a shape, how do I determine where it was originally drawn onto the canvas?
The EaselJS documentation is silent on the topic ;-). Maybe you had to know Flash.
var stage = new createjs.Stage("demoCanvas");
for (var i = 0; i < 4; i++) {
    for (var j = 0; j < 4; j++) {
        var square = new createjs.Shape();
        square.graphics.drawRect(i * 100, j * 100, 100, 100);
        console.log("Created square " + square.x + "," + square.y);
        stage.addChild(square);
    }
}
You are drawing the graphics at the coordinates you want, instead of drawing them at 0,0, and moving them using x/y coordinates. If you don't set the x/y yourself, it will be 0. EaselJS does not infer the x/y or width/height based on the graphics content (more info).
Here is an updated fiddle where the graphics are all drawn at [0,0], and then positioned using x/y instead: http://jsfiddle.net/0o63ty96/
Relevant code:
square.graphics.beginStroke("red").drawRect(0,0,100,100);
square.x = i * 100;
square.y = j * 100;

Double the size of the div to make pattern background-image work properly

I have a div element with a background-image. The image is a pattern which cannot be cut, so it always has to end at the bottom of the image. The image height is 400px. So I want the div to always be 400px high or more, but its height has to be divisible by 400px so that the background image isn't cut off when the text overflows. Example: 400px -> 800px -> 1200px, etc.
If the content doesn't change after the page load, you can use some simple JavaScript:
window.onload = function () {
    var divs = document.getElementsByClassName('exampleClass');
    for (var i = 0; i < divs.length; i++) {
        var height = divs[i].offsetHeight;
        if (height % 400 != 0) {
            // round the height up to the next multiple of 400px
            divs[i].style.height = (height + 400 - (height % 400)) + "px";
        }
    }
};

Is CGContextAddArc really that slow (compared to a circle drawn with a few lines)?

Folks,
While coding up a few dials and sliders (e.g. a big volume button one can rotate) - I found the standard CGContextAddArc(), used like this:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextSetLineWidth(ctx, radius * (KE - KR) + 8);
    CGContextSetStrokeColorWithColor(ctx, self.foregroundColor.CGColor);
    // ... some more colour/width/etc. settings
    // ...
    CGContextAddArc(ctx, dx, dy, radius, 0, 2 * M_PI, 0);
to be unbelievably slow.
On an iPad - with a handful of filled/stroked circles - I get less than some 10 clean [self setNeedsDisplay] updates per second during a drag. A very quick hack with a hand-drawn circle (shown below) was several orders of magnitude faster. The same applies to the Simulator.
Why is this? It seems to be the case for both a normal fill and various gradient fills. What am I doing wrong?
Dw.
// Stupid replacement for CGContextAddArc(), which seems to be very slow.
//
void CGContextAddCircle(CGContextRef ctx, float ox, float oy, float radius)
{
    double len = 2 * M_PI * radius;
    double step = 1.8 / len; // over the top :)

    // translating/scaling would be more efficient, etc..
    //
    float x = ox + radius;
    float y = oy;

    // stupid hack - should just do a quadrant and mirror it twice.
    //
    CGContextMoveToPoint(ctx, x, y);
    for (double a = step; a < 2.0 * M_PI - step; a += step) {
        x = ox + radius * cos(a);
        y = oy + radius * sin(a);
        CGContextAddLineToPoint(ctx, x, y);
    };
    CGContextClosePath(ctx);
};
The vector drawing operations of Quartz 2D can be slow, which is why it is a good idea to redraw only when needed.
In your case, I would suggest drawing your volume button once, then transforming the UIView or CALayer into which you've drawn the button using a rotational transform. By simply moving, rotating, or scaling a view, you do not trigger an expensive redraw. The content is already cached as a texture, and the GPU can quickly manipulate and composite this rasterized content on top of your other views.
You'll find that avoiding redrawing in this manner will yield much improved performance.
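As a rough sketch of that suggestion (dialView and the method name are placeholders, not from the question's code): draw the knob once into its own view, then during a drag only rotate the already-rendered layer:

#import <QuartzCore/QuartzCore.h>

// dialView's -drawRect: has already run once; from here on we only rotate
// the cached layer contents instead of redrawing the arcs on every touch.
- (void)rotateDialToAngle:(CGFloat)angle
{
    [CATransaction begin];
    [CATransaction setDisableActions:YES];   // no implicit animation while dragging
    self.dialView.layer.transform = CATransform3DMakeRotation(angle, 0.0f, 0.0f, 1.0f);
    [CATransaction commit];
}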
Issue partly (mostly) resolved.
Extensive benchmarking does show that AddArc is indeed slow compared to drawing a complete circle with a straight-line vector path, for circles in the 100-200 pixel radius range. For partial circles the effect is much less pronounced; I am wondering if this is tied to the number of Béziers.
BUT:
The code above did not compile as one would read it; M_PI was not the 3.14... actually expected, but had been set to (3.14... * (EVP_ARM7_ADJUST[(PLTF)])) by an included fixed-point DSP library (set to x100).
Hence the end angle of the arc was specified a factor of 256 too large.
And it was the latter which made the issue so noticeable (evidently the underlying implementation just keeps going round and round and round..).
So issue now understood (and will keep an optimized/benchmarked version).
Thanks for the help!
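For anyone hitting the same trap, one small defensive sketch (names invented) is to avoid relying on M_PI for the end angle altogether:

#include <CoreGraphics/CoreGraphics.h>

// A locally defined constant is immune to headers that play games with M_PI,
// as the fixed-point DSP library above did.
static const CGFloat kTwoPi = (CGFloat)(2.0 * 3.14159265358979323846);

static void AddFullCircle(CGContextRef ctx, CGFloat cx, CGFloat cy, CGFloat radius)
{
    // End angle is exactly one full turn.
    CGContextAddArc(ctx, cx, cy, radius, 0.0f, kTwoPi, 0);
}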