Game of Life cells as subpixels on an HTML canvas - leaflet

My Game of Life implementation displays cells on a canvas, using Leaflet to allow zooming and panning. At the highest zoom level, one cell corresponds to one pixel. But after zooming out once, 4 cells with coordinates (x,y); (x+1,y); (x,y+1); (x+1,y+1) correspond to the same pixel. If one of these cells is born,
ctx.fillRect(x, y, 1, 1);
makes the pixel black; if the cell later dies,
ctx.clearRect(x, y, 1, 1);
makes the pixel grey, not white again. This is also demonstrated by the following HTML code:
<!DOCTYPE html>
<html>
<head>
  <script type="text/javascript">
    function draw() {
      var ctx = document.querySelector("canvas").getContext("2d");
      ctx.setTransform(0.5, 0, 0, 0.5, 0, 0);
      ctx.fillRect(0, 0, 1, 1);
      ctx.clearRect(0, 0, 1, 1);
    }
  </script>
</head>
<body onload="draw()">
  <canvas width="1" height="1"></canvas>
</body>
</html>
Filling a rectangle that covers less than one pixel and then clearing it again leaves the pixel grey.
In order to issue clearRect commands that cover a whole pixel, I would have to consider the status of all 2^(2n) cells at zoom level n when one of them dies. Are there better alternatives? My data structure is such that I look at only three x coordinates in parallel at any time.
(My code currently contains a clearPixel option that clears the pixel as soon as any of the 2^(2n) cells dies, because this looks somewhat better to me than the alternative.)
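For what it's worth, one way to do the bookkeeping described above without re-scanning all 2^(2n) cells on every death is to keep a per-pixel count of live cells and only clear a pixel when its count drops to zero. A rough sketch (the Map-based counter and all names are hypothetical, and it assumes drawing in device-pixel coordinates rather than through the scaled transform):

var liveCount = new Map();              // "px,py" -> number of live cells in that device pixel

function pixelKey(x, y, n) {
  return (x >> n) + ',' + (y >> n);     // at zoom-out level n, 2^n x 2^n cells share one pixel
}

function cellBorn(ctx, x, y, n) {
  var key = pixelKey(x, y, n);
  var count = (liveCount.get(key) || 0) + 1;
  liveCount.set(key, count);
  if (count === 1) {                    // first live cell in this pixel: paint the whole pixel
    ctx.fillRect(x >> n, y >> n, 1, 1);
  }
}

function cellDied(ctx, x, y, n) {
  var key = pixelKey(x, y, n);
  var count = (liveCount.get(key) || 0) - 1;
  if (count <= 0) {                     // last live cell gone: the whole pixel can be cleared
    liveCount.delete(key);
    ctx.clearRect(x >> n, y >> n, 1, 1);
  } else {
    liveCount.set(key, count);
  }
}

The counter costs O(1) work per birth or death instead of scanning the 2^n × 2^n block, at the price of extra memory proportional to the number of partially occupied pixels.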

Related

Leaflet.js (or other solution) zoom to magnified pixels without blur

I've been using Leaflet to display raster images lately.
What I would like to do for a particular project is be able to zoom in to an image so the pixels become magnified on the screen in a sharply delineated way, such as you would get when zooming in to an image in Photoshop or the like. I would also like to retain, at some zoom level before maximum, a 1:1 correspondence between image pixel and screen pixel.
I tried going beyond maxNativeZoom as described here and here, which works but the interpolation results in pixel blurring.
I thought of an alternative which is to make the source image much larger using 'nearest neighbour' interpolation to expand each pixel into a larger square: when zoomed to maxNativeZoom the squares then look like sharply magnified pixels even though they aren't.
Problems with this are:
image size and tile count get out of hand quickly (original image is 4096 x 4096)
you never get the 'pop' of a 1:1 correspondence between image pixel and screen pixel
I have thought about using two tile sets: the first from the original image up to its maxNativeZoom, and then the larger 'nearest neighbour' interpolated image past that, following something like this.
But, this is more complex, doesn't avoid the problem of large tile count, and just seems inelegant.
So:
Can Leaflet do what I need it to, and if so, how?
If not, can you point me in the right direction to something that can (for example, it would be interesting to know how this is achieved)?
Many thanks
One approach is to leverage the image-rendering CSS property. This can hint the browser to use nearest-neighbour interpolation on <img> elements, such as Leaflet map tiles.
e.g.:
img.leaflet-tile {
  image-rendering: pixelated;
}
See a working demo. Beware of incomplete browser support.
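If you'd rather not restyle every Leaflet tile on the page, the same rule can be scoped to a single layer via the tile layer's className option (Leaflet 1.x). A minimal sketch, with a hypothetical URL template and class name:

// Sketch: tag this layer's container with a custom class via the `className`
// option, then target only its tiles with the image-rendering rule.
var pixelatedLayer = L.tileLayer('tiles/{z}/{x}/{y}.png', {   // hypothetical tile URL template
  className: 'pixelated-tiles'                                // hypothetical class name
}).addTo(map);

The CSS rule then becomes .pixelated-tiles img.leaflet-tile { image-rendering: pixelated; }, leaving other tile layers untouched.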
A more complicated approach (but one that works across more browsers) is to leverage WebGL; in particular Leaflet.TileLayer.GL.
This involves some internal changes to Leaflet.TileLayer.GL to support a per-tile uniform, most critically setting the uniform value to the tile coordinate in each tile render...
gl.uniform3f(this._uTileCoordsPosition, coords.x, coords.y, coords.z);
...having a L.TileLayer that "displays" a non-overzoomed tile for overzoomed tile coordinates (instead of just skipping the non-existent tiles)...
var hackishTilelayer = new L.TileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  'attribution': 'Map data © OpenStreetMap contributors',
  maxNonPixelatedZoom: 3
});

hackishTilelayer.getTileUrl = function(coords) {
  if (coords.z > this.options.maxNonPixelatedZoom) {
    return this.getTileUrl({
      x: Math.floor(coords.x / 2),
      y: Math.floor(coords.y / 2),
      z: coords.z - 1
    });
  }
  // Skip L.TileLayer.prototype.getTileUrl.call(this, coords), instead
  // apply the URL template directly to avoid maxNativeZoom shenanigans
  var data = {
    r: L.Browser.retina ? '@2x' : '',
    s: this._getSubdomain(coords),
    x: coords.x,
    y: coords.y,
    z: coords.z // *not* this._getZoomForUrl() !
  };
  var url = L.Util.template(this._url, L.Util.extend(data, this.options));
  return url;
};
... plus a fragment shader that rounds down texel coordinates prior to texel fetches (plus a tile-coordinate-modulo-dependent offset), to actually perform the nearest-neighbour oversampling...
var fragmentShader = `
highp float factor = max(1., pow(2., uTileCoords.z - uPixelatedZoomLevel));
vec2 subtileOffset = mod(uTileCoords.xy, factor);

void main(void) {
  vec2 texelCoord = floor(vTextureCoords.st * uTileSize / factor) / uTileSize;
  texelCoord.xy += subtileOffset / factor;
  vec4 texelColour = texture2D(uTexture0, texelCoord);
  // This would output the image colours "as is"
  gl_FragColor = texelColour;
}
`;
...all tied together in an instance of L.TileLayer.GL (which syncs some numbers for the uniforms around):
var pixelated = L.tileLayer.gl({
  fragmentShader: fragmentShader,
  tileLayers: [hackishTilelayer],
  uniforms: {
    // The shader will need the zoom level as a uniform...
    uPixelatedZoomLevel: hackishTilelayer.options.maxNonPixelatedZoom,
    // ...as well as the tile size in pixels.
    uTileSize: [hackishTilelayer.getTileSize().x, hackishTilelayer.getTileSize().y]
  }
}).addTo(map);
You can see everything working together in this demo.

How do I repeat a sprite horizontally?

I have got the code to repeat the texture in both X and Y, which is:
bg = [CCSprite spriteWithFile:@"ipadbgpattern.png" rect:CGRectMake(0, 0, 3000, 3000)];
bg.position = ccp(500, 500);
ccTexParams params = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
[bg.texture setTexParameters:&params];
[self addChild:bg];
However, I do not know how to change the params in order for the background to repeat along the horizontal axis.
There's no parameter for that. Just make sure the CGRect spans the region where you want the texture to repeat, and the texture itself must be a power of two (e.g. 1024x1024).
I'm guessing that maybe you're using a 1024x768 texture and then you'll see a gap between texture repeats.
This cannot be achieved at the GL level, since GL_REPEAT expects textures with power-of-two dimensions.
Take a look at my TiledSprite class for a rather unoptimized, but functional means of arbitrarily repeating an arbitrarily-sized texture or subtexture:
https://gist.github.com/Nolithius/6694990
Here's a brief look at its results and usage:
http://www.nolithius.com/game-development/cocos2d-iphone-repeating-sprite

CGAffineTransform help, flipping a label

I basically have a pie chart where I have lines coming out of each segment. In the case where the line comes out of the circle to the left, when I draw my text it is reversed: "100%" would look like "%001". (Note: the 1 and the % sign are actually drawn in reverse too, as if mirrored, so the little overhang on top of the 1 points to the right rather than the left.)
I tried reading through Apple's docs for the AffineTransform, but it doesn't make complete sense to me. I tried making this transformation matrix to start:
CGAffineTransform transform1 = CGAffineTransformMake(-1, 0, 0, 1, 0, 0);
This does flip the text horizontally, so the text now looks correct on the left side of the circle. However, the text is now on the line, rather than at the end of the line like it originally was. So I thought I could translate it by moving the text in the x direction by changing the tx value in the matrix. So instead of using the above matrix, I used this:
CGAffineTransform transform1 = CGAffineTransformMake(-1, 0, 0, 1, -strlen(t1AsChar), 0);
However, the text just stays where it's at. What am I doing wrong? Thanks.
strlen() doesn't give you the width of the rendered text; it just gives you the length of the string itself (how many characters it has). If you're using a UITextField, you can use textField.frame.size.width instead.

iPhone OpenGL: Editing the MODELVIEW_MATRIX

I'm working on a spinning 3D cube (glFrustumf setup), and it multiplies the current matrix by the previous one so that the cube continues to spin. See below:
/* save current rotation state */
GLfloat matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
/* re-center cube, apply new rotation */
glLoadIdentity();
glRotatef(self.angle, self.dy,self.dx,0);
glMultMatrixf(matrix);
The problem is I need to step back from this (as if I had a camera).
I tried to edit the matrix and that kind of works but picks up noise. The cube jumps around.
matrix[14] = -5.0;
matrix[13] = 0;
matrix[12] = 0;
Is there a way to edit the current ModelView matrix so that I can set the position of the cube by multiplying it by another matrix?
You should not mistreat OpenGL as a scene graph, nor as a math library. That means: don't read the matrix back and multiply it back in arbitrarily. Instead, rebuild the whole matrix stack anew every time you do a render pass. I should also point out that in OpenGL 4 all the matrix functions have been removed; instead you're expected to supply the matrices as uniforms.
EDIT due to comment by @Burf2000:
Your typical render handler will look something like this (pseudocode):
draw_object():
    # bind VBO or plain VertexArrays (you might even use immediate mode, but that's deprecated)
    # draw the stuff using glDrawArrays or better yet glDrawElements

render_subobject(object, parent_transform):
    modelview = parent_transform * object.transform
    if OPENGL3_CORE:
        glUniformMatrix4fv(object.shader.uniform_location[modelview], 1, 0, modelview)
    else:
        glLoadMatrixf(modelview)
    draw_object(object)
    for subobject in object.subobjects:
        render_subobject(subobject, modelview)

render(deltaT, window, scene):
    if use_physics:
        PhysicsSimulateTimeStep(deltaT, scene.objects)
    else:
        for o in scene.objects:
            o.animate(deltaT)

    glClearColor(...)
    glClearDepth(...)
    glViewport(0, 0, window.width, window.height)
    glDisable(GL_SCISSOR_TEST)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT)

    # ...
    # now _some_ objects' render pass - others may precede or follow,
    # like for creating reflection cubemaps or water refractions.

    glViewport(0, 0, window.width, window.height)
    glEnable(GL_DEPTH_TEST)
    glDepthMask(1)
    glColorMask(1, 1, 1, 1)
    if not OPENGL3_CORE:
        glMatrixMode(GL_PROJECTION)
        glLoadMatrixf(scene.projection.matrix)

    for object in scene.objects:
        bind_shader(object.shader)
        if OPENGL3_CORE:
            glUniformMatrix4fv(scene.projection_uniform, 1, 0, scene.projection.matrix)

    # other render passes

    glViewport(window.HUD.x, window.HUD.y, window.HUD.width, window.HUD.height)
    glStencil(window.HUD.x, window.HUD.y, window.HUD.width, window.HUD.height)
    glEnable(GL_STENCIL_TEST)
    glDisable(GL_DEPTH_TEST)
    if not OPENGL3_CORE:
        glMatrixMode(GL_PROJECTION)
        glLoadMatrixf(scene.HUD.projection.matrix)
    render_HUD(...)
and so on. I hope you get the general idea. OpenGL is neither a scene graph, nor a matrix manipulation library.

How to draw drop shadows in iOS

A CAShapeLayer uses a CGPathRef to draw its content. So I have a star path, and I want a smooth drop shadow with a radius of about 15 units. There is probably some nice functionality for this in newer iPhone OS versions, but I need to do it myself for the old 3.0 version (which most people still use).
I tried to do some REALLY nasty stuff:
I created a for-loop that sequentially created about 15 of those paths, transform-scaling them step by step to become bigger, then assigned each to a newly created CAShapeLayer and decreased its alpha a little bit on every iteration. Not only is this scaling mathematically incorrect (it should happen relative to the outline!), the shadow is not rounded and looks really ugly. That's why nice soft shadows have a radius.
The tips of a star shouldn't appear totally sharp after a shadow size of 15 units. They should be soft like cream. But in my ugly solution they're just as sharp as the star itself, since all I do is scale the star 15 times and decrease its alpha 15 times. Ugly.
I wonder how the big guys do it. If you had an arbitrary path, and that path must cast a shadow, how does the algorithm for that work? Probably the path would have to be expanded about 30 times, point-by-point relative to the tangent of the outline away from the filled part, and just by 0.5 units each time to get a nice blend.
Before I re-invent the wheel, maybe someone has a handy example or link?
A shadow is a translucent grayscale mask of the shape of an object, blurred and offset.
CGContextSetShadowWithColor and CGContextSetShadow are how this is done on the iPhone. You set the shadow, then draw something, and a shadow is also applied.
A CAShapeLayer does not have an easy option to apply a shadow. You will have to create a custom view or layer and set the shadow before drawing your shape.
I have not tried it, but the following might work:
@interface ShadowShapeLayer : CAShapeLayer
@end

@implementation ShadowShapeLayer

-(void) drawInContext:(CGContextRef)context {
    CGContextSaveGState( context );
    CGContextSetShadow( context, CGSizeMake( 5, 5 ), 15 );
    [super drawInContext:context];
    CGContextRestoreGState( context );
}

@end
Edit: Thanks Miser.
I asked myself the same question. I'm not an expert on this topic at all, but I had the following thought: Physically, one point of a drawing should result in a circular (or elliptical), semi-transparent shadow. So an entire drawing, which consists of multiple points, should result in the combination of a lot of such circular shadows.
So I painted a little shadow in Photoshop (brush tool, size 7, opacity 33%, color #3b3b3b). It's hardly visible:
Then I wrote a small HTML page with JavaScript, just to try it out and see what it looks like (definitely not the ideal technique :-):
<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  <title>Title</title>
  <script type="text/javascript" language="javascript">
    function pageload() {
      var drawingContainerElem = document.getElementById("drawing-container");
      var shadowContainerElem = document.getElementById("shadow-container");

      this.drawDot = function(x, y) {
        var imgElem = document.createElement("img");
        imgElem.style.left = x + "px";
        imgElem.style.top = y + "px";
        imgElem.src = "blue-dot.png";
        drawingContainerElem.appendChild(imgElem);
      }

      this.drawShadow = function(x, y) {
        var imgElem = document.createElement("img");
        imgElem.style.left = x + "px";
        imgElem.style.top = y + "px";
        imgElem.src = "soft-shadow.png";
        shadowContainerElem.appendChild(imgElem);
      }

      this.drawDotAndShadow = function(x, y) {
        drawShadow(x - 5, y - 1);
        drawDot(x, y);
      }

      for (var x = 50; x < 70; x++) {
        for (var y = 50; y < 58; y++) {
          drawDotAndShadow(x, y);
        }
      }
      for (var x = 0; x < 15; x++) {
        for (var y = 0; y < x; y++) {
          drawDotAndShadow(69 + 15 - x, 54 + y);
          drawDotAndShadow(69 + 15 - x, 54 - y);
        }
      }
    }
  </script>
  <style type="text/css">
    #drawing-container {
      position: absolute;
      left: 2em;
      top: 2em;
      width: 400px;
      height: 400px;
      z-index: 2;
    }
    #shadow-container {
      position: absolute;
      left: 2em;
      top: 2em;
      width: 400px;
      height: 400px;
      z-index: 1;
    }
    #drawing-container img {
      position: absolute;
    }
    #shadow-container img {
      position: absolute;
    }
  </style>
</head>
<body onload="javascript:pageload()">
  <div id="drawing-container"></div>
  <div id="shadow-container"></div>
</body>
</html>
This is the result:
There's probably a lot of room for optimization, and of course you wouldn't really render it using JavaScript in this way... Maybe you can find a way to render this efficiently on the iPhone? If so, let me know!
Possible improvements:
Make the center of the shadow circle darker (more opacity), and the rest lighter (less opacity) to achieve a core shadow.
Scale the shadow: Make it a bit smaller, to achieve a depth effect.
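Incidentally, the blur-and-offset model described above is built into the HTML canvas 2D API (the browser counterpart of CGContextSetShadow mentioned in the first answer), so in a browser you don't have to stamp individual shadow images. A minimal sketch, assuming a <canvas id="c"> element; the path coordinates are an arbitrary star-like shape chosen for illustration:

// Sketch: the canvas 2D shadow properties apply a blurred, offset shadow to
// whatever is drawn next, much like CGContextSetShadow in Core Graphics.
var ctx = document.getElementById("c").getContext("2d");  // hypothetical canvas id
ctx.shadowColor = "rgba(59, 59, 59, 0.5)";                // translucent grey
ctx.shadowBlur = 15;                                      // soft ~15-unit radius
ctx.shadowOffsetX = 5;
ctx.shadowOffsetY = 5;

ctx.beginPath();                                          // any path works; a rough star here
ctx.moveTo(100, 20);
ctx.lineTo(120, 80);
ctx.lineTo(180, 80);
ctx.lineTo(130, 115);
ctx.lineTo(150, 175);
ctx.lineTo(100, 140);
ctx.lineTo(50, 175);
ctx.lineTo(70, 115);
ctx.lineTo(20, 80);
ctx.lineTo(80, 80);
ctx.closePath();
ctx.fillStyle = "blue";
ctx.fill();                                               // the fill is drawn with the shadow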