Raphael / CoffeeScript attempt to constrain Y coordinate when dragging

I want to drag an image along a horizontal line. In other words, I want to ignore the Y value of the mouse movement. I also want to limit the range of the X values. Here is a CoffeeScript class that drags an image, but attempts unsuccessfully to constrain the image's Y value. The other problem with this code is that the image's X value seems to be twice what it should be.
class TripleSlider
  circle = ""

  # (@x, @y) is the coordinate of the upper left corner of the component bounding box
  constructor: (@editArea, @x, @y, @tsLabel, @showLimits=false) ->

  dragger: (x, y) =>
    x2 = Math.min(Math.max(x, 0), 127)
    circle.attr({x: x2, y: 0}) # does not do anything; I hoped it would constrain Y

  drawRaphael: () =>
    paper = Raphael(10, 50, 320, 200)
    paper.fixNS()
    paper.draggable.enable()
    circle = paper.image("/assets/images/sliderTipDef.png", 0, 0, 13, 16).draggable.enable()
    circle.drag(@dragger)

$ ->
  tripleSlider = new TripleSlider($('#editArea'), 50, 100, "Attribute", true)
  tripleSlider.draw()
BTW, I applied the patch to raphael.draggable.js by inserting the code below at line 13.
/** Fix from https://github.com/DmitryBaranovskiy/raphael/issues/409
 * Just call it once after constructing paper:
 *
 *   var paper = Raphael(0, 0, 300, 300);
 *   paper.fixNS();
 *   paper.draggable.enable();
 */
Raphael.fn.fixNS = function() {
  var r = this;
  for (var ns_name in Raphael.fn) {
    var ns = Raphael.fn[ns_name];
    if (typeof ns == 'object') for (var fn in ns) {
      var f = ns[fn];
      ns[fn] = function() { return f.apply(r, arguments); };
    }
  }
};
Mike

Related

Optimize "range-join" in plain Scala (not Spark!)

I have two ordered sequences: one (large) is a range of positions, the other (small) is a sequence of attributes defined on position_from/position_to ranges, which I'd like to join.
Currently, for each element of positions, I traverse the other sequence, which is not optimal:
def interpolateCurveOnPos(position: Seq[Double], curveAttributes: Seq[CurveSegment]) = {
  position.map { pos =>
    // range join
    val cs = curveAttributes.find(c => pos >= c.position_von && pos < c.position_bis).get
    // interpolate curve attribute
    cs.curve_von + (pos - cs.position_von) * (cs.curve_bis - cs.curve_von) / (cs.position_bis - cs.position_von)
  }
}
What I've tried:
As the index at which the matching CurveSegment is found will always increase, I introduced some state variables to reduce the search for the correct entry:
def interpolateCurveOnPos(position: Seq[Double], curveAttributes: Seq[CurveSegment]) = {
  var idxSave = 0
  var csSave: CurveSegment = curveAttributes.head
  position.map { pos =>
    // range join
    val cs = curveAttributes.drop(idxSave).find(c => pos >= c.position_von && pos < c.position_bis).get
    if (cs != csSave) {
      csSave = cs
      idxSave = idxSave + 1
    }
    // interpolate
    cs.curve_von + (pos - cs.position_von) * (cs.curve_bis - cs.curve_von) / (cs.position_bis - cs.position_von)
  }
}
I wonder if there is a more elegant way to do it?
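Since both sequences are sorted, the scan-per-position can be replaced by a single forward-moving cursor over the segments, giving O(n + m) overall instead of O(n * m). Below is a minimal language-agnostic sketch of that two-pointer merge, written in Python with field names mirroring the Scala CurveSegment (an assumption; translating it back to Scala with an Iterator or a foldLeft is straightforward):

def interpolate_curve_on_pos(positions, segments):
    # positions: ascending sequence of doubles
    # segments: ascending, non-overlapping ranges with fields
    #   position_von / position_bis and curve_von / curve_bis
    result = []
    i = 0  # cursor into segments; only ever moves forward
    for pos in positions:
        # advance the cursor until the segment containing pos is reached
        while i < len(segments) and pos >= segments[i].position_bis:
            i += 1
        s = segments[i]
        t = (pos - s.position_von) / (s.position_bis - s.position_von)
        result.append(s.curve_von + t * (s.curve_bis - s.curve_von))
    return result

Because the cursor never moves backwards, each segment is visited at most once over the whole loop, which is where the linear bound comes from.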

How can I fill a closed area with game objects using lists and arrays?

Here is my code
if ((transform.position.x == 0 || transform.position.x == 100)
    || (transform.position.z == 0 || transform.position.z == 100))
{
    float posX = transform.position.x;
    float posZ = transform.position.z;
    for (float x = posX; x >= 0; x -= 10)
    {
        for (float z = posZ; z >= 0; z -= 10)
        {
            Vector3 foundEmptyPosition = new Vector3(x * 10, 10, z * 10);
            if (!IsExists(foundEmptyPosition))
            {
                PutACube(foundEmptyPosition);
            }
        }
    }
}
Which algorithm do I use for filling?
What you are asking for is essentially the whole game logic, so nobody can hand it to you ready-made.
In general, you need a 2-dimensional array that represents the state of your game field.
Then you group the non-painted indexes and simultaneously determine whether each group is surrounded or not. This looks easy: while gathering a group, check the neighboring cubes. For a group to be auto-painted, every cube bounding it has to be painted; if you find that the group touches the border (or whatever else disqualifies it), you know the whole group is not one you are looking for.
Finally, paint the groups marked as suitable for painting.
Here is my first thought; it has not been tested and may contain some bugs.
Definitions:
"cell" means the ground that you put a cube on
"empty cell" means there is no cube on it
"occupied cell" means there is a cube on it
"captured cell" means an empty cell that is surrounded by occupied cells and should be filled
When you add a new cube on a cell (empty cell -> occupied cell), you only need to check the neighbor cells (4-way) of this newly occupied cell.
That means when adding a new cube on cell (x, z) = (2, 2), you start by checking the neighbor cells (x, z) = (1, 2), (3, 2), (2, 1), (2, 3).
I call these neighbor cells GROUP A.
Recursively check the neighbor cells of GROUP A; if every check finds only occupied cells around, then all of the visited empty cells should be "captured".
Code Example
void FillCapturedCellAfterAddingCubeOnCell(Cell cell)
{
    // the cell's hash code should be '(x,z)' or something similar
    HashSet<Cell> checkedCells = new HashSet<Cell>();
    checkedCells.Add(cell);
    HashSet<Cell> connectedCells = new HashSet<Cell>();

    List<Cell> neighbors = cell.GetNeighbors();
    foreach (var neighborCell in neighbors)
    {
        if (checkedCells.Contains(neighborCell))
        {
            continue;
        }
        if (neighborCell.IsFilled)
        {
            continue;
        }
        bool captured = RCheckCellCaptured(neighborCell, ref checkedCells, ref connectedCells);
        if (captured)
        {
            // the connected cells are empty and should be captured:
            // fill all the connected cells here
            foreach (var capturedCell in connectedCells)
            {
            }
        }
        checkedCells.Clear();
        connectedCells.Clear();
    }
}
bool RCheckCellCaptured(Cell cell, ref HashSet<Cell> checkedCells, ref HashSet<Cell> connectedCells)
{
    checkedCells.Add(cell);
    if (!cell.IsFilled)
    {
        connectedCells.Add(cell);
    }
    List<Cell> neighbors = cell.GetNeighbors();
    foreach (var neighborCell in neighbors)
    {
        if (checkedCells.Contains(neighborCell))
        {
            continue;
        }
        // your game logic: if the cell is in some state,
        // then that means all connected neighbors should not be captured
        if (neighborCell.IsWall || neighborCell.EnemyOnIt)
        {
            return false;
        }
        if (!neighborCell.IsFilled)
        {
            bool captured = RCheckCellCaptured(neighborCell, ref checkedCells, ref connectedCells);
            if (!captured)
            {
                return false;
            }
        }
    }
    return true;
}
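The same capture check can also be written iteratively on a plain 2-D grid, which avoids deep recursion on large boards. Here is a sketch in Python (not tied to the Cell class above; the grid representation and board bounds are assumptions for illustration):

def captured_group(grid, start):
    # grid[x][z] is True for an occupied cell, False for an empty one.
    # Returns the set of empty cells connected to `start`, or None if the
    # group reaches past the board edge (i.e. it is not captured).
    w, h = len(grid), len(grid[0])
    seen, group, stack = set(), set(), [start]
    while stack:
        x, z = stack.pop()
        if (x, z) in seen:
            continue
        seen.add((x, z))
        if not (0 <= x < w and 0 <= z < h):
            return None  # leaked off the board: the group is open
        if grid[x][z]:
            continue  # occupied cell bounds the group
        group.add((x, z))
        stack.extend([(x + 1, z), (x - 1, z), (x, z + 1), (x, z - 1)])
    return group

Calling this for each empty neighbor of a newly placed cube and filling the returned cells reproduces the recursive version above.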

PostgreSQL ST_AsMVT to VectorTiles to Leaflet Layer

I'm trying to create vector tiles from a PostgreSQL database and serve them via flask to a Leaflet map. I've followed this medium.com article which got me nearly all the way.
However, when I open the page with the Leaflet map on it, I get the following in the browser console:
index.js:191 Uncaught Error: Unimplemented type: 4
at Pbf.skip (index.js:191)
at Pbf.readFields (index.js:41)
at new VectorTile$1 (vectortile.js:8)
at FileReader.<anonymous> (Leaflet.VectorGrid.Protobuf.js:124)
To create the tiles I use the code below:
def tile_ul(x, y, z):
    n = 2.0 ** z
    lon_deg = x / n * 360.0 - 180.0
    lat_rad = math.atan(math.sinh(math.pi * (1 - 2 * y / n)))
    lat_deg = math.degrees(lat_rad)
    return lon_deg, lat_deg

def get_tile(z, x, y):
    xmin, ymin = tile_ul(x, y, z)
    xmax, ymax = tile_ul(x + 1, y + 1, z)
    tile = None
    query = """SELECT ST_AsMVT(tile) FROM (SELECT id, ST_AsMVTGeom(geom, ST_Makebox2d(ST_transform(ST_SetSrid(ST_MakePoint(%s,%s),4326),3857),ST_transform(ST_SetSrid(ST_MakePoint(%s,%s),4326),3857)), 4096, 0, false) AS geom FROM "TimeZone (LineGridExp)") AS tile"""
    cursor = db.connection.cursor()
    cursor.execute(query, (xmin, ymin, xmax, ymax))
    tile = str(cursor.fetchone()[0])
    cursor.close()
    return tile

@app.route('/tiles')
@app.route('/tiles/<int:z>/<int:x>/<int:y>', methods=['GET'])
def tiles(z=0, x=0, y=0):
    tile = get_tile(z, x, y)
    response = make_response(tile)
    response.headers['Content-Type'] = "application/octet-stream"
    return response
To add the tiles to Leaflet I use:
var url = "http://localhost:5000/tiles/{z}/{x}/{y}"
var mapillaryLayer = L.vectorGrid.protobuf(url).addTo(mymap);
The Python end receives the GET requests from the client and doesn't throw any errors. However, I'm not sure about the SQL query, about detecting empty tiles, or whether the query is simply wrong.
Any help would be greatly appreciated.
Tom
It was all a matter of the query not returning data, and of returning the tile as bytes rather than str:
def get_tile(z, x, y):
    xmin, ymin = tile_ul(x, y, z)
    xmax, ymax = tile_ul(x + 1, y + 1, z)
    query = """SELECT ST_AsMVT(tile) FROM (
                 SELECT id, ST_AsMVTGeom(geom, ST_MakeEnvelope(%s, %s, %s, %s, 4326), 4096, 256, false) geom
                 FROM reproject) as tile"""
    cursor = db.connection.cursor()
    cursor.execute(query, (xmin, ymin, xmax, ymax))
    tile = bytes(cursor.fetchone()[0])
    cursor.close()
    return tile

How to optimize this algorithm that finds all maximal matchings in a graph?

In my app people give grades to each other, out of ten points. Each day, an algorithm computes a match for as many people as possible (it's impossible to compute a match for everyone). It builds a graph where the vertices are users and the edges are the grades.
I simplify the problem by saying that if two people give a grade to each other, there is an edge between them whose weight is the average of their respective grades. But if A gives a grade to B and B doesn't grade A back, there is no edge between them and they can never match; this way, the graph is undirected.
I would like everybody to be happy on average, but at the same time I would like as few people as possible to have no match.
Being very deterministic, I made an algorithm that finds ALL maximal matchings in a graph. I did that because I thought I could analyse all these maximal matchings and apply a value function that could look like:
V(M) = exp(|M| / max|M|) * sum(weight of all edges in M)
That is to say, a matching is valued highly if its cardinality is close to the cardinality of the maximum matching, and if the sum of the grades between the matched people is high. I put an exponential on the ratio |M| / max|M| because I consider it a big problem if that ratio drops below 0.8 (the exponential will be tuned so that V decreases sharply as |M| / max|M| approaches 0.8).
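For concreteness, here is that value function as a small Python sketch, a direct transcription of the formula above (the representations are assumptions: a matching is a list of (u, v, w) edges and max_size is the cardinality of a maximum matching):

import math

def matching_value(matching, max_size):
    # V(M) = exp(|M| / max|M|) * sum of edge weights in M
    size_ratio = len(matching) / max_size
    total_weight = sum(w for (_u, _v, w) in matching)
    return math.exp(size_ratio) * total_weight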
I would then take the matching where V(M) is maximal. The big problem, though, is that my function that computes all maximal matchings takes a lot of time. For only 15 vertices and 20 edges, it takes almost 10 minutes...
Here is the algorithm (in Swift):
import Foundation

struct Edge: CustomStringConvertible {
    var description: String {
        return "e(\(v1), \(v2))"
    }
    let v1: Int
    let v2: Int
    let w: Int?

    init(_ arrint: [Int]) {
        v1 = arrint[0]
        v2 = arrint[1]
        w = nil
    }

    init(_ v1: Int, _ v2: Int) {
        self.v1 = v1
        self.v2 = v2
        w = nil
    }

    init(_ v1: Int, _ v2: Int, _ w: Int) {
        self.v1 = v1
        self.v2 = v2
        self.w = w
    }
}
let mygraph: [Edge] =
[
    Edge([1, 2]),
    Edge([1, 5]),
    Edge([2, 5]),
    Edge([2, 3]),
    Edge([3, 4]),
    Edge([3, 6]),
    Edge([5, 6]),
    Edge([2, 6]),
    Edge([4, 1]),
    Edge([3, 5]),
    Edge([4, 2]),
    Edge([7, 1]),
    Edge([7, 2]),
    Edge([8, 1]),
    Edge([9, 8]),
    Edge([11, 2]),
    Edge([11, 8]),
    Edge([12, 13]),
    Edge([1, 6]),
    Edge([4, 7]),
    Edge([5, 7]),
    Edge([3, 5]),
    Edge([9, 1]),
    Edge([10, 11]),
    Edge([10, 4]),
    Edge([10, 2]),
    Edge([10, 1]),
    Edge([10, 12]),
]
// remove all the edges and vertices "touching" the edges and vertices in "edgePath"
func reduce(graph: [Edge], edgePath: [Edge]) -> [Edge] {
    var alreadyUsedV: [Int] = []
    for edge in edgePath {
        alreadyUsedV.append(edge.v1)
        alreadyUsedV.append(edge.v2)
    }
    return graph.filter({ edge in
        return alreadyUsedV.first(where: { edge.v1 == $0 }) == nil && alreadyUsedV.first(where: { edge.v2 == $0 }) == nil
    })
}
func findAllMaximalMatching(graph Gi: [Edge]) -> [[Edge]] {
    var matchings: [[Edge]] = []
    var G = Gi          // current graph (reduced at each depth)
    var M: [Edge] = []  // current matching being built
    var Cx: [Int] = []  // current path in the possibilities tree
                        // e.g. Cx[1] = 3 : at depth 1, we are at the 3rd edge
    var d: Int = 0      // current depth
    var debug_it = 0

    while true {
        if G.count == 0 { // if there is no available edge in the graph, we have a matching
            if M.count > 0 { // safety: if the initial graph is empty we cannot return an empty matching
                matchings.append(M)
            }
            if d == 0 {
                // depth = 0, we cannot decrement d, we have finished all the tree possibilities
                break
            }
            d = d - 1
            _ = M.popLast()
            G = reduce(graph: Gi, edgePath: M)
        } else {
            let indexForThisDepth = Cx.count > d ? Cx[d] + 1 : 0
            if G.count < indexForThisDepth + 1 {
                // depth ended
                _ = Cx.popLast()
                if d == 0 {
                    break
                }
                d = d - 1
                _ = M.popLast()
                // reduce from the initial graph down to the decremented depth
                G = reduce(graph: Gi, edgePath: M)
            } else {
                // matching not finished being built
                M.append(G[indexForThisDepth])
                if indexForThisDepth == 0 {
                    Cx.append(indexForThisDepth)
                } else {
                    Cx[d] = indexForThisDepth
                }
                d = d + 1
                G = reduce(graph: G, edgePath: M)
            }
        }
        debug_it += 1
    }
    print("matching counts: \(matchings.count)")
    print("iterations: \(debug_it)")
    return matchings
}
let m = findAllMaximalMatching(graph: mygraph)
// we have computed all the maximal matchings; now we loop through all of them
// to find the one where V(Mi) is maximal
// ....
Finally, my question is: how can I optimize this algorithm, which finds all maximal matchings and computes my value function on them, so that it finds the best matching for my app in polynomial time?
I may be missing something since the question is quite complicated, but why not simply use a maximum flow formulation, with every vertex appearing twice and the edge weights being the average grades where they exist? It will return the maximal flow if configured correctly, and it runs in polynomial time.
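In the same spirit of replacing the exhaustive enumeration with a polynomial-time algorithm, a maximum-weight matching can also be computed directly. Here is a sketch using NetworkX's max_weight_matching (the toy weights below are made up for illustration):

import networkx as nx

# vertices are users, edge weights are the averages of mutual grades
G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 7.5), (1, 5, 4.0), (2, 5, 9.0),
    (2, 3, 6.5), (3, 4, 8.0), (5, 6, 5.5),
])

# maxcardinality=True first maximizes the number of matched pairs, then
# the total weight among those matchings -- matching the goal of leaving
# as few people as possible without a match
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)  # a set of matched pairs, e.g. {(1, 2), (3, 4), (5, 6)}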

Failing to draw on a Gtk.DrawingArea

I'm not able to draw; I've already read tutorials and I can't find the problem.
I simply have a correct UI drawn by Glade. Then I want to draw, for example, 50 drawing areas. So I create a Grid with 50 cells; in each cell there is a vertical box (with a drawing area and a label inside). But I can't see anything drawn.
class collega_GUI:
    def __init__(self):
        try:
            self.__builder = Gtk.Builder()
            self.__builder.add_from_file('UI2.glade')
            self.__Grid = Gtk.Grid()
            self.__Grid.set_margin_left(20)
            self.__Grid.set_margin_right(20)
            self.__Grid.set_row_spacing(10)
            self.__Grid.set_column_spacing(15)
            self.__Grid.set_column_homogeneous(True)
            self.__GridBox = self.__builder.get_object('box11')
            self.__GridBox.pack_end(self.__Grid, 1, 1, 20)
            indirizzi_ip = []
            for i in range(50):
                indirizzi_ip.append(str(i))
            cpu_info = {}
            for ip in indirizzi_ip:
                cpu_info[ip] = dict()
            left = 0
            right = 0
            for ip in indirizzi_ip:
                cpu_info[ip]['drawing_area'] = Gtk.DrawingArea()
                cpu_info[ip]['drawing_area'].set_size_request(100, 100)
                cpu_info[ip]['drawing_area'].set_name(ip)
                box = Gtk.VBox(False, 5)
                box.add(cpu_info[ip]['drawing_area'])
                label = Gtk.Label(ip)
                box.add(label)
                self.__Grid.attach(box, left, right, 1, 1)  # child, left, top, width, height
                cpu_info[ip]['drawing_area'].connect("draw", self.__draw)
                label.show()
                cpu_info[ip]['drawing_area'].show()  # the draw should start now!
                box.show()
                # 5 drawing areas in a row
                left += 1
                if left == 5:
                    right += 1
                    left = 0
            self.__builder.get_object('Azioni_Window').show()
            Gtk.main()
        except Exception as xe:
            logging.error('%s' % str(xe))
            sys.exit()

    def __draw(self, widget, context):
        context.set_source_rgb(0.9, 0, 0.1)    # rosso (red)
        context.set_source_rgb(0.1, 0.9, 0)    # verde (green)
        context.set_source_rgb(0.8, 0.7, 0)    # giallo (yellow)
        context.set_source_rgb(0.8, 0.7, 0.8)  # inattivo (inactive) -- only this last call takes effect
        context.rectangle(0, 0, widget.get_allocated_width(), widget.get_allocated_height())
        context.fill()

if __name__ == '__main__':
    try:
        UINX = collega_GUI()
    except Exception:
        sys.exit()
You're missing
self.__Grid.show()
And hence nothing in the grid is shown.
In general it's easier to just call show_all() on some top-level container rather than trying to remember to show() every individual widget.
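For example, the per-widget show() calls in the loop could be dropped entirely and the whole tree shown in one go (a sketch against the asker's code; 'Azioni_Window' is the top-level object from the Glade file):

# after building the grid and packing all the boxes into it:
window = self.__builder.get_object('Azioni_Window')
window.show_all()  # recursively shows the window, the grid, and every child
Gtk.main()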