I am new to Perl and I am trying to get the following result in a loop:
# ResultFirstStep
$VAR1 = [
          [
            'Hello1'
          ],
          [
            'Hello2'
          ],
          [
            'Hello3'
          ]
        ];
But if I use a reference to the inner array, \@InnerArray:
# Example1
my @OuterArray;
my @InnerArray;
foreach (1, 2, 3)
{
    @InnerArray[0] = "Hello" . $_;
    push(@OuterArray, \@InnerArray);
}
print Dumper \@OuterArray;
... I get this result:
$VAR1 = [
          [
            'Hello3'
          ],
          $VAR1->[0],
          $VAR1->[0]
        ];
If I try it without the reference:
# Example2
my @OuterArray;
my @InnerArray;
foreach (1, 2, 3)
{
    @InnerArray[0] = "Hello" . $_;
    push(@OuterArray, @InnerArray);
}
print Dumper \@OuterArray;
... I get this result:
$VAR1 = [
          'Hello1',
          'Hello2',
          'Hello3'
        ];
But what I want is the result shown at the beginning (ResultFirstStep), and eventually the following result (ResultFinally):
# ResultFinally
$VAR1 = [
          [
            'Hello1',
            [
              [],
              []
            ]
          ],
          [
            'Hello2',
            [
              [],
              []
            ]
          ],
          [
            'Hello3',
            [
              [],
              []
            ]
          ]
        ];
So the questions are:
How do I get the ResultFirstStep output?
Can I produce the ResultFinally structure with Perl?
Can someone please help me? I don't see the mistake.
Just use anonymous arrays:
my @outer;
push @outer, [ "Hello$_", [ [], [] ] ] for 1 .. 3;
or even
my @outer = map [ "Hello$_", [ [], [] ] ], 1 .. 3;
If you want to use the inner array, declare it inside the loop, otherwise you're reusing the same array again and again:
my @outer;
for (1 .. 3) {
    my @inner = ( "Hello$_", [ [], [] ] );
    push @outer, \@inner;
}
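If you only need the ResultFirstStep structure, the same principle applies. Here is a minimal complete sketch, using the variable names from the question, that you can run to verify:

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my @OuterArray;
foreach (1, 2, 3) {
    my @InnerArray = ("Hello" . $_);   # a fresh array each iteration
    push(@OuterArray, \@InnerArray);   # so each reference points to a distinct array
}
print Dumper \@OuterArray;             # prints the ResultFirstStep structure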
I want a schema with an array containing arrays, so I have the following schema defined:
runGroupEntries: [
  [
    {
      type: mongoose.Types.ObjectId,
      ref: 'User',
      required: true
    }
  ]
]
My intention is to have
runGroupEntries: [['userId1', 'userId2', 'userId3'], ['userId4', 'userId5', 'userId6'], ...]
I initialized the schema using:
for (let i = 0; i < numGroups; ++i) {
  event.runGroupEntries.push(undefined);
}
In MongoDB Atlas, it shows:
[screenshot: initialization]
It looks fine to me.
The way I insert an element is:
event.runGroupEntries[runGroup].push(userId);
In this example, runGroup is 0. I was expecting to see
runGroupEntries: [ [ null, "userId" ], [ null ], [ null ], [ null ], [ null ] ]
but the actual result is:
runGroupEntries: [ [ null, [Array] ], [ null ], [ null ], [ null ], [ null ] ],
[screenshot: 1st push result]
Then I tried to push another userId to event.runGroupEntries[0]. Interestingly, the previously pushed element now shows as a userId, but the newly pushed element still shows as an array.
runGroupEntries: [
[ null, 5f5c1d95e4f678ce190d5624, [Array] ],
[screenshot: 2nd push result]
I am really clueless as to why the pushed element became an array. Any help would be appreciated!
It is indeed a bug in Mongoose. It's fixed now: https://github.com/Automattic/mongoose/issues/9429
I have ~400K documents in a Mongo collection, all with geometry of type: Polygon. It is not possible to add a 2dsphere index to the data as it currently stands, because the geometry apparently has self-intersections.
In the past we had a hacky workaround which was to compute the bounding box of the geometry on a mongoose save hook and then index that rather than the geometry itself, but we would like to simplify things and just use the actual geometry.
So far I have tried using turf as follows (this is the body of a function called fix):
let geom = turf.polygon(geometry.coordinates);
geom = turf.simplify(geom, { tolerance: 1e-7 });
geom = turf.cleanCoords(geom);
geom = turf.unkinkPolygon(geom);
geom = turf.combine(geom);
return geom.features[0].geometry;
The most important function there is unkinkPolygon, which I hoped would do exactly what I wanted, i.e. make the geometry nice enough to be indexed. The simplify is possibly not helpful, but I added it in for good measure. The clean is there because unkink complained about its input, and the combine is there to turn an array of Polygons into a single MultiPolygon. Actually, unkink still wasn't happy with its inputs, so I had to write a hacky function that jitters duplicated vertices; it modifies the geom before it is passed to unkink:
function jitterDups(geom) {
    let coords = geom.geometry.coordinates;
    let points = new Set();
    for (let ii = 0; ii < coords.length; ii++) {
        // last coord is allowed to match the first; not sure if it must match.
        let endsMatch = coords[ii][0].join(",") === coords[ii][coords[ii].length - 1].join(",");
        for (let jj = 0; jj < coords[ii].length - (endsMatch ? 1 : 0); jj++) {
            let str = coords[ii][jj].join(",");
            while (points.has(str)) {
                coords[ii][jj][0] += 1e-8; // if you make this too small it doesn't do the job
                if (jj === 0 && endsMatch) {
                    coords[ii][coords[ii].length - 1][0] = coords[ii][jj][0];
                }
                str = coords[ii][jj].join(",");
            }
            points.add(str);
        }
    }
}
However, even after all of that, Mongo still complains.
Here is some sample raw Polygon input:
{ type: "Polygon", coordinates: [ [ [ -0.027542009179339, 51.5122867222457 ], [ -0.027535822940572, 51.512281465421 ], [ -0.027535925691804, 51.5122814221859 ], [ -0.027589474043984, 51.5122605515771 ], [ -0.027638484531731, 51.5122996934574 ], [ -0.027682911101528, 51.5123351881505 ], [ -0.027689915350493, 51.5123872384419 ], [ -0.027672409315982, 51.5123868001613 ], [ -0.027667905522642, 51.5123866344944 ], [ -0.027663068941865, 51.5123864992013 ], [ -0.02764931654289, 51.512375566682 ], [ -0.027552504539425, 51.5122983194123 ], [ -0.027542009179339, 51.5122867222457 ] ], [ [ -0.027542009179339, 51.5122867222457 ], [ -0.027557948301911, 51.5122984109658 ], [ -0.027560309178214, 51.5123001412876 ], [ -0.027542009179339, 51.5122867222457 ] ] ] }
And that same data after it has passed through the above fixing pipeline:
{ type: "MultiPolygon", coordinates: [ [ [ [ -0.027560309178214, 51.5123001412876 ], [ -0.02754202882236209, 51.51228674396312 ], [ -0.027542009179339, 51.5122867222457 ], [ -0.027535822940572, 51.512281465421 ], [ -0.027589474043984, 51.5122605515771 ], [ -0.027682911101528, 51.5123351881505 ], [ -0.027689915350493, 51.5123872384419 ], [ -0.027663068941865, 51.5123864992013 ], [ -0.027552504539425, 51.5122983194123 ], [ -0.02754202884162257, 51.51228674398443 ], [ -0.027557948301911, 51.5122984109658 ], [ -0.027560309178214, 51.5123001412876 ] ] ], [ [ [ -0.02754202884162257, 51.51228674398443 ], [ -0.02754202882236209, 51.51228674396312 ], [ -0.027541999179339, 51.5122867222457 ], [ -0.02754202884162257, 51.51228674398443 ] ] ] ] }
And here is the relevant bit of the error that is spat out by the index creation:
Edges 0 and 9 cross.
Edge locations in degrees: [-0.0275603, 51.5123001]-[-0.0275420, 51.5122867] and [-0.0275420, 51.5122867]-[-0.0275579, 51.5122984]
"code" : 16755,
"codeName" : "Location16755"
My question is: is there a bug in turf, or is it just not doing what I need here in terms of keeping Mongo happy? Is there any documentation on exactly what the 2dsphere index needs in terms of "fixing"? And does anyone have suggestions for other tools I might use to fix the data, e.g. mapshaper or PostGIS's ST_MakeValid?
Note that once the existing data is fixed, I also need a solution for fixing new data on the fly (ideally something that works nicely with Node).
Mongo Version: 3.4.14 (or any later 3.x)
The problem here is not that the polygon is intersecting itself, but rather that you have a (tiny) hole in the polygon, composed of 4 points, which shares a point with the exterior. So the hole "touches" the exterior rather than intersecting it, but even that is not allowed.
You can fix such cases using Shapely's buffer with a tiny value; rebuilding the geometry this way eliminates the degenerate hole. E.g.:
import shapely.geometry

shp = shapely.geometry.shape({ "type": "Polygon", "coordinates": [ [ [ -0.027542009179339, 51.5122867222457 ], [ -0.027535822940572, 51.512281465421 ], [ -0.027535925691804, 51.5122814221859 ], [ -0.027589474043984, 51.5122605515771 ], [ -0.027638484531731, 51.5122996934574 ], [ -0.027682911101528, 51.5123351881505 ], [ -0.027689915350493, 51.5123872384419 ], [ -0.027672409315982, 51.5123868001613 ], [ -0.027667905522642, 51.5123866344944 ], [ -0.027663068941865, 51.5123864992013 ], [ -0.02764931654289, 51.512375566682 ], [ -0.027552504539425, 51.5122983194123 ], [ -0.027542009179339, 51.5122867222457 ] ], [ [ -0.027542009179339, 51.5122867222457 ], [ -0.027557948301911, 51.5122984109658 ], [ -0.027560309178214, 51.5123001412876 ], [ -0.027542009179339, 51.5122867222457 ] ] ] })
shp = shp.buffer(1e-12, resolution=0)
geojson = shapely.geometry.mapping(shp)
I am currently working with a .txt file containing data points of certain files.
Since the files are pretty big, they are processed in smaller parts, but the output extracted from the processing is not sorted in any order.
They are stored as such:
1_1_0_1_0_1_1_0_232 [
0 -19.72058 -18.89882 ]
1_0_0_0_0_0_0_0_0 [
-0.5940279 -1.949468 -1.185638 ]
1_0_1_1_0_1_1_1_100 [
-5.645662 -0.005585805 -6.196068 ]
1_0_1_1_0_1_1_1_101 [
-15.86037 -1.192093e-07 -18.77053 ]
1_0_1_1_0_1_1_1_102 [
-0.5648238 -1.970869 -1.230303 ]
1_0_1_1_1_0_1_0_103 [
-0.5750521 -1.946886 -1.222114 ]
1_0_1_1_1_0_1_0_104 [
-0.5926428 -1.941596 -1.191844 ]
1_0_1_1_1_0_1_0_105 [
-25.25665 0 -31.0921 ]
1_0_1_1_1_0_1_0_106 [
-0.001282441 -6.852591 -8.399776 ]
1_0_1_1_1_0_1_0_107 [
-0.0001649993 -8.857877 -10.69688 ]
1_0_1_1_1_0_1_0_108 [
-21.66693 0 -26.18516 ]
1_0_1_1_1_0_1_0_109 [
-5.444038 -0.004555213 -8.408965 ]
1_1_0_1_0_1_0_0_200 [
-4.023561 -0.01851013 -7.704897 ]
1_1_0_1_0_1_0_0_201 [
-0.443548 -3.057277 -1.167226 ]
1_1_0_1_0_1_0_0_202 [
-0.0001185011 -9.042104 -15.60585 ]
1_1_0_1_0_1_0_0_203 [
-5.960466e-07 -14.37778 -25.2224 ]
1_1_0_1_0_1_0_0_204 [
-0.5770675 -1.951139 -1.21623 ]
1_1_0_0_1_0_1_1_205 [
-0.5849463 -1.938798 -1.207353 ]
1_1_0_0_1_0_1_1_206 [
-0.5785673 -1.949474 -1.214192 ]
1_1_0_0_1_0_1_1_207 [
-27.21529 0 -32.21676 ]
1_1_0_0_1_0_1_1_208 [
-8.75938 -0.0001605878 -12.53627 ]
1_1_0_0_1_0_1_1_209 [
-1.281936 -0.3837854 -3.188763 ]
1_0_0_0_0_0_0_1_20 [
-0.2104172 -4.638866 -1.714325 ]
1_1_1_0_0_1_1_1_310 [
-11.71479 -9.298368e-06 -13.70222 ]
1_1_1_0_0_1_1_1_311 [
-24.71166 0 -30.45412 ]
1_1_1_0_0_1_1_1_312 [
-2.145031 -0.1357486 -4.617914 ]
1_1_1_0_0_1_1_1_313 [
-5.943637 -0.003112446 -7.630904 ]
1_1_1_0_0_1_1_1_314 [
0 -25.82314 -31.98673 ]
1_1_1_0_0_1_1_1_315 [
-8.178092e-05 -13.60563 -9.426649 ]
1_1_1_0_0_1_1_1_316 [
-0.00326875 -6.071715 -6.952539 ]
1_1_1_0_0_1_1_1_317 [
-17.92782 0 -24.64391 ]
1_1_1_0_0_1_1_1_318 [
-2.979753 -0.05447901 -6.11194 ]
1_1_1_0_0_1_1_1_319 [
-0.7661145 -1.118131 -1.568804 ]
1_0_0_0_0_0_0_1_31 [
-0.5749408 -1.961912 -1.215127 ]
1_0_0_0_0_0_0_0_10 [
-4.64927e-05 -9.977531 -20.60117 ]
1_0_1_1_1_1_0_1_120 [
-0.4925551 -1.135103 -2.694917 ]
1_0_1_1_1_1_0_1_131 [
-0.6127387 -1.958336 -1.148721 ]
1_1_0_0_0_0_0_1_142 [
-0.008494892 -6.882521 -4.901772 ]
1_1_0_0_0_1_1_1_153 [
0 -20.48085 -27.38916 ]
1_1_0_0_1_0_1_0_164 [
-0.5370184 -1.622399 -1.52286 ]
1_1_0_0_1_0_1_0_175 [
-24.08685 0 -29.42813 ]
1_1_0_0_1_1_1_0_186 [
-1.665665 -0.2307523 -4.074597 ]
1_0_0_0_0_0_0_0_1 [
-0.5880737 -1.945877 -1.198183 ]
1_1_0_0_1_0_1_1_210 [
-0.001396737 -6.574267 -21.30147 ]
1_1_0_1_0_1_1_0_221 [
-0.7456465 -1.893918 -0.980585 ]
1_0_0_0_0_0_1_1_42 [
-3.838613e-05 -10.23002 -13.01793 ]
1_0_0_0_0_0_1_1_43 [
-22.25132 0 -28.8467 ]
1_0_0_0_0_0_1_1_44 [
-6.688306 -0.001266626 -10.79875 ]
1_0_0_0_0_0_1_1_45 [
-0.429086 -2.197691 -1.436171 ]
1_0_0_0_0_0_1_1_46 [
-0.6683982 -1.928907 -1.072464 ]
1_0_0_0_1_0_0_1_47 [
-0.5767454 -1.972311 -1.206838 ]
1_0_0_0_1_0_0_1_48 [
-0.5789171 -1.965128 -1.206118 ]
1_0_0_0_1_0_0_1_49 [
-19.90514 0 -25.12686 ]
1_0_0_0_0_0_0_0_4 [
-4.768373e-07 -14.66496 -28.4888 ]
1_0_0_0_1_0_0_1_50 [
-0.01524216 -6.729354 -4.273614 ]
1_0_0_0_1_0_0_1_51 [
-3.576279e-07 -14.9054 -27.44406 ]
1_0_0_0_1_0_0_1_53 [
-0.003753785 -8.922103 -5.623135 ]
The format it is stored in is: <name>_<part> [<data points>]
I currently use a Perl script to sort the data points:
perl -n00e '
    while ( /([\d_]*)_(\d*) \s* \[ \s* (.*?) \s* \]/gmsx ) {
        ($name,$part,$datapoints) = ($1,$2,$3);
        $hash{$name}{$part} = $datapoints;
    }
    while ( ($key,$v) = each %hash ) {
        print "$key [\n", (
            map "${$v}{$_}\n", sort {$a<=>$b} keys %{$v}
        ), "]\n";
    }
'
That creates output like this:
0_0_1_1_0_1_1_1 [
-0.5757762 -1.949812 -1.219321
-0.5732827 -1.974719 -1.212248
-0.005632018 -5.198827 -9.280998
-0.004484621 -7.180546 -5.595852
-1.776234e-05 -10.93515 -20.11548
-22.73301 0 -29.42717
-4.227753 -0.01532919 -7.374347
-3.396693 -0.05122549 -4.10732
-0.0008418526 -7.08029 -20.86733
-21.26725 0 -27.1029
-2.457597 -0.09611109 -5.11661
-5.492554 -0.00666456 -5.981491
-12.60927 -3.576285e-06 -15.31444
-0.5809742 -1.953598 -1.2077
-0.5807223 -1.969571 -1.200681
]
...
This is correct, except that the closing square bracket should not be on a new line; it should be placed one space after the last data point. The Perl script doesn't look like it explicitly prints a newline there, but some of the commands are adding one. Is it possible to prevent this?
Here is what @bytepusher recommended in the comments:
while ( ($key,$v) = each %hash ) {
    print "$key [\n", join("\n",
        map "${$v}{$_}", sort {$a<=>$b} keys %{$v}
    ), " ]\n";
}
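For completeness, the same approach can be written as a standalone script instead of a one-liner. This is a minimal sketch; sorting the outer keys is an assumption on my part (the one-liner's each returns them in an unspecified order):

#!/usr/bin/perl
use strict;
use warnings;

local $/ = "";    # paragraph mode, like -00 on the command line
my %hash;
while (<>) {
    while ( /([\d_]*)_(\d*) \s* \[ \s* (.*?) \s* \]/gmsx ) {
        my ($name, $part, $datapoints) = ($1, $2, $3);
        $hash{$name}{$part} = $datapoints;
    }
}
for my $name (sort keys %hash) {
    my $v = $hash{$name};
    print "$name [\n",
          join("\n", map { $v->{$_} } sort { $a <=> $b } keys %$v),
          " ]\n";
}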
I have two lists a and b as follows:
a = ('church.n.01','church.n.02','church_service.n.01','church.n.04')
b = ('temple.n.01','temple.n.02','temple.n.03','synagogue.n.01')
I want to find the relatedness between members of a and b using a function get_relatedness(arg1, arg2). How can I operate on a and b in Perl so that every possible combination of an element of a with an element of b is passed to the function, using two nested for loops?
Please help me solve this, as I am new to Perl.
my @a = ('church.n.01','church.n.02','church_service.n.01','church.n.04');
my @b = ('temple.n.01','temple.n.02','temple.n.03','synagogue.n.01');

use Data::Dumper;
print Dumper [ get_relatedness(\@a, \@b) ];

sub get_relatedness {
    my ($c, $d) = @_;
    return map { my $t = $_; map [$t, $_], @$d } @$c;
}
Output:
$VAR1 = [
          [
            'church.n.01',
            'temple.n.01'
          ],
          [
            'church.n.01',
            'temple.n.02'
          ],
          [
            'church.n.01',
            'temple.n.03'
          ],
          [
            'church.n.01',
            'synagogue.n.01'
          ],
          [
            'church.n.02',
            'temple.n.01'
          ],
          [
            'church.n.02',
            'temple.n.02'
          ],
          [
            'church.n.02',
            'temple.n.03'
          ],
          [
            'church.n.02',
            'synagogue.n.01'
          ],
          [
            'church_service.n.01',
            'temple.n.01'
          ],
          [
            'church_service.n.01',
            'temple.n.02'
          ],
          [
            'church_service.n.01',
            'temple.n.03'
          ],
          [
            'church_service.n.01',
            'synagogue.n.01'
          ],
          [
            'church.n.04',
            'temple.n.01'
          ],
          [
            'church.n.04',
            'temple.n.02'
          ],
          [
            'church.n.04',
            'temple.n.03'
          ],
          [
            'church.n.04',
            'synagogue.n.01'
          ]
        ];
To compare all combinations of elements in the two arrays using two nested loops, you just need to loop through one and, for each element of the first array, do an inner loop over the elements of the second array:
my @a = ('church.n.01','church.n.02','church_service.n.01','church.n.04');
my @b = ('temple.n.01','temple.n.02','temple.n.03','synagogue.n.01');

my $relatedness;
for my $outer (@a) {
    for my $inner (@b) {
        $relatedness += get_relatedness($outer, $inner);
    }
}
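If you need the individual scores afterwards rather than a running total, you could store each pair's result instead. A minimal sketch, assuming get_relatedness returns a numeric score (the %scores hash and its "outer|inner" key format are just illustrative choices):

my %scores;
for my $outer (@a) {
    for my $inner (@b) {
        $scores{"$outer|$inner"} = get_relatedness($outer, $inner);
    }
}

# e.g. look up one combination:
print $scores{'church.n.01|temple.n.01'}, "\n";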
I have written a simple Perl script:
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
$Data::Dumper::Pair = '';
$Data::Dumper::Sortkeys = 1;
$Data::Dumper::Terse = 1;
my @A = ( [ 1,2 ], [ 3,4,5 ], [ 6,7,8 ] );
print Dumper(@A);
The output I get is:
> ./temp9.pl
[
  1,
  2
]
[
  3,
  4,
  5
]
[
  6,
  7,
  8
]
But what I need is for the elements (the arrays) to be separated by commas. I am fairly familiar with Data::Dumper, but is there any fix for this?
The expected output is:
[
  1,
  2
],
[
  3,
  4,
  5
],
[
  6,
  7,
  8
]
Another question I have: is there any way in Data::Dumper to add some text before each element in an array? For example, here in the array of arrays, can I add "xyz" before the opening bracket of each array?
UPDATE #2
Try print 'my $aRef = ', Dumper(\@A), ";";. It will print like this:
my $aRef = [
  [
    1,
    2
  ],
  [
    3,
    4,
    5
  ],
  [
    6,
    7,
    8
  ]
]
;
If you want to change the output of Dumper, you can redirect STDOUT to a variable (see open), or simply post-process the string Dumper returns:
print map { "Something $_\n" } split "\n", Dumper(\@A);
Output:
Something [
Something   [
Something     1,
Something     2
Something   ],
Something   [
Something     3,
Something     4,
Something     5
Something   ],
Something   [
Something     6,
Something     7,
Something     8
Something   ]
Something ]
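For the prefix question specifically, Data::Dumper also has a built-in Pad setting that prefixes every line of output, which avoids the split/map step. A minimal sketch, reusing @A from the question:

use strict;
use warnings;
use Data::Dumper;

my @A = ( [ 1,2 ], [ 3,4,5 ], [ 6,7,8 ] );
$Data::Dumper::Pad = 'Something ';   # string prefixed to every output line
print Dumper(\@A);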