I tried to run this MongoDB query:
db.building.update( { "_id":ObjectId("53041776a2de55000079b4ba") },
{ $set :
{"geometry":{"type":"Polygon","coordinates":[[[127.357858169667,36.36773567198263],[127.35825816966712,36.36773567198263],[127.35805432178199,36.3675356722397],[127.35825816966712,36.36733567198263],[127.357858169667,36.36733567198263],[127.357858169667,36.36773567198263],[127.357858169667,36.36773567198263]]]}}
}
)
The result is:
Exterior shell of polygon is invalid: { type: "Polygon", coordinates: [ [ [ 127.357858169667, 36.36773567198263 ], [ 127.3582581696671, 36.36773567198263 ], [ 127.358054321782, 36.3675356722397 ], [ 127.3582581696671, 36.36733567198263 ], [ 127.357858169667, 36.36733567198263 ], [ 127.357858169667, 36.36773567198263 ], [ 127.357858169667, 36.36773567198263 ] ] ] }
but GeoJSONLint says it is valid.
Please help. Thank you.
This error will occur if the polygon is malformed, for example:
a coordinate is duplicated (other than the first and last, which have to be the same to close the ring)
1/2 duplicate:
1/2-----3
| |
| |
| |
| |
| |
| |
0------4
if the lines of the polygon intersect each other (e.g. while trying to draw a pentagram)
intersection:
1 3
|\ /|
| \ / |
| \/ |
| /\ |
| / \ |
|/ \|
0/4 2
Duplicates are easy to detect: just check whether two consecutive coordinates are equal and drop the extra one (the ring still has to end with a single copy of the first coordinate so it stays closed).
If you have an intersection in your polygon, you can try reordering the points (similar question: MongoDB Error: Exterior shell of polygon is invalid?).
If this doesn't work, check whether the GeoJSON is valid (from this question: Why is the exterior shell of this GeoJSON invalid?).
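In the polygon from the question, the closing point [127.357858169667, 36.36773567198263] appears twice at the end of the ring. Dropping one of the two trailing duplicates should give a ring MongoDB accepts; a sketch of the corrected update (untested):
db.building.update(
    { "_id": ObjectId("53041776a2de55000079b4ba") },
    { $set: {
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [127.357858169667,   36.36773567198263],
                [127.35825816966712, 36.36773567198263],
                [127.35805432178199, 36.3675356722397],
                [127.35825816966712, 36.36733567198263],
                [127.357858169667,   36.36733567198263],
                [127.357858169667,   36.36773567198263]
            ]]
        }
    } }
)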
   Reservation_branch_code  ON_ACCOUNT  Rcount
0                     1101         170    5484
1                     1103         101    5111
2                     1118           1     232
3                     1121           0      27
4                     1126          90     191
I would like to chart this sorted by "Rcount", with "Reservation_branch_code" on the x axis.
The code below gives me a chart that is not sorted by Rcount:
base = alt.Chart(df1).transform_fold(
    ['Rcount', 'ON_ACCOUNT'],
    as_=['column', 'value']
)

bars = base.mark_bar().encode(
    # x='Reservation_branch_code:N',
    x='Reservation_branch_code:O',
    y=alt.Y('value:Q', stack=None),  # stack=None enables layered bars
    color=alt.Color('column:N', scale=alt.Scale(range=["#f50520", "#bab6b7"])),
    tooltip=alt.Tooltip(['ON_ACCOUNT', 'Rcount']),
    # order=alt.Order('value:Q')
)

text = base.mark_text(
    align='center',
    color='black',
    baseline='middle',
    dx=0, dy=-8,  # nudges the text up so it doesn't sit on top of the bar
).encode(
    x='Reservation_branch_code:N',
    y='value:Q',
    text=alt.Text('value:Q', format='.0f')
)

rule = alt.Chart(df1).mark_rule(color='blue').encode(
    y='mean(Rcount):Q'
)

(bars + text + rule).properties(width=790, height=330)
I sorted the data in the dataframe and used that df in the Altair chart, but the x axis is still not sorted by the Rcount column. Thanks.
You can pass a list with the sort order:
import altair as alt
from vega_datasets import data

source = data.barley()

alt.Chart(source).mark_bar().encode(
    x='sum(yield):Q',
    y=alt.Y('site:N', sort=source['site'].unique().tolist())
)
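Applied to the chart in the question, a rough sketch (reusing df1 and base from above, and assuming you want the branch codes ordered by descending Rcount) is to build the list first and pass it to the x encoding:

# order the branch codes by descending Rcount and use that list as the sort order
order = df1.sort_values('Rcount', ascending=False)['Reservation_branch_code'].tolist()

bars = base.mark_bar().encode(
    x=alt.X('Reservation_branch_code:O', sort=order),
    y=alt.Y('value:Q', stack=None),
    color=alt.Color('column:N', scale=alt.Scale(range=["#f50520", "#bab6b7"])),
)

The text layer should be given the same sort list so its labels stay aligned with the bars.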
I'm making a custom task to run Perl unit tests with yath. The output of that command contains details about failed tests, which I would like to filter and display as problems.
I've written the following matcher for my output:
"problemMatcher": {
"owner": "yath",
"fileLocation": [ "relative", "${workspaceFolder}" ],
"severity": "error",
"pattern": [
{
"regexp": "\\[\\s*FAIL\\s*\\]\\s*job\\s*\\d+\\s*\\+?\\s*(.+)",
"message": 1,
},{
"regexp": "\\(\\s*DIAG\\s*\\)\\s*job\\s*\\d+\\s*\\+?\\s*at (.+) line (\\d+)\\.",
"file": 1,
"line": 2
}
]
}
This is supposed to match two different lines in the following output, which I will present as code for copying, and as a screenshot.
** Defaulting to the 'test' command **
( LAUNCH ) job 1 t/foo.t
( NOTE ) job 1 Seeded srand with seed '20220414' from local date.
[ PASS ] job 1 + passing test
[ FAIL ] job 1 + failing test
( DIAG ) job 1 Failed test 'failing test'
( DIAG ) job 1 at t/foo.t line 57.
[ PLAN ] job 1 Expected assertions: 2
( FAILED ) job 1 t/foo.t
( TIME ) job 1 Startup: 0.30841s | Events: 0.01992s | Cleanup: 0.00417s | Total: 0.33250s
< REASON > job 1 Test script returned error (Err: 1)
< REASON > job 1 Assertion failures were encountered (Count: 1)
The following jobs failed:
+--------------------------------------+-----------------------------------+
| Job ID | Test File |
+--------------------------------------+-----------------------------------+
| e7aee661-b49f-4b60-b815-f420d109457a | t/foo.t |
+--------------------------------------+-----------------------------------+
Yath Result Summary
-----------------------------------------------------------------------------------
Fail Count: 1
File Count: 1
Assertion Count: 2
Wall Time: 0.74 seconds
CPU Time: 0.76 seconds (usr: 0.20s | sys: 0.00s | cusr: 0.49s | csys: 0.07s)
CPU Usage: 103%
--> Result: FAILED <--
In the terminal it's actually pretty, with colours.
I suspect there are ANSI escape sequences in this output. I could pass a flag to yath to make it not print colours, but I would still like to be able to read the output myself, so that isn't ideal.
Do I have to change my pattern to match the escape sequences (I can read the source of the program that prints them, but it's annoying), or are they in fact stripped out and my pattern is wrong in some other way I can't see?
Here's the first pattern as a regex101 match, and here's the second.
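If the escape sequences do turn out to still be in the text the matcher sees, one option (an untested sketch, assuming the colours are plain SGR sequences such as ESC[31m) is to let each pattern skip them explicitly, for example:
"regexp": "(?:\\u001b\\[[0-9;]*m)*\\[\\s*FAIL\\s*\\]\\s*job\\s*\\d+\\s*\\+?\\s*(.+)"
Note that with this the captured message can still end in a trailing colour-reset sequence.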
I want to build perf on Yocto (Zeus branch), for an image without python2. The recipe is this one:
https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/recipes-kernel/perf/perf.bb?h=zeus-22.0.4
Running this recipe yields this error:
| ERROR: Execution of '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/temp/run.do_compile.19113' failed with exit code 1:
| make: Entering directory '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/perf-1.0/tools/perf'
| BUILD: Doing 'make -j4' parallel build
| Warning: arch/x86/include/asm/disabled-features.h differs from kernel
| Warning: arch/x86/include/asm/required-features.h differs from kernel
| Warning: arch/x86/include/asm/cpufeatures.h differs from kernel
| Warning: arch/arm/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/arm64/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/powerpc/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/x86/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/x86/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/x86/include/uapi/asm/kvm_perf.h differs from kernel
| Warning: arch/x86/include/uapi/asm/svm.h differs from kernel
| Warning: arch/x86/include/uapi/asm/vmx.h differs from kernel
| Warning: arch/powerpc/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/s390/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/s390/include/uapi/asm/kvm_perf.h differs from kernel
| Warning: arch/s390/include/uapi/asm/sie.h differs from kernel
| Warning: arch/arm/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/arm64/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/x86/lib/memcpy_64.S differs from kernel
| Warning: arch/x86/lib/memset_64.S differs from kernel
|
| Auto-detecting system features:
| ... dwarf: [ on ]
| ... dwarf_getlocations: [ on ]
| ... glibc: [ on ]
| ... gtk2: [ OFF ]
| ... libaudit: [ OFF ]
| ... libbfd: [ on ]
| ... libelf: [ on ]
| ... libnuma: [ OFF ]
| ... numa_num_possible_cpus: [ OFF ]
| ... libperl: [ OFF ]
| ... libpython: [ on ]
| ... libslang: [ on ]
| ... libcrypto: [ on ]
| ... libunwind: [ on ]
| ... libdw-dwarf-unwind: [ on ]
| ... zlib: [ on ]
| ... lzma: [ on ]
| ... get_cpuid: [ OFF ]
| ... bpf: [ on ]
|
| Makefile.config:352: DWARF support is off, BPF prologue is disabled
| Makefile.config:547: Missing perl devel files. Disabling perl scripting support, please install perl-ExtUtils-Embed/libperl-dev
| Makefile.config:594: Python 3 is not yet supported; please set
| Makefile.config:595: PYTHON and/or PYTHON_CONFIG appropriately.
| Makefile.config:596: If you also have Python 2 installed, then
| Makefile.config:597: try something like:
| Makefile.config:598:
| Makefile.config:599: make PYTHON=python2
| Makefile.config:600:
| Makefile.config:601: Otherwise, disable Python support entirely:
| Makefile.config:602:
| Makefile.config:603: make NO_LIBPYTHON=1
| Makefile.config:604:
| Makefile.config:605: *** . Stop.
| Makefile.perf:205: recipe for target 'sub-make' failed
| make[1]: *** [sub-make] Error 2
| Makefile:68: recipe for target 'all' failed
| make: *** [all] Error 2
| make: Leaving directory '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/perf-1.0/tools/perf'
| WARNING: exit code 1 from a shell command.
|
ERROR: Task (/home/yocto/sources/poky/meta/recipes-kernel/perf/perf.bb:do_compile) failed with exit code '1'
NOTE: Tasks Summary: Attempted 1947 tasks of which 1946 didn't need to be rerun and 1 failed.
Looking at the recipe, NO_LIBPYTHON seems to be set already:
PACKAGECONFIG ??= "scripting tui libunwind"
PACKAGECONFIG[dwarf] = ",NO_DWARF=1"
PACKAGECONFIG[scripting] = ",NO_LIBPERL=1 NO_LIBPYTHON=1,perl python3"
# gui support was added with kernel 3.6.35
# since 3.10 libnewt was replaced by slang
# to cover a wide range of kernel we add both dependencies
PACKAGECONFIG[tui] = ",NO_NEWT=1,libnewt slang"
PACKAGECONFIG[libunwind] = ",NO_LIBUNWIND=1 NO_LIBDW_DWARF_UNWIND=1,libunwind"
PACKAGECONFIG[libnuma] = ",NO_LIBNUMA=1"
PACKAGECONFIG[systemtap] = ",NO_SDT=1,systemtap"
PACKAGECONFIG[jvmti] = ",NO_JVMTI=1"
# libaudit support would need scripting to be enabled
PACKAGECONFIG[audit] = ",NO_LIBAUDIT=1,audit"
PACKAGECONFIG[manpages] = ",,xmlto-native asciidoc-native"
Why does it not pick up the flag?
PACKAGECONFIG has scripting in it by default.
PACKAGECONFIG options are defined as follows:
PACKAGECONFIG[f1] = "--with-f1, \
--without-f1, \
build-deps-for-f1, \
runtime-deps-for-f1, \
runtime-recommends-for-f1, \
packageconfig-conflicts-for-f1 \
"
PACKAGECONFIG[scripting] is set to ",NO_LIBPERL=1 NO_LIBPYTHON=1,perl python3". See the first comma? Everything between the first and second comma (NO_LIBPERL=1 NO_LIBPYTHON=1) is applied only when scripting is not selected; the third field (perl python3) lists the build dependencies pulled in when scripting is selected.
So if you do not want the Python dependency to be pulled in, just set PACKAGECONFIG to a value without scripting in it.
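For example (a sketch; keep whichever of the remaining features you actually want), in local.conf with the pn- override used on Zeus, or equivalently with PACKAGECONFIG = "..." in a perf bbappend:
# local.conf: drop "scripting" so NO_LIBPERL=1 NO_LIBPYTHON=1 are passed to the perf build
PACKAGECONFIG_pn-perf = "tui libunwind"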
Though I'm actually surprised the default does not build; that is definitely something the autobuilders test, so there is probably something else going on.
cf. https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#var-PACKAGECONFIG
I'm running a K-means clustering model and I want to analyse the cluster centroids. However, the centers output is a list of my 20 centroids, with their coordinates (8 each) as an array. I need it as a dataframe, with clusters 1:20 as rows and their attribute values (centroid coordinates) as columns, like so:
c1 | 0.85 | 0.03 | 0.01 | 0.00 | 0.12 | 0.01 | 0.00 | 0.12
c2 | 0.25 | 0.80 | 0.10 | 0.00 | 0.12 | 0.01 | 0.00 | 0.77
c3 | 0.05 | 0.10 | 0.00 | 0.82 | 0.00 | 0.00 | 0.22 | 0.00
The dataframe format is important because what I want to do, for each centroid, is:
Identify the 3 strongest attributes
Create a "name" for each of the 20 centroids that is a concatenation of the 3 most dominant traits in that centroid
For example:
c1 | milk_eggs_cheese
c2 | meat_milk_bread
c3 | toiletries_bread_eggs
This code is running in Zeppelin, EMR version 5.19, Spark 2.4. The model works great, but this is the boilerplate code from the Spark documentation (https://spark.apache.org/docs/latest/ml-clustering.html#k-means), which produces the list-of-arrays output that I can't really use.
centers = model.clusterCenters()
print("Cluster Centers: ")
for center in centers:
    print(center)
This is an excerpt of the output I get.
Cluster Centers:
[0.12391775 0.04282062 0.00368751 0.27282358 0.00533401 0.03389095
0.04220946 0.03213536 0.00895981 0.00990327 0.01007891]
[0.09018751 0.01354349 0.0130329 0.00772877 0.00371508 0.02288211
0.032301 0.37979978 0.002487 0.00617438 0.00610262]
[7.37626746e-02 2.02469798e-03 4.00944473e-04 9.62304581e-04
5.98964859e-03 2.95190585e-03 8.48736175e-01 1.36797882e-03
2.57451073e-04 6.13320072e-04 5.70559278e-04]
Based on How to convert a list of array to Spark dataframe I have tried this:
df = sc.parallelize(centers).toDF(['fresh_items', 'wine_liquor', 'baby', 'cigarettes', 'fresh_meat', 'fruit_vegetables', 'bakery', 'toiletries', 'pets', 'coffee', 'cheese'])
df.show()
But this throws the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
model.clusterCenters() gives you a list of NumPy arrays, not a list of lists like in the answer you linked. Just convert the NumPy arrays to lists before creating the dataframe:
bla = [e.tolist() for e in centers]
df = sc.parallelize(bla).toDF(['fresh_items', 'wine_liquor', 'baby', 'cigarettes', 'fresh_meat', 'fruit_vegetables', 'bakery', 'toiletries', 'pets', 'coffee', 'cheese'])
# or: df = spark.createDataFrame(bla, ['fresh_items', 'wine_liquor', 'baby', 'cigarettes', 'fresh_meat', 'fruit_vegetables', 'bakery', 'toiletries', 'pets', 'coffee', 'cheese'])
df.show()
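For the follow-up step in the question (naming each centroid after its three strongest attributes), a small pandas sketch along these lines should work once the centers are plain lists; the column names are the ones assumed above:

import pandas as pd

cols = ['fresh_items', 'wine_liquor', 'baby', 'cigarettes', 'fresh_meat',
        'fruit_vegetables', 'bakery', 'toiletries', 'pets', 'coffee', 'cheese']

# one row per centroid, one column per attribute
pdf = pd.DataFrame([e.tolist() for e in centers], columns=cols)

# join the names of the three largest coordinates in each row, e.g. "bakery_coffee_pets"
pdf['name'] = pdf[cols].apply(lambda row: '_'.join(row.nlargest(3).index), axis=1)
print(pdf['name'])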
I am trying to set a property of a graphics object, and when I execute my code I get the list of properties of that graphics object printed out. I tried adding a semicolon at the end of the set command, but it does not help. Is there a way to avoid getting the output of the set command in the command window?
BackgroundColor
Color
DisplayName
EdgeColor
Editing: [ on | off ]
FontAngle: [ {normal} | italic | oblique ]
FontName
FontSize
FontUnits: [ inches | centimeters | normalized | {points} | pixels ]
FontWeight: [ light | {normal} | demi | bold ]
HorizontalAlignment: [ {left} | center | right ]
LineStyle: [ {-} | -- | : | -. | none ]
LineWidth
Margin
Position
Rotation
String
Units: [ inches | centimeters | normalized | points | pixels | characters | {data} ]
Interpreter: [ latex | {tex} | none ]
VerticalAlignment: [ top | cap | {middle} | baseline | bottom ]
ButtonDownFcn: string -or- function handle -or- cell array
Children
Clipping: [ {on} | off ]
CreateFcn: string -or- function handle -or- cell array
DeleteFcn: string -or- function handle -or- cell array
BusyAction: [ {queue} | cancel ]
HandleVisibility: [ {on} | callback | off ]
HitTest: [ {on} | off ]
Interruptible: [ {on} | off ]
Parent
Selected: [ on | off ]
SelectionHighlight: [ {on} | off ]
Tag
UIContextMenu
UserData
Visible: [ {on} | off ]
I want to avoid getting this output in the command window. I am using the following code:
p = mtit('Global Title');
set(p);
mtit is a function from the MATLAB Central File Exchange that displays a common title for subplots.
The command set(p) does not set any property of p. The correct syntax for actually setting a property of p is:
set(p, 'PropertyName', Value)
When you type only set(p), you get all the property-value information printed out (which is exactly what you don't want).
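For example, to change one of the properties listed above without anything being echoed to the command window (a sketch; 'FontSize' is used purely as an illustration):
p = mtit('Global Title');
set(p, 'FontSize', 14);   % sets the property; no property list is printed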