VS Code has support for SchemaStore, which gives you autocompletion in YAML files. But VS Code does not detect the schema if the filename is different.
For example, if I edit .golangci.yaml the corresponding schema gets used. If I edit .golangci-foo.yaml the schema is not detected.
How can I enable the schema for files where the filename is different?
You can do this using the json.schemas setting. Like so:
"json.schemas": [
{
"fileMatch": [ "*tsconfig*.json" ],
"url": "http://json.schemastore.org/tsconfig",
},{
"fileMatch": [ "*cSpell.json" ],
"url": "https://raw.githubusercontent.com/streetsidesoftware/cspell/cspell4/cspell.schema.json",
},{
"fileMatch": [ "*.webmanifest" ],
"url": "http://json.schemastore.org/web-manifest",
},{
"fileMatch": [ "*package*.json" ],
"url": "https://json.schemastore.org/package",
}
],
As indicated in the comments, this worked for the asker, and they used the following:
"json.schemas": [
{
"fileMatch": [ "*.golangci*yaml" ],
"url": "https://json.schemastore.org/golangci-lint.json",
}
],
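If the completion is coming from the Red Hat YAML extension (the usual provider of SchemaStore support for YAML files), the analogous setting is yaml.schemas, which maps a schema URL to one or more file globs. A minimal sketch, reusing the asker's glob and schema URL:
"yaml.schemas": {
  "https://json.schemastore.org/golangci-lint.json": [ "*.golangci*yaml" ]
}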
How do I parse the log line below using grok?
Also, how do I match the date pattern?
I tried %{TIMESTAMP_ISO8601:logtime} but got no match.
Log Line:
13-Nov-2019 00:00:20.230 DEBUG [[ACTIVE] ExecuteThread: '272' for queue: 'weblogic.kernel.Default (self-tuning)'] [196.157.7.12] 965929132 [wire] >> "[\n]"
The question is a bit unclear about exactly which fields you want these mapped to.
So, here's what matches for me:
%{MONTHDAY:day}[-]%{MONTH:month}[-]%{YEAR:year} %{TIME:time} %{WORD:logtype} \[\[%{WORD:status}\] ExecuteThread: '%{NUMBER:threadNumber}' for queue: '%{GREEDYDATA:queueData}'\] \[%{IP:ip}\] %{NUMBER:numbers} \[%{WORD:text}\] >> "\[\\n\]"
The first four fields answer your date/time pattern query, and the rest is what I have used to fit the remaining fields. Since no exact mappings were provided, I have mapped them as per my understanding.
This is the output:
{
"day": [
[
"13"
]
],
"month": [
[
"Nov"
]
],
"year": [
[
"2019"
]
],
"time": [
[
"00:00:20.230"
]
],
"logtype": [
[
"DEBUG"
]
],
"status": [
[
"ACTIVE"
]
],
"threadNumber": [
[
"272"
]
],
"queueData": [
[
"weblogic.kernel.Default (self-tuning)"
]
],
"ip": [
[
"196.157.7.12"
]
],
"numbers": [
[
"965929132"
]
],
"text": [
[
"wire"
]
]
}
You can break 'time' down further if you want. For any other combinations of patterns, refer to the Grok patterns reference.
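If you would rather capture the whole timestamp as a single field instead of separate day/month/year/time fields, a custom named capture should also work. A sketch (the field name logtime is just an example; the rest of the pattern stays the same):
(?<logtime>%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME})
In Logstash you could then parse that field with the date filter, using a format along the lines of dd-MMM-yyyy HH:mm:ss.SSS.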
In OpenLayers 5.3.0, I've created a MultiPolygon using the 'difference' tool in turf.js. The turf.js MultiPolygon looks fine when I examine the JSON, but when I try to use that to create a feature in OpenLayers, I get "Uncaught TypeError: t.addEventListener is not a function".
I've tried many combinations of JSON.stringify, JSON.parse, GeoJSON.readFeatures, .getCoordinates()... I tried adding the turf.js MultiPolygon as a feature directly via source.addFeature(multiPolygonGeometry), but then I get 'Uncaught TypeError: e.getId is not a function'. I also tried source.addFeatures(multiPolygonGeometry) (note the plural 'addFeatures'), and that didn't give me any errors, but also didn't appear to add anything to the source.
Relevant lines in my code are as follows:
multiPolygonGeometry = turf.difference(largeArea, maskAreas);
multiPolygonFeature = new ol.Feature({
  geometry: multiPolygonGeometry,
  id: 'multiPolygonFeature1'
});
multiPolygonGeometry looks like this in the console:
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "MultiPolygon",
"coordinates": [
[
[
[
140.9716711384525,
-36.97645228850101
],
[
140.97418321565786,
-36.97679331852701
],
[
140.9741163253784,
-36.97713531664132
],
[
140.9740304946899,
-36.97903805606076
],
[
140.97437381744382,
-36.98025509866784
],
[
140.97594864874696,
-36.98127512642501
],
[
140.9714880598484,
-36.9804459718428
],
[
140.9714500775476,
-36.97642733756345
],
[
140.9716711384525,
-36.97645228850101
]
]
],
[
[
[
140.97455248763328,
-36.97684309230892
],
[
140.97751071844857,
-36.97723786980259
],
[
140.97749308140382,
-36.977304276099005
],
[
140.97715289421623,
-36.97770848336402
],
[
140.97661807025068,
-36.97969050789806
],
[
140.97628355026242,
-36.97958658471583
],
[
140.97634792327878,
-36.97900377288852
],
[
140.9764981269836,
-36.97866094031662
],
[
140.97510337829587,
-36.97727245260485
],
[
140.97455248763328,
-36.97684309230892
]
]
],
[
[
[
140.97628420893903,
-36.98092777726751
],
[
140.97617893060388,
-36.98131793226549
],
[
140.97596635572492,
-36.98127841787872
],
[
140.97628420893903,
-36.98092777726751
]
]
]
]
},
"ol_lm": {
"change": []
}
}
And then I get this message:
events.js:174 Uncaught TypeError: t.addEventListener is not a function
at v (events.js:174)
at e.handleGeometryChanged_ (Feature.js:210)
at e (events.js:41)
at e.dispatchEvent (Target.js:101)
at e.notify (Object.js:151)
at e.set (Object.js:170)
at e.setProperties (Object.js:186)
at new e (Feature.js:108)
at getPolygon (maskedPolygon.js:319) <-- this is the second line in my code sample above
at <anonymous>:1:1
What am I doing wrong here? I'm sure it's something simple but I just can't seem to crack this.
Turf works with GeoJSON features, so your "multiPolygonGeometry" is a GeoJSON feature rather than an OpenLayers geometry. It can be parsed by OpenLayers and then given an id:
multiPolygonFeature = new ol.format.GeoJSON().readFeature(multiPolygonGeometry);
multiPolygonFeature.setId('multiPolygonFeature1');
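If it helps, here is a short usage sketch of the above. The vectorSource name and the projections are assumptions, so adjust them to your map; readFeature accepts optional projection options when the view is not in lon/lat:
var multiPolygonFeature = new ol.format.GeoJSON().readFeature(multiPolygonGeometry, {
  dataProjection: 'EPSG:4326',    // turf outputs plain GeoJSON in lon/lat
  featureProjection: 'EPSG:3857'  // whatever projection your map view uses
});
multiPolygonFeature.setId('multiPolygonFeature1');
vectorSource.addFeature(multiPolygonFeature); // vectorSource is your existing ol.source.Vector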
I've uploaded a GeoJSON file to my Firebase Realtime Database and set up an HTTP GET request in my Angular app to retrieve the data. Here's a small sample of my data: it is simply an array of objects, where each object is a ZIP code. Below are two ZIP code objects.
[
{
"type":"Feature",
"properties":{
"STATEFP10":"44",
"ZCTA5CE10":"02818",
"GEOID10":"4402818",
"CLASSFP10":"B5",
"MTFCC10":"G6350",
"FUNCSTAT10":"S",
"ALAND10":56516634,
"AWATER10":8882294,
"INTPTLAT10":"+41.6429192",
"INTPTLON10":"-071.4857186",
"PARTFLG10":"N"
},
"geometry":{
"type":"Polygon",
"coordinates":[
[
[
-71.457848,
41.672076
],
[
-71.457848,
41.672414
],
[
-71.457848,
41.672932
],
[
-71.457916,
41.67407
]
]
]
}
},
{
"type":"Feature",
"properties":{
"STATEFP10":"44",
"ZCTA5CE10":"02878",
"GEOID10":"4402878",
"CLASSFP10":"B5",
"MTFCC10":"G6350",
"FUNCSTAT10":"S",
"ALAND10":75250623,
"AWATER10":8363563,
"INTPTLAT10":"+41.6101375",
"INTPTLON10":"-071.1804709",
"PARTFLG10":"N"
},
"geometry":{
"type":"Polygon",
"coordinates":[
[
[
-71.210351,
41.656166
],
[
-71.209897,
41.657718
],
[
-71.208399,
41.660013
],
[
-71.20787,
41.661323
]
]
]
}
}
]
I can retrieve all of my data no problem, but I can't seem to filter specific ZIP codes. I have read over the Firebase guides (https://firebase.google.com/docs/database/rest/retrieve-data) and cannot get this to work with query parameters. What I'm trying to do is filter an object based on the "ZCTA5CE10" value in "properties". I have tried many different combinations; below is one of them:
getZipData() {
  return this.http.get(this.url + '?orderBy="properties/ZCTA5CE10"&equalTo="02818"');
}
I get a 400 error when attempting to use the query parameters. What am I doing wrong?
Also, is it possible to filter on multiple ZIP codes, e.g. &equalTo="02818"&equalTo="02878"?
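One thing worth checking (a hedged aside, since a 400 can have other causes): REST queries that use orderBy need a matching index defined in the database rules, otherwise the REST API typically rejects the request. A minimal sketch of the rules, assuming the ZIP list lives at a path such as /zips; adjust the path to wherever you uploaded the file:
{
  "rules": {
    "zips": {
      ".indexOn": [ "properties/ZCTA5CE10" ]
    }
  }
}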
I am new to MongoDB and I am stuck trying to get unique subdocuments in an array.
A document in my collection looks like this:
{
"PubDate": "1/01/01 00:00",
"Title": "Identification of DNA-Dependent Protein Kinase Catalytic Subunit (DNA-PKcs) as a Novel Target of Bisphenol A",
"Datums": [
{
"evidence_id": "3515620_6",
"evidence": [
"\n\nTo examine the interaction between DNA-PKcs and Ku70/Ku80 more directly, we performed immunoprecipitation (IP) using FLAG-Ku70 or FLAG-Ku80 recombinants, which were expressed in 293T cells after IR-irradiation (Fig. 4B\n ) or UV-irradiation (Fig. 4C\n ). After IR-irradiation, co-precipitation of DNA-PKcs with Ku80 increased compared with that in the non-irradiated controls (Fig. 4B\n lanes 7 and 8)."
],
"map": {
"change": [
{
"Text": "increased"
}
],
"subject": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"treatment": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"assay": [
{
"Text": "copptby"
}
]
}
},
{
"evidence_id": "3515620_6",
"evidence": [
"\n\nTo examine the interaction between DNA-PKcs and Ku70/Ku80 more directly, we performed immunoprecipitation (IP) using FLAG-Ku70 or FLAG-Ku80 recombinants, which were expressed in 293T cells after IR-irradiation (Fig. 4B\n ) or UV-irradiation (Fig. 4C\n ). After IR-irradiation, co-precipitation of DNA-PKcs with Ku80 increased compared with that in the non-irradiated controls (Fig. 4B\n lanes 7 and 8)."
],
"map": {
"change": [
{
"Text": "increased"
}
],
"subject": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"treatment": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"assay": [
{
"Text": "copptby"
}
]
}
},
{
"evidence_id": "3515620_6",
"evidence": [
"\n\nTo examine the interaction between DNA-PKcs and Ku70/Ku80 more directly, we performed immunoprecipitation (IP) using FLAG-Ku70 or FLAG-Ku80 recombinants, which were expressed in 293T cells after IR-irradiation (Fig. 4B\n ) or UV-irradiation (Fig. 4C\n ). After IR-irradiation, co-precipitation of DNA-PKcs with Ku80 increased compared with that in the non-irradiated controls (Fig. 4B\n lanes 7 and 8)."
],
"map": {
"change": [
{
"Text": "increased"
}
],
"subject": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"treatment": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"assay": [
{
"Text": "copptby"
}
]
}
}
],
"Volume": "7",
"FullJournalName": "PLoS ONE",
"Authors": "Ito Y, Ito T, Karasawa S, Enomoto T, Nashimoto A, Hase Y, Sakamoto S, Mimori T, Matsumoto Y, Yamaguchi Y, Handa H",
"Issue": "12",
"Pages": "e50481",
"PMCID": "3515620"
}
In the above example, the "Datums" array holds a single distinct subdocument (repeated three times), but usually the "Datums" field will have around 20-30 subdocuments. I want my MongoDB query to output documents (that satisfy certain criteria) in which the "Datums" field contains only unique subdocuments. To do that I am using the following MongoDB query:
db.My_Datums.aggregate(
  [
    { "$match": {
      "Datums": {
        "$elemMatch": {
          "map.treatment.Entity.uniprotSym": { "$in": ["P33981", "P78527"] },
          "map.assay.Text": "copptby"
        }
      }
    }},
    { "$project": { "PMCID": 1, "Title": 1, "PubDate": 1, "Volume": 1, "Issue": 1, "Pages": 1, "FullJournalName": 1, "Authors": 1, "Datums.map.assay.Text": 1, "Datums.map.change.Text": 1, "Datums.map.subject.Entity.strings": 1, "Datums.map.treatment.Entity.uniprotSym": 1, "Datums.evidence_id": 1, "_id": 0 }},
    { "$unwind": "$Datums" },
    { "$match": { "Datums.map.treatment.Entity.uniprotSym": { "$in": ["P33981", "P78527"] }, "Datums.map.assay.Text": "copptby" }},
    { "$group": { "_id": "$PMCID", "Datums": { "$addToSet": "$Datums" }}}
  ]
  // , { allowDiskUse: true }
)
But on running the above command, I am getting the below output:
{u'Datums': [{u'evidence_id': u'3515620_6',
u'map': {u'assay': [{u'Text': u'copptby'}],
u'change': [{u'Text': u'increased'}],
u'subject': [{u'Entity': {u'strings': u'dna-pkcs'}}],
u'treatment': [{u'Entity': {u'uniprotSym': u'P78527'}}]}},
{u'evidence_id': u'3515620_6',
u'map': {u'assay': [{u'Text': u'copptby'}],
u'change': [{u'Text': u'increased'}],
u'subject': [{u'Entity': {u'strings': u'dna-pkcs'}}],
u'treatment': [{u'Entity': {u'uniprotSym': u'P78527'}}]}},
{u'evidence_id': u'3515620_6',
u'map': {u'assay': [{u'Text': u'copptby'}],
u'change': [{u'Text': u'increased'}],
u'subject': [{u'Entity': {u'strings': u'dna-pkcs'}}],
u'treatment': [{u'Entity': {u'uniprotSym': u'P78527'}}]}}],
u'_id': u'3515620'}
What I am not understanding is why $addToSet is adding duplicate subdocuments to "Datums". Is there any way I can filter out the duplicates? What am I doing wrong in my query? I have searched and read up a lot, but couldn't find any solution. Any MongoDB guru out there who could help this noob? I will be eternally grateful to you!
Thanks in advance!
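For what it's worth, $addToSet only drops array elements that compare as exactly equal, which, as far as I understand, includes the field order of embedded documents, so subdocuments that merely look identical can still all be kept. A hedged workaround sketch (not necessarily the canonical fix): group on the fields that define "the same" datum for you, keep one copy of each, and then re-assemble the array per article. The grouping keys below are my choice; adjust them to whatever should make a datum unique:
db.My_Datums.aggregate([
  { "$match": { "Datums": { "$elemMatch": {
    "map.treatment.Entity.uniprotSym": { "$in": ["P33981", "P78527"] },
    "map.assay.Text": "copptby"
  }}}},
  { "$unwind": "$Datums" },
  { "$match": {
    "Datums.map.treatment.Entity.uniprotSym": { "$in": ["P33981", "P78527"] },
    "Datums.map.assay.Text": "copptby"
  }},
  // Collapse duplicates: one document per (article, evidence_id, assay, change) combination.
  { "$group": {
    "_id": {
      "PMCID": "$PMCID",
      "evidence_id": "$Datums.evidence_id",
      "assay": "$Datums.map.assay.Text",
      "change": "$Datums.map.change.Text"
    },
    "Datum": { "$first": "$Datums" }
  }},
  // Rebuild one document per article with the de-duplicated datums.
  { "$group": { "_id": "$_id.PMCID", "Datums": { "$push": "$Datum" }}}
])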