Talend: extract a JSON string as a key/value array

Hi, I work with Talend ESB 7.2 and I am trying to work on a tREST response.
I have already extracted the JSON using the tJsonExtractField component.
In the "metas" field I have another JSON string, and I am trying to extract this string as an array (with a key/value association).
Example of the meta field:
{
"482f75dd-a8e4-4f5c-80f0-85b5a7eeb340":[
"indéterminé"
],
"d8cac586-2d64-4fe6-bdf4-91a5bac3541e":[
"Id system"
],
"b17baa47-2aa4-4959-a4f1-073f97833ba2":[
"Intranet_Télé_services_Web"
],
"df35ac57-90eb-4e7f-af56-3f50be808ee1":[
"indéterminé"
],
"046f1767-f303-4f89-bed6-7c58dab5f47b":[
"indéterminé"
],
"ed979530-1dcc-4f48-853c-9dc43ebfc92d":[
"indéterminé"
],
"a19f08a9-edf1-489b-9fb3-17a6335df112":[
"indéterminé"
],
"60911596-67e7-4caf-afa1-67ff90c8fa8b":[
"indéterminé"
],
"0dee3ca9-7962-4ee5-bc02-89d80e315072":[
"saas"
],
"4e44268d-0231-4d1d-8e6d-13580dc89b31":[
"indéterminé"
],
"0fd60a21-0bb4-4e3f-a203-8cef11ad3149":[
"indéterminé"
],
"7c4b6cb3-1918-4ec5-85dc-add60186d29b":[
"indéterminé"
],
"48657111-f80f-4bd1-842d-0ce2c4a044b6":[
"indéterminé"
],
"6061a25d-3aaf-d58f-81a4-ad38878d2952":[
"indéterminé"
]
}
Edit 1: JSON.
Here are 2 examples out of 2000 items:
[
{
"icon":"fa fa-th",
"infra":false,
"dateMaj":1556582400000,
"meta":{
"482f75dd-a8e4-4f5c-80f0-85b5a7eeb340":[
"indéterminé"
],
"d8cac586-2d64-4fe6-bdf4-91a5bac3541e":[
"Abelium"
],
"b17baa47-2aa4-4959-a4f1-073f97833ba2":[
"indéterminé"
],
"df35ac57-90eb-4e7f-af56-3f50be808ee1":[
"Hyperfile"
],
"046f1767-f303-4f89-bed6-7c58dab5f47b":[
"indéterminé"
],
"ed979530-1dcc-4f48-853c-9dc43ebfc92d":[
"indéterminé"
],
"a19f08a9-edf1-489b-9fb3-17a6335df112":[
"indéterminé"
],
"60911596-67e7-4caf-afa1-67ff90c8fa8b":[
"indéterminé"
],
"0dee3ca9-7962-4ee5-bc02-89d80e315072":[
"indéterminé"
],
"4e44268d-0231-4d1d-8e6d-13580dc89b31":[
"Enfance"
],
"0fd60a21-0bb4-4e3f-a203-8cef11ad3149":[
"indéterminé"
],
"7c4b6cb3-1918-4ec5-85dc-add60186d29b":[
"indéterminé"
],
"48657111-f80f-4bd1-842d-0ce2c4a044b6":[
"indéterminé"
],
"6061a25d-3aaf-d58f-81a4-ad38878d2952":[
"19-LOG-037",
"19-LOG-042",
"19-LOG-043"
]
},
"history":{
},
"id":"60d62577-e852-4f1c-8e33-19f64a38c511",
"label":"DOMINO Web",
"cartoVersion":"3.15.10",
"code":"",
"description":"My app 1 ",
"teamleader":"nicolas.gentil#abc.com",
"businesses":[
"e99164a0-c331-4fc0-9f90-af47e535b2c7",
"94f526e9-f983-4278-99ed-59de62619cee",
"d72828f9-2dd2-4ca7-bcd1-f22923ad09b3"
]
},
{
"icon":"fa fa-th",
"infra":false,
"dateMaj":1521676800000,
"meta":{
"482f75dd-a8e4-4f5c-80f0-85b5a7eeb340":[
"indéterminé"
],
"d8cac586-2d64-4fe6-bdf4-91a5bac3541e":[
"Id system"
],
"b17baa47-2aa4-4959-a4f1-073f97833ba2":[
"Intranet_Télé_services_Web"
],
"df35ac57-90eb-4e7f-af56-3f50be808ee1":[
"indéterminé"
],
"046f1767-f303-4f89-bed6-7c58dab5f47b":[
"indéterminé"
],
"ed979530-1dcc-4f48-853c-9dc43ebfc92d":[
"indéterminé"
],
"a19f08a9-edf1-489b-9fb3-17a6335df112":[
"indéterminé"
],
"60911596-67e7-4caf-afa1-67ff90c8fa8b":[
"indéterminé"
],
"0dee3ca9-7962-4ee5-bc02-89d80e315072":[
"saas"
],
"4e44268d-0231-4d1d-8e6d-13580dc89b31":[
"indéterminé"
],
"0fd60a21-0bb4-4e3f-a203-8cef11ad3149":[
"indéterminé"
],
"7c4b6cb3-1918-4ec5-85dc-add60186d29b":[
"indéterminé"
],
"48657111-f80f-4bd1-842d-0ce2c4a044b6":[
"indéterminé"
],
"6061a25d-3aaf-d58f-81a4-ad38878d2952":[
"indéterminé"
]
},
"history":{
},
"id":"066a3fce-79e3-4a27-88c1-5217f2cd33f5",
"label":"Id system",
"cartoVersion":"3.15.10",
"code":"",
"description":"1 screen",
"teamleader":"claude.dibus#abc.com",
"businesses":[
"8cb8cb38-084f-4962-bd97-dc80b4d20880"
]
}
]
Edit 2: Expected output.
I am not sure exactly what output is possible, but transforming it into a valid JSON structure would already help, for example:
{
"meta":[
{
"id":"482f75dd-a8e4-4f5c-80f0-85b5a7eeb340",
"value":[
"indéterminé"
]
},
{
"id":"d8cac586-2d64-4fe6-bdf4-91a5bac3541e",
"value":[
"Id system"
]
}
]
}
Thanks for your help.

In a tLibraryLoad component, load the json-simple jar
and import the needed libraries (in the Advanced settings):
import java.util.ArrayList;
import java.util.List;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
In a tJavaFlex component you can then work on your input data (called onlyMet below):
JSONParser parser = new JSONParser();
try {
    // Parse the metas string and prepare the result list
    List<JSONObject> resultList = new ArrayList<>();
    JSONObject jsonObjects = (JSONObject) parser.parse((String) onlyMet.metas);
    // Turn each "<key>": [values] entry into {"id": <key>, "value": [values]}
    // (sequential stream: ArrayList is not thread-safe, so avoid parallelStream here)
    jsonObjects.keySet().stream().forEach(entry -> {
        JSONObject rjo = new JSONObject();
        rjo.put("id", entry);
        rjo.put("value", jsonObjects.get(entry));
        // System.out.printf("- meta %s : %s \n", entry, jsonObjects.get(entry)); // for debug if needed
        resultList.add(rjo);
    });
    // Output the result
    onlyOut.metas = resultList.toString();
} catch (final Exception e) {
    e.printStackTrace();
}
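If you want to check the transformation outside of Talend first, here is a minimal standalone sketch using the same json-simple API. The class name and the hard-coded sample string are only illustrative, and a JSONArray is used so the printed result is guaranteed to be valid JSON:
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;

public class MetaToKeyValue {
    public static void main(String[] args) throws Exception {
        // Illustrative sample of the "meta" map (two entries only)
        String metas = "{\"482f75dd-a8e4-4f5c-80f0-85b5a7eeb340\":[\"indéterminé\"],"
                     + "\"d8cac586-2d64-4fe6-bdf4-91a5bac3541e\":[\"Id system\"]}";

        JSONParser parser = new JSONParser();
        JSONObject metaObject = (JSONObject) parser.parse(metas);

        // Rebuild each "<uuid>": [values] entry as {"id": <uuid>, "value": [values]}
        JSONArray values = new JSONArray();
        for (Object key : metaObject.keySet()) {
            JSONObject entry = new JSONObject();
            entry.put("id", key);
            entry.put("value", metaObject.get(key));
            values.add(entry);
        }

        // Wrap it as in the expected structure: {"meta": [ {...}, {...} ]}
        JSONObject root = new JSONObject();
        root.put("meta", values);
        System.out.println(root.toJSONString());
    }
}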

Related

Watson Speech To Text, Word timestamps are not in sync with Audio

I am using Speech to Text with the following params:
timestamps=true&max_alternatives=1&model=en-US_NarrowbandModel&smart_formatting=true',
Headers - 'Content-Type' => 'audio/flac', 'Transfer-Encoding' => 'chunked'
and providing an audio/flac file to process, but the returned word time boundaries are not in sync with the audio.
For example, the response is:
take a morning I have 2 questions please %HESITATION first how how much of the ability
The timestamps are like these:
[
[
"take",
1409.48,
1409.62
],
[
"a",
1409.62,
1409.67
],
[
"morning",
1409.67,
1410.03
],
[
"I",
1410.06,
1410.17
],
[
"have",
1410.17,
1410.38
],
[
"two",
1410.41,
1410.58
],
[
"questions",
1410.58,
1411.05
],
[
"please",
1411.05,
1411.42
],
[
"%HESITATION",
1411.42,
1411.65
],
[
"first",
1411.65,
1412.17
],
[
"how",
1412.33,
1412.62
],
[
"how",
1412.65,
1412.77
],
[
"much",
1412.77,
1413
],
[
"of",
1413,
1413.1
],
[
"the",
1413.1,
1413.37
],
[
"ability",
1413.37,
1413.82
]
]
But in the actual audio these words occur at different times (a few seconds' difference).
Any suggestions?

Highcharts: two datetime xAxis chart with the same tick interval

I have been thinking about this problem for two days, but I don't know how to make a chart with two normalized datetime xAxes.
I tried using linkedTo and normalizing the data arrays (but that is heavy overkill).
linkedTo shows only the overlapping data.
Without linkedTo and array normalization, the ticks are shown out of sync, like this.
Please help.
How can I make a chart like this?
$('#container').highcharts({
yAxis: {
gridLineWidth:0
},
xAxis: [
{
gridLineWidth: 1,
type: 'datetime',
lineColor: '#ff9c00'
},
{
type: 'datetime',
opposite: true,
lineColor: '#FF6B6B'
}
],
series: [
{
"name": "first",
"data": [
[
1479164400000,
7323
],
[
1479160800000,
6204
],
[
1479157200000,
3561
],
[
1479153600000,
9706
],
[
1479150000000,
2539
],
[
1479146400000,
4570
],
[
1479142800000,
4187
],
[
1479139200000,
3631
],
[
1479135600000,
7512
],
[
1479132000000,
2456
],
[
1479128400000,
6983
],
[
1479124800000,
3511
],
[
1479121200000,
2765
],
[
1479117600000,
3401
],
[
1479114000000,
2565
],
[
1479110400000,
4425
],
[
1479106800000,
4592
],
[
1479103200000,
4328
],
[
1479099600000,
2694
],
[
1479096000000,
2787
],
[
1479092400000,
11633
],
[
1479088800000,
3311
],
[
1479085200000,
2839
],
[
1479081600000,
12620
]
]
},
{
"name": "second",
"data": [
[
1479250800000,
22730
],
[
1479247200000,
10695
],
[
1479243600000,
12017
],
[
1479240000000,
12110
],
[
1479236400000,
9689
],
[
1479232800000,
4288
],
[
1479229200000,
3702
],
[
1479225600000,
5575
],
[
1479222000000,
5694
],
[
1479218400000,
3098
],
[
1479214800000,
9885
],
[
1479211200000,
6587
],
[
1479207600000,
3028
],
[
1479204000000,
3281
],
[
1479200400000,
12577
],
[
1479196800000,
3886
],
[
1479193200000,
4014
],
[
1479189600000,
6553
],
[
1479186000000,
2041
],
[
1479182400000,
4056
],
[
1479178800000,
4223
],
[
1479175200000,
4920
],
[
1479171600000,
5432
],
[
1479168000000,
7857
],
[
1479164400000,
7323
],
[
1479160800000,
6204
],
[
1479157200000,
3561
],
[
1479153600000,
9706
],
[
1479150000000,
2539
],
[
1479146400000,
4570
],
[
1479142800000,
4187
],
[
1479139200000,
3631
],
[
1479135600000,
7512
],
[
1479132000000,
2456
],
[
1479128400000,
6983
],
[
1479124800000,
3511
],
[
1479121200000,
2765
],
[
1479117600000,
3401
],
[
1479114000000,
2565
],
[
1479110400000,
4425
],
[
1479106800000,
4592
],
[
1479103200000,
4328
],
[
1479099600000,
2694
],
[
1479096000000,
2787
],
[
1479092400000,
11633
],
[
1479088800000,
3311
],
[
1479085200000,
2839
],
[
1479081600000,
12620
]
],
"xAxis": 1,
"dashStyle": "shortdot"
}
]
});
#container {
min-width: 1024px;
max-width: 1024px;
height: 300px;
margin: 1em auto;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="http://code.highcharts.com/highcharts.js"></script>
<script src="http://code.highcharts.com/modules/exporting.js"></script>
<div id="container"></div>
My solution is as follows:
Grab the bottom axis tick values
Map the values to pixels
Map the pixels to top axis values
Set the values as the top axis ticks
All the calculation can be done in a tickPositioner.
tickPositioner: function() {
var axisTop = this,
axisBottom = this.chart.xAxis[0],
ticksBottom = axisBottom.tickPositions;
var ticksTop = ticksBottom.map(function(tickValue) {
return axisTop.toValue(axisBottom.toPixels(tickValue));
});
return ticksTop;
},
example: https://jsfiddle.net/439adgpa/
After setting the tick positions in the tick positioner, you have to set the correct label format manually or reuse it from the bottom axis ticks.
labels: {
format: '{value:%H:%M}'
}
or
ticksTop.info = ticksBottom.info;
example: https://jsfiddle.net/439adgpa/1/
Use stacked series over each other, with the yAxes stacked and numbered 0, 1, 2.
Here is an example fiddle:
{
    name: 'First',
    data: data for first,
    zIndex: 1,
    lineWidth: 3,
    color: 'red',
    yAxis: 0,
    marker: {
        enabled: false
    }
}, {
    name: 'second',
    data: data for second,
    lineWidth: 3,
    zIndex: 1,
    yAxis: 1,
    color: '#BE6230',
    marker: {
        enabled: false
    }
}
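The snippet above only shows the series; a matching yAxis array is also needed so that yAxis: 0 and yAxis: 1 resolve to two stacked axes. One possible sketch for the two series shown (the top/height percentages are assumptions, not values taken from the fiddle):
yAxis: [{
    // first axis, index 0: occupies the top half of the plot area
    top: '0%',
    height: '50%',
    title: { text: 'first' }
}, {
    // second axis, index 1: occupies the bottom half
    top: '50%',
    height: '50%',
    offset: 0,
    title: { text: 'second' }
}],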

Is there a better way to write this Postgres JSONB query?

Below is a sample JSONB array. I'm trying to figure out how to write a query that doesn't require a cross product like this.
select b.id from brand b,jsonb_array_elements (b.tree) a where a#>>'{Name}' = 'Skiing';
Bonus points for helping me translate this to SQLAlchemy.
[
{
"Name": "Snowboarding",
"Order": 1,
"Categories": {
"Jackets": [
22002,
23224
],
"Helmets": [
24920
],
"Freestyle Boards": [
20164
],
"Goggles": [
23169,
23280
],
"Hats": [
22966,
21727
],
"Bindings": [
19265
],
"Gloves": [
20461
],
"Boots": [
26374,
19079,
21765,
22669
],
"Freeride Boards": [
18395,
25505
],
"Pants": [
24143,
20957
]
}
},
{
"Name": "Skiing",
"Order": 2,
"Categories": {
"Jackets": [
22518,
25791,
19972
],
"Pants": [
17516,
23113
],
"Goggles": [
25066,
20996
],
"Helmets": [
24378
],
"Hats": [
20009,
21245
],
"Cross-country Skiing": [
17464
],
"Gloves": [
25822
],
"Boots": [
16616
],
"Poles": [
19280
]
}
},....]
SQL solution first:
SELECT brand.id
FROM brand
WHERE brand.tree #> '[{"Name": "Skiing"}]'::jsonb;
As for the SQLAlchemy version, you can simply use contains in order to generate the SQL statement above:
q = (session.query(Brand.id)
.filter(Brand.tree.contains([{"Name": "Skiing"}]))
)
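For context, a minimal sketch of the mapping this assumes: a Brand class whose tree column is declared with the PostgreSQL JSONB type, so that .contains() compiles to the @> containment operator used in the SQL above. The table and column names follow the question; everything else (primary key type, connection string) is illustrative:
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Brand(Base):
    __tablename__ = 'brand'           # table name as used in the question's query
    id = Column(Integer, primary_key=True)
    tree = Column(JSONB)              # the JSONB array shown in the question

# Illustrative connection string; replace with your own
engine = create_engine('postgresql://user:password@localhost/mydb')
Session = sessionmaker(bind=engine)
session = Session()

# .contains() on a JSONB column emits: brand.tree @> '[{"Name": "Skiing"}]'
q = session.query(Brand.id).filter(Brand.tree.contains([{"Name": "Skiing"}]))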

MongoDB: addToSet not working when adding subdocuments to an array

I am new to MongoDB and I am stuck trying to get unique subdocuments in an array.
A document in my collection looks like this:
{
"PubDate": "1/01/01 00:00",
"Title": "Identification of DNA-Dependent Protein Kinase Catalytic Subunit (DNA-PKcs) as a Novel Target of Bisphenol A",
"Datums": [
{
"evidence_id": "3515620_6",
"evidence": [
"\n\nTo examine the interaction between DNA-PKcs and Ku70/Ku80 more directly, we performed immunoprecipitation (IP) using FLAG-Ku70 or FLAG-Ku80 recombinants, which were expressed in 293T cells after IR-irradiation (Fig. 4B\n ) or UV-irradiation (Fig. 4C\n ). After IR-irradiation, co-precipitation of DNA-PKcs with Ku80 increased compared with that in the non-irradiated controls (Fig. 4B\n lanes 7 and 8)."
],
"map": {
"change": [
{
"Text": "increased"
}
],
"subject": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"treatment": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"assay": [
{
"Text": "copptby"
}
]
}
},
{
"evidence_id": "3515620_6",
"evidence": [
"\n\nTo examine the interaction between DNA-PKcs and Ku70/Ku80 more directly, we performed immunoprecipitation (IP) using FLAG-Ku70 or FLAG-Ku80 recombinants, which were expressed in 293T cells after IR-irradiation (Fig. 4B\n ) or UV-irradiation (Fig. 4C\n ). After IR-irradiation, co-precipitation of DNA-PKcs with Ku80 increased compared with that in the non-irradiated controls (Fig. 4B\n lanes 7 and 8)."
],
"map": {
"change": [
{
"Text": "increased"
}
],
"subject": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"treatment": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"assay": [
{
"Text": "copptby"
}
]
}
},
{
"evidence_id": "3515620_6",
"evidence": [
"\n\nTo examine the interaction between DNA-PKcs and Ku70/Ku80 more directly, we performed immunoprecipitation (IP) using FLAG-Ku70 or FLAG-Ku80 recombinants, which were expressed in 293T cells after IR-irradiation (Fig. 4B\n ) or UV-irradiation (Fig. 4C\n ). After IR-irradiation, co-precipitation of DNA-PKcs with Ku80 increased compared with that in the non-irradiated controls (Fig. 4B\n lanes 7 and 8)."
],
"map": {
"change": [
{
"Text": "increased"
}
],
"subject": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"treatment": [
{
"Entity": {
"strings": [
"dna-pkcs"
],
"uniprotSym": "P78527"
}
}
],
"assay": [
{
"Text": "copptby"
}
]
}
}
],
"Volume": "7",
"FullJournalName": "PLoS ONE",
"Authors": "Ito Y, Ito T, Karasawa S, Enomoto T, Nashimoto A, Hase Y, Sakamoto S, Mimori T, Matsumoto Y, Yamaguchi Y, Handa H",
"Issue": "12",
"Pages": "e50481",
"PMCID": "3515620"
}
In the above example, the "Datums" field has only one subdocument, but usually, the "Datums" field will have around 20-30 subdocuments. I want my MongoDB query to output documents (that satisfy certain criteria), where the "Datums" field will have unique subdocuments in its array. To do that I am using the following MongoDB query:
db.My_Datums.aggregate(
[
{ "$match": {
"Datums":
{
"$elemMatch":
{
"map.treatment.Entity.uniprotSym": { "$in": ["P33981", "P78527"] },
"map.assay.Text": "copptby"
}
}
}},
{ "$project": { "PMCID":1, "Title":1, "PubDate":1, "Volume":1, "Issue":1, "Pages":1, "FullJournalName":1, "Authors":1, "Datums.map.assay.Text":1, "Datums.map.change.Text":1, "Datums.map.subject.Entity.strings":1, "Datums.map.treatment.Entity.uniprotSym":1, "Datums.evidence_id":1, "_id":0 }},
{ "$unwind": "$Datums" },
{ "$match": { "Datums.map.treatment.Entity.uniprotSym": { "$in": ["P33981", "P78527"] }, "Datums.map.assay.Text": "copptby" }},
{ "$group": { "_id": "$PMCID", "Datums": { "$addToSet": "$Datums" }}}
]
#{ allowDiskUse: 1 }
)
But on running the above command, I am getting the below output:
{u'Datums': [{u'evidence_id': u'3515620_6',
u'map': {u'assay': [{u'Text': u'copptby'}],
u'change': [{u'Text': u'increased'}],
u'subject': [{u'Entity': {u'strings': u'dna-pkcs'}}],
u'treatment': [{u'Entity': {u'uniprotSym': u'P78527'}}]}},
{u'evidence_id': u'3515620_6',
u'map': {u'assay': [{u'Text': u'copptby'}],
u'change': [{u'Text': u'increased'}],
u'subject': [{u'Entity': {u'strings': u'dna-pkcs'}}],
u'treatment': [{u'Entity': {u'uniprotSym': u'P78527'}}]}},
{u'evidence_id': u'3515620_6',
u'map': {u'assay': [{u'Text': u'copptby'}],
u'change': [{u'Text': u'increased'}],
u'subject': [{u'Entity': {u'strings': u'dna-pkcs'}}],
u'treatment': [{u'Entity': {u'uniprotSym': u'P78527'}}]}}],
u'_id': u'3515620'}
What I am not understanding is why addToSet is adding duplicate subdocuments to "Datums". Is there any way I can filter out the duplicates? What am I doing wrong in my query? I have searched and read up a lot, but couldn't find any solution. Any MongoDB guru out there who could help this noob? I will be eternally grateful to you!
Thanks in advance!

MongoDB version 2.6 still fails on 2dsphere geoindexing citing inability to extract geokeys and possible malformed geometry

I had been getting the referenced error under version 2.4 with a different lon/lat data set. I upgraded to 2.6 and was able to use a 2dsphere index on that data set. However, with a different data set (excerpt below) I received the old error message when trying to index using 2dsphere. The error message excerpt is as follows:
I am having the same problem, receiving the error "can't extract geokeys from object; malformed geometry?". The object is the following, which validated on GeoJSONLint.com. Also, I am using MongoDB version 2.6.1. This happened to me recently when I was using version 2.4; upgrading to 2.6 solved the problem then, but I am getting the error again with a new GeoJSON data set. Please, if anyone can let me know how to solve this, that would be very helpful.
{"type":"Feature",
"geometry": {
"type":"Polygon",
"coordinates": [[[-80.341942, 25.935059],[-80.341995, 25.936712],[-80.342026, 25.937851],[-80.342026, 25.938393],[-80.34211, 25.939489],[-80.342178, 25.942131],[-80.342308, 25.942329],[-80.342339, 25.94285],[-80.342438, 25.944914],[-80.342445, 25.945118],[-80.342514, 25.947062],[-80.342537, 25.947479],[-80.342613, 25.949152],[-80.342667, 25.950306],[-80.342712, 25.951366],[-80.342773, 25.952713],[-80.342857, 25.954611],[-80.341438, 25.955782],[-80.340202, 25.956839],[-80.339104, 25.957079],[-80.33831, 25.957081],[-80.336006, 25.957088],[-80.333244, 25.957096],[-80.331787, 25.957102],[-80.326721, 25.957117],[-80.326714, 25.957117],[-80.32444, 25.957127],[-80.323616, 25.957129],[-80.322365, 25.957132],[-80.318619, 25.957146],[-80.317375, 25.95715],[-80.316597, 25.957151],[-80.314278, 25.957161],[-80.313507, 25.957163],[-80.3134, 25.957163],[-80.31308, 25.957165],[-80.312973, 25.957167],[-80.312798, 25.957167],[-80.312263, 25.957167],[-80.312088, 25.957167],[-80.311996, 25.957167],[-80.311737, 25.957167],[-80.311646, 25.957169],[-80.311531, 25.957169],[-80.311272, 25.95717],[-80.311188, 25.95717],[-80.311073, 25.95717],[-80.311028, 25.95717],[-80.310913, 25.95717],[-80.310883, 25.95717],[-80.310837, 25.95717],[-80.310829, 25.95717],[-80.310783, 25.95717],[-80.310776, 25.95717],[-80.310738, 25.957169],[-80.310684, 25.957167],[-80.310646, 25.957165],[-80.310616, 25.957165],[-80.310547, 25.957163],[-80.310356, 25.957161],[-80.310287, 25.957159],[-80.310249, 25.957157],[-80.310143, 25.957155],[-80.310104, 25.957155],[-80.302231, 25.95697],[-80.301704, 25.956972],[-80.298981, 25.956896],[-80.301468, 25.956968],[-80.298668, 25.956886],[-80.29528, 25.956772],[-80.295052, 25.95677],[-80.294975, 25.95677],[-80.294975, 25.956905],[-80.294975, 25.956928],[-80.294762, 25.956938],[-80.29454, 25.956947],[-80.294548, 25.956835],[-80.294579, 25.954868],[-80.294579, 25.954702],[-80.294601, 25.953625],[-80.294601, 25.953461],[-80.294571, 25.953079],[-80.294502, 25.95186],[-80.294487, 25.95163],[-80.294472, 25.951212],[-80.294418, 25.949945],[-80.294296, 25.946936],[-80.294281, 25.946581],[-80.294197, 25.944429],[-80.294128, 25.943003],[-80.294098, 25.942413],[-80.29409, 25.94228],[-80.293854, 25.939285],[-80.293846, 25.939144],[-80.293694, 25.936041],[-80.293625, 25.934875],[-80.293549, 25.933006],[-80.293541, 25.932341],[-80.293533, 25.931826],[-80.293472, 25.931252],[-80.293495, 25.93054],[-80.293373, 25.928652],[-80.293343, 25.928129],[-80.293335, 25.927803],[-80.29332, 25.92738],[-80.293228, 25.925432],[-80.295456, 25.925182],[-80.296936, 25.925102],[-80.297676, 25.925114],[-80.297974, 25.925117],[-80.298866, 25.925131],[-80.30027, 25.925121],[-80.300957, 25.925116],[-80.305336, 25.925138],[-80.306831, 25.925156],[-80.307472, 25.925163],[-80.307686, 25.925158],[-80.308235, 25.92514],[-80.30941, 25.925127],[-80.309402, 25.924763],[-80.313126, 25.924776],[-80.318581, 25.924747],[-80.319275, 25.924746],[-80.319557, 25.924723],[-80.320023, 25.924686],[-80.320137, 25.924664],[-80.320526, 25.924587],[-80.320992, 25.924469],[-80.321259, 25.9244],[-80.321419, 25.924341],[-80.321869, 25.92417],[-80.322388, 25.923891],[-80.32254, 25.923805],[-80.322746, 25.923674],[-80.323059, 25.923479],[-80.323067, 25.923475],[-80.323181, 25.923376],[-80.323235, 25.92333],[-80.32373, 25.922907],[-80.324226, 25.922354],[-80.324783, 25.921375],[-80.324966, 25.920954],[-80.32531, 25.921684],[-80.325325, 25.922615],[-80.32534, 25.923359],[-80.32534, 25.923492],[-80.325348, 25.923607],[-80.325356, 25.923817],[-80.325371, 
25.924597],[-80.325401, 25.925245],[-80.325432, 25.925819],[-80.325439, 25.92606],[-80.325462, 25.926447],[-80.325447, 25.926756],[-80.325409, 25.927439],[-80.325562, 25.92762],[-80.325706, 25.927801],[-80.325928, 25.92808],[-80.326218, 25.927971],[-80.326492, 25.927883],[-80.327003, 25.927832],[-80.33033, 25.927818],[-80.333618, 25.927853],[-80.334419, 25.927872],[-80.336128, 25.927896],[-80.336777, 25.927883],[-80.338219, 25.927912],[-80.339294, 25.927912],[-80.33963, 25.927931],[-80.340416, 25.927938],[-80.340919, 25.927923],[-80.341103, 25.927925],[-80.341606, 25.927925],[-80.34166, 25.929146],[-80.341721, 25.930458],[-80.341728, 25.930592],[-80.341766, 25.931606],[-80.341858, 25.933409],[-80.341942, 25.935059]
]]},
"properties":{
"id": "ZIP33015","census_year": 2010,"exposure_year": 2014, "commercial_exposure": 652653581, "residential_exposure": 2299303204, "mobile_home": 0, "tenants": 8631577, "condo_owners": 49223035, "total_exposure": 3009811397}}
When I run db.exposureByZip.ensureIndex({"geometry" : "2dsphere"}) on the above, I get the following error:
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"ok" : 0,
"errmsg" : "Can't extract geo keys from object, malformed geometry?: { _id: ObjectId('53a6be43beeb0a0246ae75bb'), geometry: { type: \"Polygon\", coordinates: [ [ [ -80.341942, 25.935059 ], [ -80.341995, 25.936712 ], [ -80.342026, 25.937851 ], [ -80.342026, 25.938393 ], [ -80.34211000000001, 25.939489 ], [ -80.342178, 25.942131 ], [ -80.342308, 25.942329 ], [ -80.342339, 25.94285 ], [ -80.342438, 25.944914 ], [ -80.342445, 25.945118 ], [ -80.34251399999999, 25.947062 ], [ -80.34253699999999, 25.947479 ], [ -80.342613, 25.949152 ], [ -80.34266700000001, 25.950306 ], [ -80.34271200000001, 25.951366 ], [ -80.34277299999999, 25.952713 ], [ -80.342857, 25.954611 ], [ -80.341438, 25.955782 ], [ -80.34020200000001, 25.956839 ], [ -80.33910400000001, 25.957079 ], [ -80.33831000000001, 25.957081 ], [ -80.336006, 25.957088 ], [ -80.33324399999999, 25.957096 ], [ -80.33178700000001, 25.957102 ], [ -80.32672100000001, 25.957117 ], [ -80.326714, 25.957117 ], [ -80.32444, 25.957127 ], [ -80.323616, 25.957129 ], [ -80.322365, 25.957132 ], [ -80.318619, 25.957146 ], [ -80.317375, 25.95715 ], [ -80.316597, 25.957151 ], [ -80.314278, 25.957161 ], [ -80.313507, 25.957163 ], [ -80.3134, 25.957163 ], [ -80.31308, 25.957165 ], [ -80.312973, 25.957167 ], [ -80.312798, 25.957167 ], [ -80.312263, 25.957167 ], [ -80.312088, 25.957167 ], [ -80.31199599999999, 25.957167 ], [ -80.31173699999999, 25.957167 ], [ -80.311646, 25.957169 ], [ -80.311531, 25.957169 ], [ -80.311272, 25.95717 ], [ -80.311188, 25.95717 ], [ -80.31107299999999, 25.95717 ], [ -80.31102799999999, 25.95717 ], [ -80.310913, 25.95717 ], [ -80.310883, 25.95717 ], [ -80.31083700000001, 25.95717 ], [ -80.310829, 25.95717 ], [ -80.310783, 25.95717 ], [ -80.310776, 25.95717 ], [ -80.310738, 25.957169 ], [ -80.31068399999999, 25.957167 ], [ -80.31064600000001, 25.957165 ], [ -80.310616, 25.957165 ], [ -80.310547, 25.957163 ], [ -80.310356, 25.957161 ], [ -80.310287, 25.957159 ], [ -80.310249, 25.957157 ], [ -80.310143, 25.957155 ], [ -80.310104, 25.957155 ], [ -80.30223100000001, 25.95697 ], [ -80.301704, 25.956972 ], [ -80.298981, 25.956896 ], [ -80.301468, 25.956968 ], [ -80.29866800000001, 25.956886 ], [ -80.29528000000001, 25.956772 ], [ -80.295052, 25.95677 ], [ -80.29497499999999, 25.95677 ], [ -80.29497499999999, 25.956905 ], [ -80.29497499999999, 25.956928 ], [ -80.29476200000001, 25.956938 ], [ -80.29454, 25.956947 ], [ -80.29454800000001, 25.956835 ], [ -80.294579, 25.954868 ], [ -80.294579, 25.954702 ], [ -80.294601, 25.953625 ], [ -80.294601, 25.953461 ], [ -80.294571, 25.953079 ], [ -80.29450199999999, 25.95186 ], [ -80.294487, 25.95163 ], [ -80.294472, 25.951212 ], [ -80.29441799999999, 25.949945 ], [ -80.294296, 25.946936 ], [ -80.294281, 25.946581 ], [ -80.294197, 25.944429 ], [ -80.294128, 25.943003 ], [ -80.29409800000001, 25.942413 ], [ -80.29409, 25.94228 ], [ -80.293854, 25.939285 ], [ -80.293846, 25.939144 ], [ -80.293694, 25.936041 ], [ -80.29362500000001, 25.934875 ], [ -80.293549, 25.933006 ], [ -80.293541, 25.932341 ], [ -80.293533, 25.931826 ], [ -80.29347199999999, 25.931252 ], [ -80.29349499999999, 25.93054 ], [ -80.293373, 25.928652 ], [ -80.29334299999999, 25.928129 ], [ -80.293335, 25.927803 ], [ -80.29331999999999, 25.92738 ], [ -80.293228, 25.925432 ], [ -80.295456, 25.925182 ], [ -80.296936, 25.925102 ], [ -80.297676, 25.925114 ], [ -80.297974, 25.925117 ], [ -80.298866, 25.925131 ], [ -80.30027, 25.925121 ], [ -80.300957, 25.925116 ], [ -80.305336, 25.925138 ], [ -80.306831, 25.925156 ], [ -80.307472, 25.925163 ], [ 
-80.307686, 25.925158 ], [ -80.308235, 25.92514 ], [ -80.30941, 25.925127 ], [ -80.30940200000001, 25.924763 ], [ -80.313126, 25.924776 ], [ -80.31858099999999, 25.924747 ], [ -80.319275, 25.924746 ], [ -80.319557, 25.924723 ], [ -80.32002300000001, 25.924686 ], [ -80.320137, 25.924664 ], [ -80.320526, 25.924587 ], [ -80.320992, 25.924469 ], [ -80.321259, 25.9244 ], [ -80.32141900000001, 25.924341 ], [ -80.32186900000001, 25.92417 ], [ -80.322388, 25.923891 ], [ -80.32254, 25.923805 ], [ -80.322746, 25.923674 ], [ -80.323059, 25.923479 ], [ -80.32306699999999, 25.923475 ], [ -80.32318100000001, 25.923376 ], [ -80.323235, 25.92333 ], [ -80.32373, 25.922907 ], [ -80.324226, 25.922354 ], [ -80.324783, 25.921375 ], [ -80.324966, 25.920954 ], [ -80.32531, 25.921684 ], [ -80.32532500000001, 25.922615 ], [ -80.32534, 25.923359 ], [ -80.32534, 25.923492 ], [ -80.32534800000001, 25.923607 ], [ -80.325356, 25.923817 ], [ -80.325371, 25.924597 ], [ -80.325401, 25.925245 ], [ -80.32543200000001, 25.925819 ], [ -80.325439, 25.92606 ], [ -80.325462, 25.926447 ], [ -80.325447, 25.926756 ], [ -80.32540899999999, 25.927439 ], [ -80.32556200000001, 25.92762 ], [ -80.325706, 25.927801 ], [ -80.325928, 25.92808 ], [ -80.326218, 25.927971 ], [ -80.326492, 25.927883 ], [ -80.327003, 25.927832 ], [ -80.33033, 25.927818 ], [ -80.333618, 25.927853 ], [ -80.334419, 25.927872 ], [ -80.336128, 25.927896 ], [ -80.336777, 25.927883 ], [ -80.338219, 25.927912 ], [ -80.339294, 25.927912 ], [ -80.33963, 25.927931 ], [ -80.340416, 25.927938 ], [ -80.340919, 25.927923 ], [ -80.341103, 25.927925 ], [ -80.341606, 25.927925 ], [ -80.34166, 25.929146 ], [ -80.34172100000001, 25.930458 ], [ -80.341728, 25.930592 ], [ -80.34176600000001, 25.931606 ], [ -80.341858, 25.933409 ], [ -80.341942, 25.935059 ] ] ] }, type: \"Feature\", properties: { commercial_exposure: 652653581, total_exposure: 3009811397, condo_owners: 49223035, mobile_home: 0, census_year: 2010, tenants: 8631577, residential_exposure: 2299303204, exposure_year: 2014, id: \"ZIP33015\" } }",
"code" : 16755
}
There is a bug posted for this problem: https://jira.mongodb.org/browse/SERVER-13735 ("Malformed geometry" error when creating a geosphere index for a valid MultiPolygon).
Similar issue, a valid GeoJSON is being submitted for a 2dsphere index and is causing the "Can't extract geo keys from object, malformed geometry?" error.
There have been a series of previous bugs where valid GeoJSON was causing this error message. They've fixed those in 2.6.x but this appears to be a new one. I'd chime in on JIRA as it will help get it prioritized for a fix.
Maybe the problem is that your polygon coordinates have the first and last coordinate as the same value. Try removing the last coordinate point from your polygons.