How do I store a raw GeoJSON MultiPolygon in MongoDB using Spring Data - spring-data

How do I easily convert a raw GeoJSON MultiPolygon Feature to org.springframework.data.mongodb.core.geo.GeoJsonMultiPolygon so I can save it into MongoDB using Spring Data (mongoTemplate)? I need to keep the holes of the MultiPolygon.
Example GeoJSON:
https://gist.github.com/boundaries-io/978eaa4a10df9467638a5eb9259c84e6
Essentially, I need to convert org.geojson.MultiPolygon to org.springframework.data.mongodb.core.geo.GeoJsonMultiPolygon.
Currently the code below works, and I can save the MultiPolygon to MongoDB:
org.geojson.MultiPolygon multiPolygon = ...;
ObjectMapper mapper = new ObjectMapper();
StringWriter writer = new StringWriter();
org.geojson.Feature feature = new org.geojson.Feature();
feature.setGeometry(multiPolygon);
mapper.writeValue(writer, feature);
String geoJson = writer.getBuffer().toString();
Document document = Document.parse(geoJson);
Object obj = document.get("geometry");
Place place = new Place();
place.setMultiPolygon(obj);
This allows me to do geospatial searching on a MultiPolygon that contains holes, etc. I feel this isn't the cleanest way to do it, though.

The solution I ended up going with is below; there seems to be no way to cleanly create a GeoJsonMultiPolygon from an org.geojson.MultiPolygon.
org.geojson.MultiPolygon multiPolygon = ...; // a complicated MultiPolygon with holes, etc.
ObjectMapper mapper = new ObjectMapper();
StringWriter writer = new StringWriter();
org.geojson.Feature feature = new org.geojson.Feature();
feature.setGeometry(multiPolygon);
mapper.writeValue(writer, feature);
String geoJson = writer.getBuffer().toString();
Document document = Document.parse(geoJson);
Object obj = document.get("geometry");
Place place = new Place();
place.setMultiPolygon(obj);
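If you'd rather skip the JSON round trip entirely, a manual conversion along these lines should also work. This is only a sketch: it assumes GeoJsonPolygon.withInnerRing(List<Point>) is available in your Spring Data MongoDB version, and it treats the first ring of each polygon as the exterior and the rest as holes, per the GeoJSON spec:

import java.util.ArrayList;
import java.util.List;
import org.geojson.LngLatAlt;
import org.springframework.data.geo.Point;
import org.springframework.data.mongodb.core.geo.GeoJsonMultiPolygon;
import org.springframework.data.mongodb.core.geo.GeoJsonPolygon;

public static GeoJsonMultiPolygon toSpringMultiPolygon(org.geojson.MultiPolygon source) {
    List<GeoJsonPolygon> polygons = new ArrayList<>();
    for (List<List<LngLatAlt>> rings : source.getCoordinates()) {
        // the first ring is the exterior boundary
        GeoJsonPolygon polygon = new GeoJsonPolygon(toPoints(rings.get(0)));
        for (int i = 1; i < rings.size(); i++) {
            // remaining rings are holes; withInnerRing returns a new polygon
            polygon = polygon.withInnerRing(toPoints(rings.get(i)));
        }
        polygons.add(polygon);
    }
    return new GeoJsonMultiPolygon(polygons);
}

private static List<Point> toPoints(List<LngLatAlt> ring) {
    List<Point> points = new ArrayList<>();
    for (LngLatAlt coordinate : ring) {
        points.add(new Point(coordinate.getLongitude(), coordinate.getLatitude()));
    }
    return points;
}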


iText7: com.itextpdf.kernel.PdfException: Dictionary doesn't have supported font data

I am trying to generate a TOC (table of contents) for my PDF, and I want to extract some strings that look like chapter titles from xxx.pdf using ITextExtractionStrategy. But I get a com.itextpdf.kernel.PdfException when running a test.
Here is my code:
@org.junit.Test
public void test() throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    PdfDocument pdfDoc = new PdfDocument(new PdfReader("src/test/resources/template/xxx.pdf"),
            new PdfWriter(baos));
    pdfDoc.addNewPage(1);
    Document document = new Document(pdfDoc);
    // when I add this code, it throws com.itextpdf.kernel.PdfException: Dictionary doesn't have supported font data.
    Paragraph title = new Paragraph(new Text("index"))
            .setTextAlignment(TextAlignment.CENTER);
    document.add(title);
    SimpleTextExtractionStrategy extractionStrategy = new SimpleTextExtractionStrategy();
    for (int i = 1; i < pdfDoc.getNumberOfPages(); i++) {
        PdfPage page = pdfDoc.getPage(i);
        PdfCanvasProcessor parser = new PdfCanvasProcessor(extractionStrategy);
        parser.processPageContent(page);
    }
    ...
    document.close();
    pdfDoc.close();
    new FileOutputStream("./yyy.pdf").write(baos.toByteArray());
}
Here is the output:
com.itextpdf.kernel.PdfException: Dictionary doesn't have supported font data.
at com.itextpdf.kernel.font.PdfFontFactory.createFont(PdfFontFactory.java:123)
at com.itextpdf.kernel.pdf.canvas.parser.PdfCanvasProcessor.getFont(PdfCanvasProcessor.java:490)
at com.itextpdf.kernel.pdf.canvas.parser.PdfCanvasProcessor$SetTextFontOperator.invoke(PdfCanvasProcessor.java:811)
at com.itextpdf.kernel.pdf.canvas.parser.PdfCanvasProcessor.invokeOperator(PdfCanvasProcessor.java:454)
at com.itextpdf.kernel.pdf.canvas.parser.PdfCanvasProcessor.processContent(PdfCanvasProcessor.java:282)
at com.itextpdf.kernel.pdf.canvas.parser.PdfCanvasProcessor.processPageContent(PdfCanvasProcessor.java:303)
at com.example.pdf.util.Test.test(Test.java:138)
Whenever you add content to a PdfDocument like you do here
Document document = new Document(pdfDoc);
Paragraph title = new Paragraph(new Text("index"))
.setTextAlignment(TextAlignment.CENTER);
document.add(title);
you have to be aware that this content is not yet stored in its final form; for example, fonts used are not yet properly subsetted. The final form is generated when you close the document.
Text extraction, on the other hand, requires the content being extracted to be in its final form.
Thus, you should not apply text extraction to a document you're still working on. In particular, don't apply text extraction to a page whose content you've changed.
If you need to extract text from the documents you create yourself, close your document first, open a new document from the output, and extract from that new document.
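As a sketch of that flow, reusing the names from the question (write and close first, then re-open the finished bytes for extraction):

ByteArrayOutputStream baos = new ByteArrayOutputStream();
PdfDocument pdfDoc = new PdfDocument(new PdfReader("src/test/resources/template/xxx.pdf"),
        new PdfWriter(baos));
Document document = new Document(pdfDoc);
document.add(new Paragraph(new Text("index")).setTextAlignment(TextAlignment.CENTER));
document.close(); // finalizes the content, including font subsetting

// re-open the finished output for text extraction
PdfDocument readDoc = new PdfDocument(new PdfReader(new ByteArrayInputStream(baos.toByteArray())));
for (int i = 1; i <= readDoc.getNumberOfPages(); i++) {
    SimpleTextExtractionStrategy strategy = new SimpleTextExtractionStrategy();
    new PdfCanvasProcessor(strategy).processPageContent(readDoc.getPage(i));
    String pageText = strategy.getResultantText(); // scan this for chapter titles
}
readDoc.close();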

Kafka streams: adding dynamic fields at runtime to avro record

I want to implement a configurable Kafka stream which reads a row of data and applies a list of transforms, like applying functions to the fields of the record, renaming fields, etc. The stream should be completely configurable, so I can specify which transforms should be applied to which field. I'm using Avro to encode the data as GenericRecords.

My problem is that I also need transforms which create new columns. Instead of overwriting the previous value of a field, they should append a new field to the record, which means the schema of the record changes. The solution I came up with so far is to iterate over the list of transforms first to figure out which fields I need to add to the schema, and then create a new schema with the old fields and new fields combined.

The list of transforms (there is always a source field which gets passed to the transform method, and the result is then written back to the targetField):
val transforms: List[Transform] = List(
  FieldTransform(field = "referrer", targetField = "referrer", method = "mask"),
  FieldTransform(field = "name", targetField = "name_clean", method = "replaceUmlauts")
)

case class FieldTransform(field: String, targetField: String, method: String)
The method to create the new schema, based on the old schema and the list of transforms:
def getExtendedSchema(schema: Schema, transforms: List[Transform]): Schema = {
  val newSchema = SchemaBuilder
    .builder(schema.getNamespace)
    .record(schema.getName)
    .fields()
  // create new schema with existing fields from schemas and new fields which are created through transforms
  val fields = schema.getFields ++ getNewFields(schema, transforms)
  fields.foldLeft(newSchema)((newSchema, field: Schema.Field) => {
    newSchema
      .name(field.name)
      .`type`(field.schema())
      .noDefault()
    // TODO: find way to differentiate between explicitly set null defaults and fields which have no default
    //.withDefault(field.defaultValue())
  })
  newSchema.endRecord()
}
def getNewFields(schema: Schema, transforms: List[Transform]): List[Schema.Field] = {
  transforms
    .filter { // only select targetFields which are not in schema
      case FieldTransform(field, targetField, method) => schema.getField(targetField) == null
      case _ => false
    }
    .distinct
    .map { // create new Field object for each targetField
      case FieldTransform(field, targetField, method) =>
        val sourceField = schema.getField(field)
        new Schema.Field(targetField, sourceField.schema(), sourceField.doc(), sourceField.defaultValue())
    }
}
Instantiating a new GenericRecord based on an old record:
val extendedSchema = getExtendedSchema(row.getSchema, transforms)
val extendedRow = new GenericData.Record(extendedSchema)
for (field <- row.getSchema.getFields) {
  extendedRow.put(field.name, row.get(field.name))
}
I tried to look for other solutions but couldn't find any example that deals with changing data types. It feels to me like there must be a simpler, cleaner way to handle changing Avro schemas at runtime. Any ideas are appreciated.
Thanks,
Paul
I have implemented passing dynamic values to an Avro schema and validating the resulting record against it. Example:
RestTemplate template = new RestTemplate();
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
HttpEntity<String> entity = new HttpEntity<String>(headers);

// fetch the Avro schema for the given subject/version from the schema registry
ResponseEntity<String> response = template.exchange(
        registryUrl + "/subjects/" + topic + "/versions/" + version,
        HttpMethod.GET, entity, String.class);
String responseData = response.getBody();
JSONObject jsonObject = new JSONObject(responseData);
JSONObject jsonObjectResult = new JSONObject(jsonResult); // jsonResult: the JSON string you pass in, e.g. from Postman
String getData = jsonObject.get("schema").toString();

Schema.Parser parser = new Schema.Parser();
Schema schema = parser.parse(getData);

// populate a GenericRecord from the JSON payload and validate it against the schema
GenericRecord genericRecord = new GenericData.Record(schema);
schema.getFields().stream().forEach(field ->
        genericRecord.put(field.name(), jsonObjectResult.get(field.name())));

GenericDatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema);
boolean valid = reader.getData().validate(schema, genericRecord);
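For the schema-extension part of the question, here is a minimal Java sketch of the same idea: append one optional string field with Avro's SchemaBuilder and copy the old values across. The names are illustrative, not from the original code:

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public static GenericRecord withExtraField(GenericRecord row, String newField) {
    Schema old = row.getSchema();
    SchemaBuilder.FieldAssembler<Schema> assembler = SchemaBuilder
            .record(old.getName())
            .namespace(old.getNamespace())
            .fields();
    // copy the existing fields as-is
    for (Schema.Field f : old.getFields()) {
        assembler = assembler.name(f.name()).type(f.schema()).noDefault();
    }
    // append the new field as an optional string (union of null and string, default null)
    Schema extended = assembler.optionalString(newField).endRecord();

    GenericRecord out = new GenericData.Record(extended);
    for (Schema.Field f : old.getFields()) {
        out.put(f.name(), row.get(f.name()));
    }
    return out; // caller fills in newField with the transform result
}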

Cascade save on OrientDB Document API when using embedded types

Given the following test:
// setup
OClass driver = getDatabase().getMetadata().getSchema().createClass(DRIVER);
OClass car = getDatabase().getMetadata().getSchema().createClass(CAR);
car.createProperty(DRIVERS, OType.EMBEDDEDLIST, driver);
OClass team = getDatabase().getMetadata().getSchema().createClass(TEAM);
team.createProperty(CARS, OType.EMBEDDEDSET, car);
// exercise
ODocument alonso = new ODocument(DRIVER).field("name", "Fernando Alonso").field("nationality", "Spanish")
        .field("yearOfBirth", 1981);
ODocument button = new ODocument(DRIVER).field("name", "Jenson Button").field("nationality", "british")
        .field("yearOfBirth", 1980);
ODocument mp30 = new ODocument(CAR).field(DRIVERS, Arrays.asList(new ODocument[] { alonso, button }));
Set<ODocument> cars = new HashSet<>();
cars.add(mp30);
ODocument mclarenF1Team = new ODocument(TEAM).field(CARS, cars);
mclarenF1Team.save();
// verify
assertEquals(1, getDatabase().countClass(TEAM));
assertEquals(1, getDatabase().countClass(CAR));
assertEquals(2, getDatabase().countClass(DRIVER));
The second assertion fails:
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at foo.orientdb.dataaccessapi.StoreJSonIT.testSchemaFull(StoreJSonIT.java:68)
Why does it fail?
The properties DRIVERS and CARS are created as an embedded list and an embedded set; shouldn't a single save on mclarenF1Team cascade the save to the embedded documents?
Embedded List/Set means that the documents you create will be embedded (saved) inside the parent document, not in their own class/cluster.
If you want the documents stored (and counted) in their own classes, you should use links instead; a sketch follows below.
See here:
http://orientdb.com/docs/2.1/Concepts.html#relationships
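For reference, a link-based variant of the setup above might look like this (only a sketch, adapted from the test in the question; with LINK types each document must be saved explicitly, after which the three count assertions should pass):

// schema: LINK types instead of EMBEDDED ones
OClass driver = getDatabase().getMetadata().getSchema().createClass(DRIVER);
OClass car = getDatabase().getMetadata().getSchema().createClass(CAR);
car.createProperty(DRIVERS, OType.LINKLIST, driver);
OClass team = getDatabase().getMetadata().getSchema().createClass(TEAM);
team.createProperty(CARS, OType.LINKSET, car);

// each linked document lives in its own class/cluster and is saved explicitly
ODocument alonso = new ODocument(DRIVER).field("name", "Fernando Alonso");
alonso.save();
ODocument button = new ODocument(DRIVER).field("name", "Jenson Button");
button.save();
ODocument mp30 = new ODocument(CAR).field(DRIVERS, Arrays.asList(alonso, button));
mp30.save();
ODocument mclarenF1Team = new ODocument(TEAM).field(CARS, new HashSet<>(Arrays.asList(mp30)));
mclarenF1Team.save();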

salesforce mobile sdk create related object

I have a Sample_Product__c custom object and want to create a record that references a related object via its external ID field:
final Map<String, Object> prod = new HashMap<String, Object>();
prod.put("External_ID__c", obj.getString("medicineName"));
final Map<String, Object> sample = new HashMap<String, Object>();
sample.put("Product__r", prod);
sample.put("LotNumber__c", obj.getString("medicineSerialNo"));
sample.put("GivenDate__c", format.format(givenDate));
sample.put("ExpiredDate__c", format.format(expireDate));
sample.put("GivenNumber__c", obj.getString("medicineQuantity"));
sample.put("Call__c", callid);
RestRequest.getRequestForCreate(getString(R.string.api_version), "Sample_Product__c", sample);
This gives the error that Product__r should be SObject...
In PHP, I was doing this like so:
$sfobject = new stdclass();
$sfobject->External_ID__c = "...";
$sample->Product__r = $sfobject;
But I have no idea how to achieve this with the Salesforce Mobile SDK... hope someone knows, thanks!
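One thing worth trying (an untested sketch, reusing the maps from the question): the Salesforce REST API generally expects nested sObjects to carry an attributes map naming their type, so adding one to the relationship map may resolve the "should be SObject" error. Product__c is assumed here to be the API name of the related object:

final Map<String, Object> attributes = new HashMap<String, Object>();
attributes.put("type", "Product__c"); // assumed API name of the related sObject
final Map<String, Object> prod = new HashMap<String, Object>();
prod.put("attributes", attributes);
prod.put("External_ID__c", obj.getString("medicineName"));
sample.put("Product__r", prod);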

Is there any other way to fill a DataTable in ADO.NET without using the DataAdapter.Fill method?

Is there any other fast way to fill a DataTable in ADO.NET without using the DataAdapter.Fill method?
Yes, you can. Here is a short example:
var results = new DataTable();
using (var connection = new SqlConnection(...))
using (var command = connection.CreateCommand())
{
    command.CommandText = "sql statement";
    var parameter = command.CreateParameter();
    parameter.ParameterName = "@name";
    parameter.Value = aValue;
    command.Parameters.Add(parameter);
    connection.Open();
    results.Load(command.ExecuteReader());
}
return results;
If you just need to create a data table, e.g. for storing data not coming from a database, then you can create a new DataTable and populate it yourself, like this:
var x = new DataTable("myTable");
x.Columns.Add("Field1", typeof(string));
x.Columns.Add("Field2", typeof(string));
x.Columns.Add("Field3", typeof(int));
x.Rows.Add("fred", "hugo", 1);
x.Rows.Add("fred", "hugo", 2);
x.Rows.Add("fred", "hugo", 3);
You can manually create DataTables and their content using the methods of the various types involved.
This does take a little code but is possible (I've done it from a custom serialisation in .NET 1.1 to populate a data source for a control that needed a DataSet).
[A more specific answer would really require knowing why you are interested in this.]