Create objects with values from a JSON file - Scala

I have some object constructors, e.g.:
AM(power: String, speed: String, Height: String, position: PlayerPosition)
Constructor2(motivation: String, description: String, age: Int)
Then I have a JSON file that holds the data needed for all the constructors.
Is there a way, or some library, that allows me to parse the contents of the file so that I can use it for constructing the objects, e.g.:
AM(jsonParser.power, jsonParser.speed, jsonParser.Height, jsonParser.position)
I have multiple JSON files and the contents do not always have the same structure, so I was hoping I could use a parser and access the data as key/value pairs.
I am quite new to Scala. I know that in Ruby there are ways this can be achieved easily, and I was hoping the same can be done here.
So if my file were JSON like:
{
  "power": "25",
  "speed": "65",
  "description": "hello"
}
I would be able to do data = jsonParse(jsonFile),
and then data.speed would equal "65".

I would introduce an intermediate format that parses the JSON into a specific case class, and then map that to the required format.
Every solution depends a bit on the library you use.
I could add an example for play-json, if you use this library.
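For illustration, here is a minimal sketch of that play-json approach. The RawData case class, its field set, and the jsonString value are assumptions mirroring the sample file above, not a fixed API:
import play.api.libs.json._

// Sample file contents from the question.
val jsonString = """{ "power": "25", "speed": "65", "description": "hello" }"""

// Intermediate format mirroring the sample file. All fields are Options
// because the files do not always share the same structure.
case class RawData(power: Option[String],
                   speed: Option[String],
                   description: Option[String])

object RawData {
  // Derives a Reads from the field names, so the JSON keys must match them.
  implicit val reads: Reads[RawData] = Json.reads[RawData]
}

val data = Json.parse(jsonString).as[RawData]
// data.speed == Some("65"), data.power == Some("25")

// Alternatively, look up single keys dynamically, without a case class:
val speed: Option[String] = (Json.parse(jsonString) \ "speed").asOpt[String]
From RawData (or from the dynamic lookups) you can then build AM or Constructor2 as needed.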

Related

Saving an array in Mongoose

I need help with arrays in Mongoose. Here is how my schema looks:
const alertsSchema = new mongoose.Schema({
  Alert_Date: String,
  Alert_StartTime: String,
  Alert_EndTime: String,
  Alert_RuleName: String,
  Alert_RuleId: String,
  Alert_EntryNumber: String,
  Alert_AlertId: String,
  Alert_Description: String,
  Alert_TriggerTime: String,
  Alert_Activities: [{
    Alert_Activities_ActivityType: String,
    Alert_Activities_ActivityTime: String,
    Alert_Activities_AreaName: String,
    Alert_Activities_AreaType: String,
    Alert_Activities_Position: Array,
    Alert_Activities_Duration: Number,
    Alert_Activities_SecondVesselId: String,
    Alert_Activities_SecondVesselName: String,
    Alert_Activities_SecondVesselClassCalc: String,
    Alert_Activities_SecondVesselSize: String,
    Alert_Activities_SecondVesselMMSI: String,
    Alert_Activities_SecondVesselIMO: String,
  }],
})
Alert_Activities is an array coming from my upstream Node.js application. I implemented an fswatch-based watcher, and as soon as a particular file changes I want to save the record in my collection. The upstream file will always contain an array, generally of around 4 to 5 records on average. In short, Alert_Activities will be present for every element of the array.
I am running a for loop and trying to save all the elements into my collection in one go. myObject is the full array read from the upstream file using fs.read:
for (let i = 0; i < myObject.length; i++) {
  var newAlertData = new alertRegister({
    Alert_Date: date1,
    Alert_StartTime: startNotificationDate,
    Alert_EndTime: endNotificationDate,
    Alert_RuleName: myObject[i].ruleName,
    Alert_RuleId: myObject[i].ruleId,
    Alert_EntryNumber: myObject[i].entryNumber,
    Alert_AlertId: myObject[i].alertId,
    Alert_Description: myObject[i].description,
    Alert_TriggerTime: myObject[i].triggerTime,
    // Alert_Activities: myObject[i].activities,
  });
  newAlertData.save(function (err, data1) {
    if (err) {
      console.log(err)
    } else {
      console.log("data saved")
    }
  })
}
Alert_Activities will obviously not be saved, since it is commented out above. What is the right way to do this in Mongoose?
If you are dealing with Mongoose documents, not with .lean() JavaScript objects, you will probably need to mark the array field(s) as modified via markModified and only then call .save().
Also, directModifiedPaths can help you check which fields have been modified in the document.
I have noticed that your Alert_Activities is actually an array of objects, so make sure that the resulting Mongoose documents you are trying to .save() really satisfy all the validation rules.
If changes to some fields save successfully but others don't, then something is definitely wrong with the field names or validation, and the DB/Mongoose doesn't raise an error because the document has already been saved, if only partially.

Conversion from XML to JSON removes a leading 0 in Azure Data Factory

I am converting XML files to JSON (gzip compression) using Azure Data Factory.
However, I observe that in the XML file I have values stored as 0123456789, but when this is converted to JSON it is saved as "value": 123456789, without the leading 0.
I would like to keep the JSON values as-is from the XML. Please provide suggestions.
I recently found that using a data flow will solve the problem.
I created a simple test. My XML file is as follows:
<?xml version="1.0"?>
<note>
  <to>George</to>
  <from>John</from>
  <heading>Reminder</heading>
  <body>Don't forget the meeting!</body>
  <number>0123456789</number>
</note>
Set the XML file as the source data, and please don't import the projection.
By default, all columns will then be treated as string types, so the data preview shows the number with its leading zero intact.
Set the JSON file as the sink: select Output to single file and specify the file name.
The debug result then shows the leading zero preserved in the JSON output.
That's all.
I've found that you can change the projection mapping using the Script Editor.
So you can keep your projection by importing, during a debug session, the sample you are using.
Once your projection is done, you can edit the script to change the type of your tag.
Select the Script Editor button.
You should have a script that looks like this:
source(output(
    note as (to as string, from as string, heading as string, body as string, number as integer)
  ),
  allowSchemaDrift: true,
  validateSchema: false,
  limit: 100,
  ignoreNoFilesFound: false,
  rowUrlColumn: 'fileName',
  validationMode: 'none',
  namespaces: true) ~> XmlFiles
You can then change the type of your "number" tag to string:
source(output(
    note as (to as string, from as string, heading as string, body as string, number as string)
  ),
  allowSchemaDrift: true,
  validateSchema: false,
  limit: 100,
  ignoreNoFilesFound: false,
  rowUrlColumn: 'fileName',
  validationMode: 'none',
  namespaces: true) ~> XmlFiles

Deserialise JSON to polymorphic types based on a type field

I am using lift-json 2.6 and Scala 2.11.
I want to deserialise a JSON string representing a Map of 'sensors' to case classes (I don't care about serialisation back to JSON at all):
case class TemperatureSensor(
  name: String, sensorType: String, state: TemperatureState)

case class TemperatureState(
  on: Boolean, temperature: Float)

case class LightSensor(
  name: String, sensorType: String, state: LightState)

case class LightState(
  on: Boolean, daylight: Boolean)
What I have here are some common fields in each sensor class, plus a type-dependent state field, discriminated by the sensorType property.
The idea is that I invoke a web service and get a map of sensor information back; this can be any number of different sensors of any type. I know the set of possible types in advance, but I do not know in advance which particular sensors will be returned.
The JSON looks like this:
{
  "1": {
    "name": "Temp1",
    "sensorType": "Temperature",
    "state": {
      "on": true,
      "temperature": 19.4
    }
  },
  "2": {
    "name": "Day",
    "sensorType": "Daylight",
    "state": {
      "on": true,
      "daylight": false
    }
  }
}
(The real data has many more fields; the above case classes and JSON are a cut-down version.)
To consume the JSON I start with:
val map = parse(jsonString).extract[Map[String,Sensor]]
This works when I omit the state fields of course.
How can the extraction process be told which type of state to choose at run-time, based on the value of the sensorType field? Or do I have to write a custom deserialiser?
This question relates specifically to lift-json, not any other JSON library.
Unfortunately, I have not used lift-json... But I recently tackled the same problem using play-json. Perhaps some of what I have done could be useful to you as well.
See my github page for code: DiscriminatedCombinators.scala
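The core idea there is to inspect the discriminator field first and then delegate to the Reads for the concrete type. Here is a rough play-json sketch of that idea (this is not the linked code; the Sensor trait and the "Daylight" mapping to LightSensor are assumptions based on the question's JSON):
import play.api.libs.json._

sealed trait Sensor
case class TemperatureState(on: Boolean, temperature: Float)
case class LightState(on: Boolean, daylight: Boolean)
case class TemperatureSensor(name: String, sensorType: String, state: TemperatureState) extends Sensor
case class LightSensor(name: String, sensorType: String, state: LightState) extends Sensor

object Sensor {
  implicit val temperatureStateReads: Reads[TemperatureState]   = Json.reads[TemperatureState]
  implicit val lightStateReads: Reads[LightState]               = Json.reads[LightState]
  implicit val temperatureSensorReads: Reads[TemperatureSensor] = Json.reads[TemperatureSensor]
  implicit val lightSensorReads: Reads[LightSensor]             = Json.reads[LightSensor]

  // Look at the discriminator first, then delegate to the matching Reads.
  implicit val sensorReads: Reads[Sensor] = Reads { json =>
    (json \ "sensorType").validate[String].flatMap {
      case "Temperature" => json.validate[TemperatureSensor]
      case "Daylight"    => json.validate[LightSensor]
      case other         => JsError(s"unknown sensorType: $other")
    }
  }
}

// With sensorReads in scope, the whole map extracts in one call:
// val sensors = Json.parse(jsonString).as[Map[String, Sensor]]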

Swagger response class Map

I have a REST API that returns, essentially, a Map of (String, Object), where Object is either:
a custom bean (let's say class Bean), or
a List of elements, all of type Bean.
In JSON, this translates very well to:
{
  "key1": {
    "val1": "some string",
    "val2": "some other string",
    "val3": "another string"
  },
  "key2": [
    {
      "val1": "some string",
      "val2": "some other string",
      "val3": "another string"
    },
    {
      "val1": "some string",
      "val2": "some other string",
      "val3": "another string"
    }
  ]
}
Via Swagger annotations, is there a way to specify this kind of dynamic Map as the response class?
Thanks
I read the 'Open API Specification' - 'Add support for Map data types #38' page. As far as I understand, it recommends using additionalProperties, but I haven't managed to make it work with Swagger UI 2.1.4 (see my related question: Swagger: map of string, Object).
I have found the following workaround: define an object with one property per key, and an inner object as the value of that "key" property.
The display in Swagger UI is correct, but one does not see that it is a map, so you then need to explain in the description that this is actually a map.
In your case, I find it a bit weird to have a single Bean in one place and a list of Beans in another: I would find it more logical to have an array of one Bean in the first case.
Still, you could do, for example:
your_property: {
  description: "This is a map that can contain several objects indexed by different keys. The value can be either a Bean or a list of Beans.",
  type: object,
  properties: {
    key_for_single_bean: {
      description: "The value associated to 'key_for_single_bean' is a single Bean",
      $ref: "#/definitions/Bean"
    },
    key_for_list_of_beans: {
      description: "The value associated to 'key_for_list_of_beans' is an array of Beans",
      type: array,
      items: {$ref: "#/definitions/Bean"}
    }
  }
}

Does Mongoose only support embedded documents in arrays?

I have some data in MongoDB that looks like this:
{
  name: "Steve",
  location: {
    city: "Nowhere, IL",
    country: "The United States of Awesome"
  }
}
I’m using objects to organize common data structures (like locations), which in Mongoose might map nicely to Schemas. Unfortunately, they don't appear to really work in Mongoose.
If I just embed an object, like this:
{
  name: String,
  location: {
    city: String,
    country: String
  }
}
It appears to work, but exhibits some bizarre behavior that causes problems for me (e.g. instance.location.location returns location, and subobjects inherit methods from the parent schema). I started a thread on the Mongoose list, but it hasn’t seen any action.
If I embed a Schema, like this:
{
  name: String,
  location: new Schema({
    city: String,
    country: String
  })
}
…my application doesn’t start (Schema isn’t a type supported by Mongoose). Ditto for
{
  name: String,
  location: Object
}
…which wouldn’t be ideal, anyway.
Am I missing something, or do my schemas not jibe with Mongoose?
I did something similar:
var Topic = new Schema({
  author: ObjectId,
  title: String,
  body: String,
  topics: [Topic]
});
This worked fine in my tests. However, removing the array brackets resulted in an error. Looks like a bug to me.
https://github.com/LearnBoost/mongoose/blob/master/lib/mongoose/schema.js#L185
Dumping the types, I only get String, Number, Boolean, DocumentArray, Array, Date, ObjectId, and Mixed -- which appears to be on purpose; schema/index.js doesn't look like it dynamically registers new Schemas in the list of types, so I am guessing this isn't a supported use case yet.
https://github.com/LearnBoost/mongoose/issues/188
"Embedding single docs is out of the question. It's not a good idea (just use regular nested objects)"
Josh
It looks like this was a bug; it's been fixed in Mongoose 2.0!