Cannot find the declaration of element - SOAP - DOM

Document document = null;
DocumentBuilder parser = DocumentBuilderFactory.newInstance().newDocumentBuilder();
document = parser.parse(xmlFilePath);
This can throw ParserConfigurationException, SAXException and IOException, but it returned the document without any exception.
The XML is then validated against a list of XSD schemas:
SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = schemaFactory.newSchema(new Source[]{new StreamSource(xsd1), new StreamSource(xsd2), new StreamSource(xsd3), new StreamSource(xsd4)});
Validator validator = schema.newValidator();
validator.setErrorHandler(new CustomHandler());
validator.validate(new DOMSource(document));
Validation throws: SAXParseException: cvc-elt.1: Cannot find the declaration of element "ElementinQuestion"
String node = document.getNodeName();
node = "#document"
When the following is used instead, the root element name is correct, but validation still throws the same exception above:
String node = document.getFirstChild().getNodeName();
node = "ElementinQuestion"
The XSD declares:
<s:element name="ElementinQuestion" type="elementinquestion:ElementinQuestion"/>
What else can I check?

Adding the following line before creating the document resolved the exception (found by scrolling to the bottom of https://stackoverflow.com/questions/39738095/cvc-elt-1-cannot-find-the-declaration-of-element-soapenvelope):
Document document = null;
DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
builderFactory.setNamespaceAware(true);
DocumentBuilder parser = builderFactory.newDocumentBuilder();
document = parser.parse(xmlFilePath);
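For reference, a minimal end-to-end sketch of the namespace-aware parse-and-validate flow described above (the file names, and the absence of a custom error handler, are placeholder choices, not from the question):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Source;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.w3c.dom.Document;

public class ValidateAgainstXsd {
    public static void main(String[] args) throws Exception {
        // Namespace awareness is off by default; without it the validator
        // cannot match elements to their namespaced declarations in the XSD.
        DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
        builderFactory.setNamespaceAware(true);
        Document document = builderFactory.newDocumentBuilder().parse(new File("input.xml"));

        // Compile all XSDs into one Schema, then validate the DOM against it.
        SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Validator validator = schemaFactory
                .newSchema(new Source[]{new StreamSource(new File("schema1.xsd")),
                                        new StreamSource(new File("schema2.xsd"))})
                .newValidator();
        validator.validate(new DOMSource(document)); // throws SAXException on the first error
    }
}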

Related

c# - The given ColumnName 'xxxxxxx' does not match up with any column in data source

My code has an issue but I can't see what it is. The column names match word for word, and it works if I use one column in the CSV file, but as soon as I try two or three columns it gives the error below. I've read lots of articles but I still can't fix it. What could be wrong with these lines? The database table was already created with matching fields.
private void DBaktar()
{
    string SQLServerConnectionString = "Server =.\\SQLEXPRESS; Database = Qiti; User Id = sa; Password = 7731231xx!!;";
    string CSVpath = @"D:\FTP\"; // CSV file path
    string CSVFileConnectionString = String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Extended Properties=\"text;HDR=Yes;FMT=Delimited\";", CSVpath);
    var AllFiles = new DirectoryInfo(CSVpath).GetFiles("*.CSV");
    string File_Name = string.Empty;

    foreach (var file in AllFiles)
    {
        try
        {
            DataTable dt = new DataTable();
            using (OleDbConnection con = new OleDbConnection(CSVFileConnectionString))
            {
                con.Open();
                var csvQuery = string.Format("select * from [{0}]", file.Name);
                using (OleDbDataAdapter da = new OleDbDataAdapter(csvQuery, con))
                {
                    da.Fill(dt);
                }
            }

            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(SQLServerConnectionString))
            {
                bulkCopy.ColumnMappings.Add("LKod", "LKod");
                bulkCopy.ColumnMappings.Add("info", "info");
                bulkCopy.ColumnMappings.Add("Codex", "Codex");
                bulkCopy.ColumnMappings.Add("LthNo", "LthNo");
                bulkCopy.ColumnMappings.Add("Datein", "Datein");
                bulkCopy.DestinationTableName = "U_Tik";
                bulkCopy.BatchSize = 0;
                bulkCopy.EnableStreaming = true;
                bulkCopy.WriteToServer(dt);
                bulkCopy.Close();
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.ToString());
        }
    }
}
Exception:
The given ColumnName 'LKod' does not match up with any column in data source.
ex.StackTrace:
at System.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServerCommon(Int32 columnCount)
at System.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServerAsync(Int32 columnCount, CancellationToken ctoken)
at System.Data.SqlClient.SqlBulkCopy.WriteToServer(DataTable table, DataRowState rowState)
at System.Data.SqlClient.SqlBulkCopy.WriteToServer(DataTable table)
Some information can be found here: https://sqlbulkcopy-tutorial.net/columnmapping-does-not-match
Cause
You didn't provide any ColumnMappings, and there are more columns in the source than in the destination.
You provided an invalid column name for the source.
You provided an invalid column name for the destination.
Solution
ENSURE you provide ColumnMappings.
ENSURE all source column names are valid, including their casing.
ENSURE all destination column names are valid, including their casing.
MAKE the source case insensitive.
I have found a solution that works 100%. I hope the link below helps anyone who needs it.
https://johnnycode.com/2013/08/19/using-c-sharp-sqlbulkcopy-to-import-csv-data-sql-server/

Kafka streams: adding dynamic fields at runtime to avro record

I want to implement a configurable Kafka stream which reads a row of data and applies a list of transforms, such as applying functions to the fields of the record, renaming fields, etc. The stream should be completely configurable, so I can specify which transforms should be applied to which field. I'm using Avro to encode the data as GenericRecords. My problem is that I also need transforms which create new columns: instead of overwriting the previous value of a field, they should append a new field to the record. This means the schema of the record changes. The solution I came up with so far is to first iterate over the list of transforms to figure out which fields I need to add to the schema, and then create a new schema with the old and new fields combined.
The list of transforms (there is always a source field which gets passed to the transform method, and the result is then written back to the targetField):
val transforms: List[Transform] = List(
  FieldTransform(field = "referrer", targetField = "referrer", method = "mask"),
  FieldTransform(field = "name", targetField = "name_clean", method = "replaceUmlauts")
)

case class FieldTransform(field: String, targetField: String, method: String)
Method to create the new schema, based on the old schema and the list of transforms:
def getExtendedSchema(schema: Schema, transforms: List[Transform]): Schema = {
  var newSchema = SchemaBuilder
    .builder(schema.getNamespace)
    .record(schema.getName)
    .fields()

  // create new schema with existing fields from the schema and new fields which are created through transforms
  val fields = schema.getFields ++ getNewFields(schema, transforms)
  fields
    .foldLeft(newSchema)((newSchema, field: Schema.Field) => {
      newSchema
        .name(field.name)
        .`type`(field.schema())
        .noDefault()
      // TODO: find a way to differentiate between explicitly set null defaults and fields which have no default
      //.withDefault(field.defaultValue())
    })
  newSchema.endRecord()
}
def getNewFields(schema: Schema, transforms: List[Transform]): List[Schema.Field] = {
  transforms
    .filter { // only select targetFields which are not in the schema
      case FieldTransform(field, targetField, method) => schema.getField(targetField) == null
      case _ => false
    }
    .distinct
    .map { // create a new Field object for each targetField
      case FieldTransform(field, targetField, method) =>
        val sourceField = schema.getField(field)
        new Schema.Field(targetField, sourceField.schema(), sourceField.doc(), sourceField.defaultValue())
    }
}
Instantiating a new GenericRecord based on an old record:
val extendedSchema = getExtendedSchema(row.getSchema, transforms)
val extendedRow = new GenericData.Record(extendedSchema)
for (field <- row.getSchema.getFields) {
  extendedRow.put(field.name, row.get(field.name))
}
I tried to look for other solutions but couldn't find any example which had changing data types. It feels to me like there must be a simpler cleaner solution to handle changing Avro schemas at runtime. Any ideas are appreciated.
Thanks,
Paul
I have implemented passing dynamic values into an Avro schema and validating the resulting record against that schema.
Example:
RestTemplate template = new RestTemplate();
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
HttpEntity<String> entity = new HttpEntity<String>(headers);
ResponseEntity<String> response = template.exchange(registryUrl + "/subjects/" + topic + "/versions/" + version, HttpMethod.GET, entity, String.class);
String responseData = response.getBody();

JSONObject jsonObject = new JSONObject(responseData);
JSONObject jsonObjectResult = new JSONObject(jsonResult); // jsonResult is the JSON string with your values (e.g. passed in from Postman)
String getData = jsonObject.get("schema").toString();

Schema.Parser parser = new Schema.Parser();
Schema schema = parser.parse(getData);
GenericRecord genericRecord = new GenericData.Record(schema);
schema.getFields().stream().forEach(field -> {
    genericRecord.put(field.name(), jsonObjectResult.get(field.name()));
});

GenericDatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema);
boolean valid = reader.getData().validate(schema, genericRecord);
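For the schema-extension part of the original question, a minimal Java sketch (the helper names and the single hard-coded string field are illustrative, not from the question) could look like this; it rebuilds the schema with SchemaBuilder and copies the existing values across, much like the Scala code above:

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class ExtendRecord {

    // Build a new schema containing every field of the old schema plus one extra string field.
    static Schema extendSchema(Schema oldSchema, String newFieldName) {
        SchemaBuilder.FieldAssembler<Schema> assembler = SchemaBuilder
                .record(oldSchema.getName())
                .namespace(oldSchema.getNamespace())
                .fields();
        for (Schema.Field f : oldSchema.getFields()) {
            assembler = assembler.name(f.name()).type(f.schema()).noDefault();
        }
        return assembler.name(newFieldName).type().stringType().noDefault().endRecord();
    }

    // Copy all existing values into a record with the extended schema and set the new field.
    static GenericRecord extendRecord(GenericRecord oldRecord, Schema newSchema,
                                      String newFieldName, Object newValue) {
        GenericRecord newRecord = new GenericData.Record(newSchema);
        for (Schema.Field f : oldRecord.getSchema().getFields()) {
            newRecord.put(f.name(), oldRecord.get(f.name()));
        }
        newRecord.put(newFieldName, newValue);
        return newRecord;
    }
}

The GenericData.validate call shown above can then be used to check that the extended record still conforms to the new schema.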

OrientDB - exception when I try to save an embeddedlist

I have a class named CalculationFunctionGroup where I have an attribute like this:
List<CalculationFunction> functions;
On my OrientDB I have a table named CalculationFunctionGroup with a property functions defined as EMBEDDEDLIST and linked class CalculationFunction.
When I try to save an object of type CalculationFunctionGroup, an exception is raised. The exception tells me:
The field 'functions' has been declared as EMBEDDEDLIST with linked class 'CalculationFunction' but the record has no class.
I tried to find this exception in the OrientDB source code, and found a check in the ODocument class, in the method validateEmbedded, with these lines:
if (doc.getImmutableSchemaClass() == null)
throw new OValidationException("The field '" + p.getFullName() + "' has been declared as " + p.getType()
+ " with linked class '" + embeddedClass + "' but the record has no class");
So I don't understand how I can set the immutable schema class property. When I set my field value from Java, I use this line:
this.data.field(fieldName, value, OType.EMBEDDEDLIST);
where data is my ODocument instance, fieldName is functions, value is my List of CalculationFunction and OType is EMBEDDEDLIST.
The OrientDB version used is 2.2.0.
EDIT #1
I tried this after Alessandro Rota's answer, but the error is the same:
ODocument myEntity = new ODocument("CalculationFunctionGroup");
myEntity.field("referenceItem", object.getReferenceItem().getData());
db.save(myEntity);
db.commit();
In this code snippet I've changed the nature of my object (the original was a typed CalculationFunctionGroup object; now it is an ODocument), but the error is the same.
In another attempt, the ODocument myEntity had no functions attached (the list of CalculationFunction), but the error was raised anyway.
EDIT #2
I've tried the code snippet from Alessandro Rota and it works fine.
But when I add a link field to CalculationFunction, I get the same error! Why?
If I set the link field with object.getRawField("#rid") instead of object.getData(), it works fine too.
I don't understand why that error message is raised, or the reason for the different behaviour when I use only the #rid instead of the complete object.
EDIT #3
Latest news:
This is my test scenario:
I have this table:
CalculationFunction with these properties (schemafull):
referenceItem LINK
functions EMBEDDEDLIST
When I try to save, I write this code:
ODocument myGroup = new ODocument("CalculationFunctionGroup");

Object rid = null;
if (object.getField("referenceItem") instanceof RegistryMetadata) {
    rid = ((RegistryMetadata) (object.getField("referenceItem"))).getRawField("#rid");
} else if (object.getField("referenceItem") instanceof PayrollRegistryMetadata) {
    rid = ((PayrollRegistryMetadata) (object.getField("referenceItem"))).getRawField("#rid");
} else if (object.getField("referenceItem") instanceof PreCalculationMetadata) {
    rid = ((PreCalculationMetadata) (object.getField("referenceItem"))).getRawField("#rid");
} else if (object.getField("referenceItem") instanceof CalculationMetadata) {
    rid = ((CalculationMetadata) (object.getField("referenceItem"))).getRawField("#rid");
}
myGroup.field("referenceItem", rid, OType.LINK);
myGroup.field("scenario", ((Scenario) object.getField("scenario")).getRawField("#rid"));

List<ODocument> lstFunctions = new ArrayList<ODocument>();
if (object.getField("functions") != null) {
    Iterable<ODocument> lstFun = (Iterable<ODocument>) object.getField("functions");
    Iterator<ODocument> itFun = lstFun.iterator();
    while (itFun.hasNext()) {
        ODocument currFun = itFun.next();
        ODocument oFun = new ODocument("CalculationFunction");
        oFun.field("name", currFun.field("name"));
        oFun.field("code", currFun.field("code"));
        oFun.field("language", currFun.field("language"));
        lstFunctions.add(oFun);
    }
}
myGroup.field("functions", lstFunctions, OType.EMBEDDEDLIST);

myGroup.save();
db.commit();
This code raises the error, but if I write this instead:
myGroup.field("referenceItem2", rid, OType.LINK);
the code works fine.
The difference: referenceItem is a schemafull property, referenceItem2 is a schemaless field.
You could use this code
ODocument cf1= new ODocument("CalculationFunction");
cf1.field("name","Function 1",OType.STRING);
ODocument cf2= new ODocument("CalculationFunction");
cf2.field("name","Function 2",OType.STRING);
List<ODocument> list=new ArrayList<ODocument>();
list.add(cf1);
list.add(cf2);
ODocument p = new ODocument("CalculationFunctionGroup");
p.field("functions", list, OType.EMBEDDEDLIST);
p.save();
Edit 2
If you want to add a link you could use this code
ODocument cf1= new ODocument("CalculationFunction");
cf1.field("name","Function 1",OType.STRING);
ODocument p = new ODocument("CalculationFunctionGroup");
p.field("mylink", cf1, OType.LINK);
p.save();
EDIT 3
I have created the schemafull property mylink2 and used this code:
ODocument cf1= new ODocument("CalculationFunction");
cf1.field("name","Function 1",OType.STRING);
cf1.save();
ODocument p = new ODocument("CalculationFunctionGroup");
p.field("mylink", cf1.getIdentity(), OType.LINK);
p.field("mylink2", cf1.getIdentity(), OType.LINK);
p.save();
and I got the result shown in the original answer's screenshot (not reproduced here).
Hope it helps.
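Pulling the thread together, a minimal sketch of the schemafull setup under discussion (class and property names are taken from the question; the rest is illustrative and assumes the classes do not already exist): embedded entries must carry the linked class, while a LINK property expects a saved record or its RID.

// db is assumed to be an open ODatabaseDocumentTx (OrientDB 2.2 Document API).
OSchema schemaMeta = db.getMetadata().getSchema();
OClass function = schemaMeta.createClass("CalculationFunction");
function.createProperty("name", OType.STRING);
OClass group = schemaMeta.createClass("CalculationFunctionGroup");
group.createProperty("functions", OType.EMBEDDEDLIST, function);
group.createProperty("referenceItem", OType.LINK);

// Embedded entries are created with the linked class, otherwise validation
// fails with "... but the record has no class".
ODocument fun = new ODocument("CalculationFunction");
fun.field("name", "Function 1", OType.STRING);

// The LINK target is saved first so that a RID exists to point at.
ODocument referenced = new ODocument("CalculationFunction");
referenced.field("name", "Referenced function", OType.STRING);
referenced.save();

ODocument groupDoc = new ODocument("CalculationFunctionGroup");
groupDoc.field("functions", Arrays.asList(fun), OType.EMBEDDEDLIST);
groupDoc.field("referenceItem", referenced.getIdentity(), OType.LINK);
groupDoc.save();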

Cascade save on OrientDB Document API when using embedded types

Given the following test:
// setup
OClass driver = getDatabase().getMetadata().getSchema().createClass(DRIVER);
OClass car = getDatabase().getMetadata().getSchema().createClass(CAR);
car.createProperty(DRIVERS, OType.EMBEDDEDLIST, driver);
OClass team = getDatabase().getMetadata().getSchema().createClass(TEAM);
team.createProperty(CARS, OType.EMBEDDEDSET, car);
// exercise
ODocument alonso = new ODocument(DRIVER).field("name", "Fernando Alonso").field("nationality", "Spanish")
.field("yearOfBirth", 1981);
ODocument button = new ODocument(DRIVER).field("name", "Jenson Button").field("nationality", "british")
.field("yearOfBirth", 1980);
ODocument mp30 = new ODocument(CAR).field(DRIVERS, Arrays.asList(new ODocument[] { alonso, button }));
Set<ODocument> cars = new HashSet<>();
cars.add(mp30);
ODocument mclarenF1Team = new ODocument(TEAM).field(CARS, cars);
mclarenF1Team.save();
// verify
assertEquals(1, getDatabase().countClass(TEAM));
assertEquals(1, getDatabase().countClass(CAR));
assertEquals(2, getDatabase().countClass(DRIVER));
The second assertion fails:
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at foo.orientdb.dataaccessapi.StoreJSonIT.testSchemaFull(StoreJSonIT.java:68)
Why does it fail?
The DRIVERS and CARS properties are created as an embedded list and an embedded set; shouldn't a single save of mclarenF1Team cascade the save to the embedded documents?
Embedded List/Set means that the documents you create are embedded (saved) inside the parent document, not in their own class/cluster.
If you want to achieve that behaviour you should use links.
See here:
http://orientdb.com/docs/2.1/Concepts.html#relationships
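To illustrate the link-based alternative, here is a sketch reusing the constants and getDatabase() helper from the test above (LINKLIST/LINKSET are one possible choice; treat this as illustrative rather than tested):

// Same test as above, but with link relationships instead of embedded ones.
OClass driver = getDatabase().getMetadata().getSchema().createClass(DRIVER);
OClass car = getDatabase().getMetadata().getSchema().createClass(CAR);
car.createProperty(DRIVERS, OType.LINKLIST, driver);
OClass team = getDatabase().getMetadata().getSchema().createClass(TEAM);
team.createProperty(CARS, OType.LINKSET, car);

// Linked documents are saved as records of their own class/cluster.
ODocument alonso = new ODocument(DRIVER).field("name", "Fernando Alonso").save();
ODocument button = new ODocument(DRIVER).field("name", "Jenson Button").save();
ODocument mp30 = new ODocument(CAR).field(DRIVERS, Arrays.asList(alonso, button)).save();

Set<ODocument> cars = new HashSet<>();
cars.add(mp30);
ODocument mclarenF1Team = new ODocument(TEAM).field(CARS, cars);
mclarenF1Team.save();

// countClass(CAR) is now 1 and countClass(DRIVER) is 2, because each linked
// document lives in its own cluster instead of being serialized into the parent.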

Groovy SQL to byte[]

I have to get a document that's stored in a postgres database and put it into a byte array.
In Java this works just fine
PreparedStatement ps = conn1.prepareStatement("SELECT document FROM documents WHERE documentname = ?");
ps.setString(1, "dingbatdocument.docx");
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    byte[] documentBytes = rs.getBytes(1);
}
but I have to use Groovy for this code and know nothing about how to do it. So far I've tried this:
def newSpelling = "dingbatdocument.docx";
def val = sql.execute("SELECT document FROM documents WHERE documentname = ?", [newSpelling]) as byte[];
and get this error:
Caused by: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'true' with class 'java.lang.Boolean' to class 'byte'
at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToNumber(DefaultTypeTransformation.java:146)
which to me says it is trying to assert that it has worked rather than giving me an actual byte array,
and this
def newSpelling = "dingbatdocument.docx";
byte[] val = sql.execute("SELECT document FROM documents WHERE documentname = ?", [newSpelling]);
and get this error:
Groovy script throws an exception of type class org.postgresql.util.PSQLException with message = This ResultSet is closed.
and finally this:
def reqdColName = "document";
def reqdDocument = "dingbatdocument.docx";
def query1 = "SELECT $reqdColName FROM documents WHERE documentname = '$reqdDocument'";
def documentBytes = conn1.executeQuery(query1).getArray(reqdColName);
which also gives me
Groovy script throws an exception of type class org.postgresql.util.PSQLException with message = This ResultSet is closed.
So my question is: how do I get the same result in Groovy as I do in Java, a byte[] variable from my SQL ResultSet?
Thanks in advance.
In the end it was quite easy, but not knowing Groovy it looks different. Here is how I did it in the end:
def reqdColName = "document";
def reqdDocument = "documentName.docx";
def query1 = "SELECT * FROM documents WHERE documentname = '$reqdDocument'";
byte[] myData;
conn1.eachRow(query1) { row ->
    if (debug) {
        logger.severe(dI + thisTrace + "myData \n"
            + "\t" + row.documentid + "\n"
            + "\t" + row.documentname
            + "\t" + row.versionmajor + "\n"
            + "\t" + row.versionminor + "\n"
            + "\t" + row.versionmini + "\n"
            + "\t" + row.uploader + "\n"
            + "\t" + row.approver + "\n"
            + "\t" + row.uploaddate + "\n"
            + "\t" + row.approvaldate + "\n"
            // + "\t" + row.document + "\n"
        );
    }
    myData = row.document
}
myData is the byte[] representation of the document I needed.