XSD import into another XSD file - eclipse

I have a problem with imported XSDs.
I have three XSD files: service.xsd, header.xsd and inputmessage.xsd.
inputmessage.xsd contains the root element.
service.xsd imports header.xsd and inputmessage.xsd.
While generating a sample XML for service.xsd in Eclipse I get the following error: "No root element exists since the scheme provided has no global elements".

The error you are seeing is typically caused by schema documents that do not declare a top-level element (a 'root element'). The schemas you are working with may define only complex types (likely with enclosed elements). The significance of the element with respect to file creation is that an element defines the concrete use of a type in an XML file (i.e. the name of the element from the schema becomes the tag name in the XML file), whereas a complex type only defines the structure that applies to an element of that type.
In your service.xsd file, try inserting the following (you may need to work with the prefix binding to be consistent with your schema file):
<element name="rootElement" type="tns:LocallyDefinedType" />
where 'tns' is bound to the schema target namespace and 'LocallyDefinedType' is the name of a complex type defined in the schema document (the type you are hoping to see in the generated xml document).
If this does not help, post your schema documents (or some appropriate dummied-up examples) and a more targeted element declaration can be provided.
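If it helps, here is a minimal sketch of what service.xsd might look like with a global element added. The namespace URIs, prefixes and schema locations are all hypothetical placeholders for your own:

```xml
<!-- Hypothetical service.xsd: all URIs and names are placeholders -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:tns="http://example.com/service"
            targetNamespace="http://example.com/service">

  <xsd:import namespace="http://example.com/header" schemaLocation="header.xsd"/>
  <xsd:import namespace="http://example.com/input" schemaLocation="inputmessage.xsd"/>

  <!-- This global element is what gives Eclipse a root to generate sample XML from -->
  <xsd:element name="rootElement" type="tns:LocallyDefinedType"/>
</xsd:schema>
```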

Related

Drools : applying same rules on all attributes

I am new to Drools. We are trying to create basic validation rules, such as a NULL check, using the Drools and Scala framework.
I have a source file with 200 attributes and need to apply a NULL-check rule to all of them.
Is there an easy way to do this, or do I need to create 200 rules, one per attribute?
Thanks in advance.
Assuming you have a POJO ("plain old Java object": getters/setters and some private variables to hold values) or a modern Java record (effectively the same thing), then the answer is no: you need separate rules. In this scenario, the only way to check that the field "name" is null is to assert against that field directly, like this:
rule "example - name is null"
when
    ExampleObject( name == null )
then
    System.out.println("Name is null.");
end
However there exist other data structures -- for example, Map and its sibling types -- where you can reference the fields by name. In this case you could theoretically iterate through all of the field names and find the one whose value is empty.
So, for example, Map has a keySet() method which returns a set of fields -- you could iterate through this keyset and for each key check that there is a non-null value present in the map.
rule "example with map"
when
    $map: Map()
    $keys: Set() from $map.keySet()
    $key: String() from $keys
    String( this == null ) from $map.get($key)
    // or this might work, not sure if the "this" keyword allows this syntax:
    // Map( this[$key] == null ) from $map
then
    System.out.println($key + " is missing/null");
end
This would require converting your Java object into a Map before passing into the rules.
However, I DO NOT RECOMMEND this approach. Maps are extremely inefficient in rules because of how they serialize/deserialize; you will use a ton of unnecessary heap when firing them. If you peek at HashMap's source code, for example, you'll see that a serialized HashMap actually contains a bunch of "child" data structures, such as its entry set and key set. When using "new", those child structures are only initialized if and when you need them, but when serializing/deserializing they are created immediately, even if you don't need them.
Another solution would be to use Java reflection to get the list of declared field names, and then iterate through those names using reflection to get the value out for that field. In your place I'd do this in Java (reflection is problematic enough without trying to do it in Drools) and then if necessary invoke such a utility function from Drools.
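As a sketch of that last suggestion, a reflection-based Java utility might look like the following. The class and field names here are made up for illustration; a real version would also walk superclass fields and deal with module/access restrictions:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Sketch only: finds declared fields whose value is null on a given object.
public class NullFieldFinder {

    // Returns the names of all declared fields whose value is null on obj.
    public static List<String> findNullFields(Object obj) {
        List<String> nullFields = new ArrayList<>();
        for (Field f : obj.getClass().getDeclaredFields()) {
            try {
                f.setAccessible(true); // allow reading private fields
                if (f.get(obj) == null) {
                    nullFields.add(f.getName());
                }
            } catch (IllegalAccessException e) {
                // inaccessible field: skip it in this sketch
            }
        }
        return nullFields;
    }

    // Example POJO with one null field
    public static class Person {
        private String name = "Alice";
        private String email = null;
    }

    public static void main(String[] args) {
        // prints [email], since only the email field is null
        System.out.println(findNullFields(new Person()));
    }
}
```

The result could then be asserted into working memory and matched by a single generic rule, instead of writing 200 nearly identical rules.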

What does an instance-identifier look like in a YANG model?

As far as I understand, the instance-identifier type holds an XPath statement which points to some node in a tree. And what's next? How does an instance-identifier identify this node? How do I apply an instance-identifier to the node it points to? Or do I get it totally wrong...
I also don't have any example of this except those found via Google, like:
leaf instance-identifier-leaf {
    type instance-identifier;
}
An instance-identifier can be a reference to any data node in the system.
Think of it as a pointer; it doesn't contain the data itself, just a reference to it (e.g. an address)
It is useful, for example, to represent a reference to an object that is modeled as a YANG container or list instance.
Given that YANG data can be expressed as an XML document, 'pointing' to a specific data node is therefore similar to 'pointing' to a specific XML element. The way you do that in XML is with XPath, which lets you address a specific element by its path (and key values).
Here's an example:
container a {
    list b {
        key x;
        leaf x { type int8; }
        list c {
            key y;
            leaf y { type int8; }
        }
    }
}
leaf ref {
    type instance-identifier;
}
So imagine that a real system has its datastore containing this data (for simplification, I'm using XML format, and ignoring namespaces; a real system doesn't need to keep its datastore in XML format):
<a>
    <b>
        <x>1</x>
    </b>
    <b>
        <x>5</x>
        <c>
            <y>1</y>
        </c>
        <c>
            <y>2</y>
        </c>
    </b>
    <b>
        <x>10</x>
        <c>
            <y>5</y>
        </c>
    </b>
</a>
So basically we have a bunch of list entries, some of them nested inside others.
If you represented all these nodes as XPaths, you'd get the list:
/a
/a/b[x=1]
/a/b[x=5]
/a/b[x=5]/c[y=1]
/a/b[x=5]/c[y=2]
/a/b[x=10]
/a/b[x=10]/c[y=5]
If you had an instance-identifier outside this hierarchy called ref, it could take any of these XPaths as possible values, as a string. It would contain a reference to one of those nodes, not the node itself.
One final note: it is not mandatory that the node referenced by the instance-identifier actually exists in the datastore; you can have an instance-identifier pointing to a non-existent node. There is a YANG statement (require-instance) that can be added as a substatement of the type statement, which controls whether only existing instances can be referenced or whether non-existent ones are also accepted.
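A sketch of how that looks in a YANG module (the leaf name is just an example):

```yang
leaf ref {
    type instance-identifier {
        require-instance false;   // the referenced node need not exist
    }
}
```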
Regarding the format of the value, note that the way the instance-identifier is represented depends on the protocol you are using. An instance identifier in NETCONF is different from one in RESTCONF (although they are very similar).
You could imagine a CLI to have a custom way to represent YANG objects, for example:
A data node is defined with the xpath /a/b[x=5]/c[y=1]
In CLI, that node address is viewed as c-5-1 (just an example)
If you had an instance-identifier pointing to that object, its value would be the string 'c-5-1' in CLI, but in NETCONF it would still be the xpath.
In short, the format depends on the protocol you are using.

Xtext scoping and code generation dependent on a different file

I have two grammars A and B and two files a and b (using grammars A and B respectively). The file a specifies variable names; b specifies the filename of a.
In b, using the file a, I want to:
reference variables defined in a
during code generation of b, include the contents of the file generated for a.
How can this be done in xtext?
Update 1
Example grammar B
Model:
    ref_model=RefModel
    ref_vars+=[Vars]+
;
RefModel:
    'reference' 'file' name=ID
;
Where RefModel defines where the file a can be located, and Vars are defined in a.
In the past we used importURI for that, but you can also do it through scoping on your own.
If you for instance want to use the simple name of the file, you should make the name in B a reference to the root element of A.
Model:
    ref_model=RefModel
    ref_vars+=[Vars]+
;
RefModel:
    'reference' 'file' name=[ModelA]
;
Then you need to index the root element of A models using the simple file name of the resource URI.

How do purely functional compilers annotate the AST with type info?

In the syntax analysis phase, an imperative compiler can build an AST out of nodes that already contain a type field that is set to null during construction, and then later, in the semantic analysis phase, fill in the types by assigning the declared/inferred types into the type fields.
How do purely functional languages handle this, where you do not have the luxury of assignment? Is the type-less AST mapped to a different kind of type-enriched AST? Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?
Are there purely functional programming tricks that help the compiler writer with this problem?
I usually rewrite a source AST (or one already lowered through several steps) into a new form, replacing each expression node with a pair (tag, expression).
Tags are unique numbers or symbols which are then used by the next pass which derives type equations from the AST. E.g., a + b will yield something like { numeric(Tag_a). numeric(Tag_b). equals(Tag_a, Tag_b). equals(Tag_e, Tag_a).}.
Then the type equations are solved (e.g., by simply running them as a Prolog program) and, if successful, all the tags (which are variables in this program) are bound to concrete types; if not, they're left as type parameters.
In a next step, our previous AST is rewritten again, this time replacing tags with all the inferred type information.
The whole process is a sequence of pure rewrites, no need to replace anything in your AST destructively. A typical compilation pipeline may take a couple of dozens of rewrites, some of them changing the AST datatype.
There are several options to model this. You may use the same kind of nullable data fields as in your imperative case:
data Exp = Var Name (Maybe Type) | ...
parse :: String -> Maybe Exp -- types are Nothings here
typeCheck :: Exp -> Maybe Exp -- turns Nothings into Justs
or even, using a more precise type
data Exp ty = Var Name ty | ...
parse :: String -> Maybe (Exp ())
typeCheck :: Exp () -> Maybe (Exp Type)
I can't speak for how it is supposed to be done, but I did do this in F# for a C# compiler here.
The approach was basically: build an AST from the source, leaving things like type information unconstrained. So AST.fs is basically the AST, which uses strings for the type names, function names, etc.
As the AST starts to be compiled to (in this case) .NET IL, we end up with more type information (we create the types in the source; let's call these type-stubs). This then gives us the information needed to create method-stubs (the code may have signatures that include type-stubs as well as built-in types). From here we now have enough type information to resolve any of the type names or method signatures in the code.
I store that in the file TypedAST.fs. I do this in a single pass, however the approach may be naive.
Now that we have a fully typed AST, you could compile it, fully analyze it, or do whatever you like with it.
So, in answer to the question "Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?": I can't say definitively that this is the case, but it is certainly what I did, and it appears to be what MS have done with Roslyn (although they have essentially decorated the original tree with type info, IIRC).
"Are there purely functional programming tricks that help the compiler writer with this problem?"
Given that the ASTs are essentially mirrored in my case, it would be possible to make the node type generic and transform the tree, but the code might end up (more) horrendous, i.e.:
type AST<'ty> =
    | MethodInvoke of 'ty * Name * 'ty list
    | ...
Like in the case when dealing with relational databases, in functional programming it is often a good idea not to put everything in a single data structure.
In particular, there may not be a data structure that is "the AST".
Most probably, there will be data structures that represent parsed expressions. One possible way to deal with type information is to assign a unique identifier (like an integer) to each node of the tree already during parsing and have some suitable data structure (like a hash map) that associates those node-ids with types. The job of the type inference pass, then, would be just to create this map.
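A minimal Java sketch of that id-plus-side-table idea (all names are hypothetical, and the "inference" is hard-coded just to show the shape of the approach):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical immutable AST for "a + b": each node carries only a unique id
// assigned at parse time; types live in a separate table keyed by those ids.
public class TypeTableDemo {

    public record Var(int id, String name) {}
    public record Add(int id, Var left, Var right) {}

    // The "type inference pass": it never mutates the tree, it just builds
    // a fresh side table from node ids to types.
    public static Map<Integer, String> inferTypes(Add sum) {
        Map<Integer, String> types = new HashMap<>();
        types.put(sum.left().id(), "Int");
        types.put(sum.right().id(), "Int");
        types.put(sum.id(), "Int");
        return types;
    }

    public static void main(String[] args) {
        // Parsing assigns ids 0, 1, 2 ...
        Add sum = new Add(2, new Var(0, "a"), new Var(1, "b"));
        // prints Int
        System.out.println(inferTypes(sum).get(sum.id()));
    }
}
```

The tree stays untouched across passes; each pass only produces a new map, which fits the purely functional style.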

How to workaround the XmlSerialization issue with type name aliases (in .NET)?

I have a collection of objects and for each object I have its type FullName and either its value (as string) or a subtree of inner objects details (again type FullName and value or a subtree).
For each root object I need to create a piece of XML that will be xml de-serializable.
The problem with XmlSerializer is that, for example, the following object
int age = 33;
will be serialized to
<int>33</int>
At first this seems to be perfectly OK; however, when working with reflection you will be using System.Int32 as the type name, and since int is only an alias, this
<System.Int32>33</System.Int32>
will not deserialize.
Now additional complexity comes from the fact that I need to handle any possible data type.
So solutions that utilize System.Activator.CreateInstance(..) and casting won't work, unless I go down the path of code generation & compilation to achieve this (which I would rather avoid).
Notes:
A quick investigation with .NET Reflector revealed that XmlSerializer uses an internal class TypeScope; look at its static constructor to see that it initializes an inner Hashtable with the mappings.
So the question is what is the best (and I mean elegant and solid) way to workaround this sad fact?
I don't exactly see where your problem originally stems from. XmlSerializer will use the same syntax/mapping for serializing as for deserializing, so there is no conflict when using it for both.
Probably the type tags used are some XML-standard thing, but I'm not sure about that.
I guess the problem is more your usage of reflection. Do you instantiate your imported/deserialized objects by calling Activator.CreateInstance?
I would recommend the following instead, if you have some type Foo to be created from the xml in xmlReader:
Foo deserializedObject = (Foo)new XmlSerializer(typeof(Foo)).Deserialize(xmlReader);
Alternatively, if you don't want to switch to the XmlSerializer completely, you could do some preprocessing of your input. The standard way would be to create an XSLT in which you transform all those type elements to their aliases, or vice versa. Then, before processing the XML, you apply your transformation using System.Xml.Xsl.XslCompiledTransform and use your (or the Reflector-derived) mappings for each type.
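A hypothetical sketch of such a transform, rewriting the System.Int32 element back to the int alias that XmlSerializer expects (you would add one template per type mapping):

```xml
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- rewrite <System.Int32> to <int>, keeping its content -->
  <xsl:template match="System.Int32">
    <int><xsl:apply-templates select="@*|node()"/></int>
  </xsl:template>
  <!-- identity template: copy everything else through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```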
Why don't you serialize each field's type as an attribute?
Instead of
<age>
    <int>33</int>
</age>
you could do
<age type="System.Int32">
    33
</age>