OrientDB auto creation of vertex/edge schema classes

As far as I can tell there is no way to specify that an existing schema class should be used when a matching label is specified, but to default to the general V/E classes otherwise. I have a few custom E subclasses that I would like to use, but I don't want other edge labels to cause the creation of additional subclasses. The API I'm using is TinkerPop-based and I cannot explicitly specify vertex/edge classes.
The OrientConfigurableGraph.setUseClassForEdgeLabel(boolean) setting is an all-or-nothing option. If it is set to true, schema classes are created for all labels; if it is set to false, new vertex/edge instances are set to the general V/E classes even when there is a matching class. Am I correct about this? I would like a configuration option that uses matching schema classes where they exist, but without automatically creating others when there is no match. I'm using version 2.1.8.

After browsing the OrientDB v2.1.x reference documentation and javadocs, I couldn't find a configuration option that configures the graph the way you want, so what you can do is open a ticket in the GitHub issue tracker and request that feature.
In the meantime, you can control this programmatically with the Graph API and the custom vertex/edge functions, as explained in the documentation. It's not ideal, since you have to handle it in code, but for now it's the closest I could find.
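As a rough sketch of that programmatic route (untested; the database URL and the WorksFor class are illustrative, and this assumes the 2.1.x Blueprints API where a "class:" prefix in the id parameter selects the edge class):

```java
import com.tinkerpop.blueprints.Edge;
import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.orient.OrientGraph;

public class EdgeClassSketch {
    public static void main(String[] args) {
        OrientGraph graph = new OrientGraph("memory:demo");
        try {
            // Stop OrientDB from creating an E subclass for every new label.
            graph.setUseClassForEdgeLabel(false);

            // A custom edge class we do want to use (normally pre-existing in the schema).
            graph.createEdgeType("WorksFor");

            Vertex a = graph.addVertex(null);
            Vertex b = graph.addVertex(null);

            // Pick an existing class explicitly via the "class:" prefix...
            Edge e1 = graph.addEdge("class:WorksFor", a, b, null);

            // ...while plain labels land on the generic E class.
            Edge e2 = graph.addEdge(null, a, b, "mentions");

            graph.commit();
        } finally {
            graph.shutdown();
        }
    }
}
```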

Related

Change default stereotype: Upon adding a Column to a DB-Table, I would like the col stereotype to be my custom stereotype

For our project, we would like to document certain information with the model in Enterprise Architect, so that there aren't multiple places where things get documented.
A consequence is that the default model of EAUML::table and EAUML::column is insufficient.
Therefore I started to extend the classes to add our custom properties as shown below.
Now (after exporting and importing an MDG profile) I am able to change our defined tables to our new stereotype.
The question here is: is there a way to change the default column stereotype to "CUSTOM Col" for the "CUSTOM Table" instances? I can't seem to find one. I imagine it takes a script in the right place or a "substitution" of the right kind, but I am an EA newbie and have no clue how to solve this.
Thanks for any pointers.
Best

GraphQL schema management

Is there a plugin or way to rearrange a single GraphQL schema file? Basically I want my GraphQL schema to be more organized. For example, all types should be at the top, followed by type Query, followed by Mutations, followed by Subscription, and so on. Something similar to the code arrangement we do with class files in IntelliJ or other IDEs (i.e., constants at the top, followed by private variables, methods, and so on).
Yes! There is a great package called format-graphql by Gajus Kuizinas that does exactly this: enforcing a consistent style on your schema. It is somewhat opinionated, so you can either 1) work with the options it currently provides when you configure the package, or 2) fork it and rewrite its core utility to achieve the desired effect.

Combining Spring Data query builder with Spring Data JPA Specifications?

Spring Data allows you to declare methods like findByLastname() in your repository interface and it generates the queries from the method name automatically for you.
Is it possible to somehow have these automatically-generated queries also accept a Specification, so that additional restrictions can be made on the data before it's returned?
That way, I could for example call findByLastname("Ted", isGovernmentWorker()), which would find all users that have the last name Ted AND who satisfy the isGovernmentWorker() specification.
I need this because I'd like the automated query creation provided by Spring Data and because I still need to be able to apply arbitrary specifications at runtime.
There is no such feature. Specifications can only be applied on JpaSpecificationExecutor operations.
Update
The data access operations are generated by a proxy. Thus if we want to group the operations (as in findByName + Criteria) in a single SELECT call, the proxy must understand and support this kind of usage, which it does not.
The intended usage, when employing Specification API would look like this for your case:
findAll(Specifications.where(hasLastName("Ted")).and(isGovernmentWorker()))
Spring Data allows you to implement a custom repository and use Specifications or QueryDSL.
Please see this article.
So in the end you will have one YourCustomerRepository and an appropriate YourRepositoryImpl implementation, where you will put your findByLastname("Ted", isGovernmentWorker()) method.
And then YourRepository should extend the YourCustomerRepository interface.
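A minimal sketch of how the two styles can be combined in one repository (assuming a newer Spring Data JPA where Specification has a default and(...) method and repository interfaces may carry default methods; the User entity and its lastname field are illustrative):

```java
import java.util.List;

import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;

interface UserRepository extends JpaRepository<User, Long>,
                                 JpaSpecificationExecutor<User> {

    // Derived query, generated from the method name as usual.
    List<User> findByLastname(String lastname);

    // Combined form: fold the last-name restriction into a Specification
    // of its own and hand everything to findAll(Specification).
    default List<User> findByLastname(String lastname, Specification<User> spec) {
        Specification<User> hasLastname =
                (root, query, cb) -> cb.equal(root.get("lastname"), lastname);
        return findAll(hasLastname.and(spec));
    }
}
```

The derived-query part is not reused by the proxy here; the restriction is simply re-expressed as a Specification so both constraints end up in one SELECT.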

Can an API in SOAP/WSDL be kept backwards compatible easily?

When using an IPC library, it is important that it provides the possibility that both client and server can communicate even when their version of the API differs. As I'm considering using SOAP for our client/server application, I wonder whether a SOAP/WSDL solution can deal with API changes well.
For example:
Adding parameters to existing functions
Adding variables to existing structs that are used in existing functions
Removing functions
Removing parameters from existing functions
Removing variables from existing structs that are used in existing functions
Changing the type of a parameter used in an existing function
Changing the order of parameters in an existing function
Changing the order of composite parts in an existing struct
Renaming existing functions
Renaming parameters
Note: by "struct" I mean a composite type
As far as I know there is no such support in the SOAP/WSDL standard itself, but tools exist to cope with such issues. For instance, in GlassFish you can specify an XSL stylesheet to transform the request/response of a web service. Other solutions, such as Oracle SOA Suite, offer much more elaborate tools to manage versioning of web services and integration of components; messages can be routed automatically to different versions of a web service and/or transformed. You will need to check what your target infrastructure offers.
EDIT:
XML and XSD are more flexible regarding evolution of the schema than types and serialization in object-oriented languages. Some changes can be made backward compatible simply by declaring the new elements optional, e.g.
Adding parameters to existing functions - if a parameter is optional, you get a null value if the client doesn't send it
Adding variables to existing structure that are used in existing functions - if the value is optional, you get null if the client doesn't provide it
Removing functions - no magic here
Removing parameters from existing functions - parameters sent by the client will be superfluous according to the new definition and will be omitted
Removing variables from existing structure that are used in existing functions - I don't know in this case
Changing the type of a parameter used in an existing function - that depends on the change. For a simple type the serialization/deserialization may still work, e.g. String to int.
Note that I'm not 100% sure of the list. But a few tests can show you what works and what doesn't. The point is that XML is sent over the wire, so it gives some flexibility.
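Concretely, the "optional" cases above come down to minOccurs="0" in the XSD behind the WSDL (the element and type names here are made up for illustration):

```xml
<xs:complexType name="CreateOrderRequest">
  <xs:sequence>
    <xs:element name="customerId" type="xs:string"/>
    <!-- Added in a later version; old clients simply omit it
         and the server sees a null/absent value. -->
    <xs:element name="priority" type="xs:int" minOccurs="0"/>
  </xs:sequence>
</xs:complexType>
```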
It doesn't. You'll have to manage that manually somehow. Typically by creating a new interface as you introduce major/breaking changes.
More generally speaking, this is an architectural problem, rather than a technical one. Once an interface is published, you really need to think about how to handle changes.

Entity Framework and Encapsulation

I would like to experimentally apply an aspect of encapsulation that I read about once, where an entity object includes domains for its attributes, e.g. for its CostCentre property, it contains the list of valid cost centres. This way, when I open an edit form for an Extension, I only need pass the form one Extension object, where I normally access a CostCentre object when initialising the form.
This also applies where I have a list of Extensions bound to a grid (Telerik RadGrid) and I handle an edit command on the grid. I want to create an edit form and pass it an Extension object, whereas currently I pass the edit form an ExtensionID and create the object inside the form.
What I'm actually asking for here are pointers to guidance on doing it this way, or on the 'proper' way of achieving something similar to what I have described here.
It would depend on your data source. If you are retrieving the list of Cost Centers from a database, that would be one approach. If it's a short list of predetermined values (like Yes/No/Maybe So) then property attributes might do the trick. If it needs to be more configurable per-environment, then IoC or the Provider pattern would be the best choice.
I think your problem is similar to a custom ad-hoc search page we did on a previous project. We decorated our entity classes and properties with attributes that contained some predetermined 'pointers' to the lookup value methods and their relationships. Then we created a single custom UI control (like the edit form described in your post) which used these attributes to generate the drop-down and auto-completion text-box lists by dynamically generating a LINQ expression, then executing it at run-time based on whatever the user was doing.
This was accomplished with basically three moving parts: A) the attributes on the data access objects B) the 'attribute facade' methods at the middle-tier compiling and generation dynamic LINQ expressions and C) the custom UI control that called our middle-tier service methods.
Sometimes plans like these backfire, but in our case it worked great. Decorating our objects with attributes and then creating a single path of logic gave us just enough power to do what we needed while minimizing the amount of code required, and it completely eliminated boilerplate. However, this approach was not very configurable: by compiling these attributes into the code, we tightly coupled our application to the data source. On this particular project that wasn't a big deal, because it was a client's internal system and it fit the project timeline. On a "real product", however, implementing the logic with the Provider pattern or using something like the Castle Project's IoC container would have given us the same power with a great deal more configurability. The downside of this is that there is more to manage, and more that can go wrong with deployments, etc.
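The "decorate with metadata, then drive the UI from a single path of logic" idea can be sketched language-neutrally; here is a tiny Java version (the @Lookup annotation, the Extension class, and the field names are all hypothetical; in .NET the same pattern uses attributes and reflection):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical annotation naming the lookup source for a property.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Lookup {
    String source();
}

// An entity decorated with lookup metadata, as described above.
class Extension {
    @Lookup(source = "costCentres")
    String costCentre;

    String owner; // no lookup: rendered as a plain text box
}

public class LookupScanner {
    // Scan a class and map each annotated field to its lookup source;
    // a generic edit form would use this map to build its drop-downs.
    static Map<String, String> lookupsFor(Class<?> type) {
        Map<String, String> lookups = new LinkedHashMap<>();
        for (Field f : type.getDeclaredFields()) {
            Lookup l = f.getAnnotation(Lookup.class);
            if (l != null) {
                lookups.put(f.getName(), l.source());
            }
        }
        return lookups;
    }

    public static void main(String[] args) {
        System.out.println(lookupsFor(Extension.class));
        // prints {costCentre=costCentres}
    }
}
```

The single scanner replaces per-form wiring; each new entity only needs its annotations.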