I'm building a Node.js + PostgreSQL application based on a plugin system where each component is a separate npm package. The behaviour of the app can be customised by adding and removing components. Each component may need to define its own data types (i.e. DB tables) or potentially extend the core types.
Are there design patterns for doing this kind of thing?
For example, is it reasonable to let plugins run any kind of DB manipulation (like WordPress does)? Could some of PostgreSQL's more advanced features help avoid this, e.g. table inheritance? Or maybe use a separate schema for each plugin – any tips for making that work?
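For concreteness, this is the kind of thing I mean by a schema per plugin. A rough, untested sketch with made-up names (shown with JDBC for consistency with the Java examples later in this thread; the app itself is Node.js, but the SQL is what matters):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PluginInstaller {
        // Each plugin gets its own PostgreSQL schema, so its tables cannot
        // collide with core tables or with other plugins' tables.
        static void install(Connection conn, String pluginName) throws Exception {
            // Identifiers cannot be bound as query parameters, so the plugin
            // name must be validated against a strict pattern first.
            if (!pluginName.matches("[a-z_]+")) {
                throw new IllegalArgumentException("unsafe plugin name");
            }
            String schema = "plugin_" + pluginName;
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE SCHEMA IF NOT EXISTS " + schema);
                // A plugin table can still reference a core table by foreign
                // key, which is one way to "extend" a core type.
                // ("core.users" is a hypothetical core table.)
                st.execute("CREATE TABLE IF NOT EXISTS " + schema + ".settings ("
                        + "id serial PRIMARY KEY, "
                        + "core_user_id integer REFERENCES core.users(id), "
                        + "value jsonb)");
            }
        }

        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/app", "app", "secret")) {
                install(conn, "analytics");
            }
        }
    }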
Suppose you need to interact with/integrate 5+ different web service sources instead of tables from the database.
First of all, each WS supports read methods like search, list and single view (in short, no CUD from CRUD). Write access should be added later.
What would be your approach to integrating those into Extbase?
I'd like to stick to the conventions of models and repositories. In this case, I think we need to use models and custom repositories. Should the repositories interact directly with the web service, or should we think bigger, with a PersistenceManagerInterface implementation and extending the Generic/Query to e.g. paginate the results following Extbase conventions? And what about TypeConverters and PropertyMappers for linking controller actions with the models as parameters?
What could a mapping from JSON to the models look like? The DataMapper requires a TCA configuration, but since no database is used, none is available, so the DataMapper cannot be used.
Where should you start?
I am developing a new project with Spring Boot and GraphQL. I am confused about how to proceed, because there are two ways to develop it: via a graphqls schema file, or via an annotation-based approach. I prefer the annotation-based approach, but is it stable? Example: https://github.com/leangen/graphql-spqr.
I second AllirionX's answer and just want to add a few details.
Firstly, to answer your question: yes, SPQR has been pretty stable for quite a while now. Many teams are successfully using it in production. The only reason it is still in 0.X versions is the lack of documentation, but an occasional small breaking change in the API does occur.
Secondly, I'd also like to add that going code-first doesn't mean you can't also go contract-first. In fact, I'd argue you should still develop in that style. The only difference is that you get to write your contracts as Java interfaces instead of a new language.
As I highlight in SPQR's README:
Note that developing in the code-first style is still effectively schema-first, the difference is that you develop your schema not in yet another language, but in Java, with your IDE, the compiler and all your tools helping you. Breaking changes to the schema mean the compilation will fail. No need for linters or other fragile hacks.
So whether the API (as described by the interfaces) changes as the other code changes is entirely up to you. And if you need the SDL for any reason, it can always be generated from the executable schema or the introspection result.
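As a rough, untested sketch of what that looks like in practice (all names made up):

    import graphql.schema.GraphQLSchema;
    import graphql.schema.idl.SchemaPrinter;
    import io.leangen.graphql.GraphQLSchemaGenerator;
    import io.leangen.graphql.annotations.GraphQLArgument;
    import io.leangen.graphql.annotations.GraphQLQuery;

    // The "contract": a plain Java interface annotated for SPQR.
    interface BookService {
        @GraphQLQuery(name = "book")
        Book book(@GraphQLArgument(name = "id") String id);
    }

    class Book {
        private final String id;
        private final String title;
        Book(String id, String title) { this.id = id; this.title = title; }
        public String getId() { return id; }
        public String getTitle() { return title; }
    }

    class BookServiceImpl implements BookService {
        public Book book(String id) { return new Book(id, "Some title"); }
    }

    public class SchemaDemo {
        public static void main(String[] args) {
            GraphQLSchema schema = new GraphQLSchemaGenerator()
                    .withOperationsFromSingleton(new BookServiceImpl(), BookService.class)
                    .generate();
            // The SDL can be printed from the executable schema when needed.
            System.out.println(new SchemaPrinter().print(schema));
        }
    }

Changing the BookService interface changes the contract, and any implementation that falls out of step stops compiling.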
I don't think there is a good or a bad answer to the "how to proceed" question.
There are two different approaches to building your GraphQL server (with graphql-java, graphql-java-tools, or graphql-spqr), and each method has its advantages and drawbacks. All of those libraries provide a Spring Boot starter. Note that I have never used graphql-spqr.
Schema first (with graphql-java or graphql-java-tools)
In this approach you first create an SDL file. The GraphQL library will parse it, and "all" you have to do is wire each GraphQL type to its data fetcher. graphql-java-tools can even do the wiring for you.
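For illustration, the wiring with graphql-java-tools looks roughly like this (an untested sketch with hypothetical names; with the Spring Boot starter the parsing is done for you and you only declare the resolver beans):

    // Given a schema.graphqls file written first, e.g.:
    //   type Query { book(id: ID!): Book }
    //   type Book { id: ID! title: String }

    import graphql.kickstart.tools.GraphQLQueryResolver;
    import graphql.kickstart.tools.SchemaParser;
    import graphql.schema.GraphQLSchema;

    class Book {
        private final String id;
        private final String title;
        Book(String id, String title) { this.id = id; this.title = title; }
        public String getId() { return id; }
        public String getTitle() { return title; }
    }

    // Matched to the Query type's "book" field by name and signature.
    class Query implements GraphQLQueryResolver {
        public Book book(String id) { return new Book(id, "Some title"); }
    }

    public class WiringDemo {
        public static void main(String[] args) {
            GraphQLSchema schema = SchemaParser.newParser()
                    .file("schema.graphqls")
                    .resolvers(new Query())
                    .build()
                    .makeExecutableSchema();
        }
    }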
Advantages
no need to get into the details of how the GraphQL schema is built server-side
you have a nice graphqls schema file that can be read and used by a client, easing the task of building a GraphQL client
you actually define your API first (the SDL schema): changing the implementation of the API will not require any change client-side
Disadvantages
no compile-time checks. If something is not wired properly, an exception will be thrown at runtime. This can be mitigated by using graphql-java-codegen, which will generate the Java classes and interfaces for your GraphQL types, unions, queries, enums, etc.
when using graphql-java (no auto-wiring), I found myself writing long, boring data fetchers, so I switched to graphql-java-tools.
Code first (with graphql-java, graphql-java-tools or graphql-spqr)
The GraphQL schema is built programmatically (through annotations with graphql-spqr, or by building a GraphQLSchema object in graphql-java).
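The programmatic flavour in plain graphql-java looks roughly like this (a minimal single-field sketch; real schemas grow from here):

    import graphql.Scalars;
    import graphql.schema.DataFetcher;
    import graphql.schema.FieldCoordinates;
    import graphql.schema.GraphQLCodeRegistry;
    import graphql.schema.GraphQLFieldDefinition;
    import graphql.schema.GraphQLObjectType;
    import graphql.schema.GraphQLSchema;

    public class ProgrammaticSchema {
        public static void main(String[] args) {
            // Define the Query type and its single "hello" field in code.
            GraphQLObjectType queryType = GraphQLObjectType.newObject()
                    .name("Query")
                    .field(GraphQLFieldDefinition.newFieldDefinition()
                            .name("hello")
                            .type(Scalars.GraphQLString))
                    .build();

            // Attach the data fetcher through the code registry.
            GraphQLCodeRegistry registry = GraphQLCodeRegistry.newCodeRegistry()
                    .dataFetcher(FieldCoordinates.coordinates("Query", "hello"),
                            (DataFetcher<String>) env -> "world")
                    .build();

            GraphQLSchema schema = GraphQLSchema.newSchema()
                    .query(queryType)
                    .codeRegistry(registry)
                    .build();
        }
    }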
Advantages
compile-time checks
no need to maintain both the SDL and the domain classes
Disadvantages
as your schema is generated from your code base, changing your code base will change the API, which might not be great for the clients that depend on it.
This is my opinion on those different frameworks, and I would be happy to be shown where I am wrong. The ultimate decision depends on your project: its size, whether there is an existing code base, etc.
Given the option to use Sling Models or a WCM Use class, which one should be preferred, when, and why?
Is either of them better performance-wise?
Thanks in advance
Sling Models save you a lot of time when accessing simple objects such as the current page/resource, injecting properties or services, and adapting from a Resource or a Sling HTTP request to your model. Sure, with the plain API your code will execute a little faster, because you initialize only the objects you really need, but you have to do all of those things "manually". I think this Sightly introduction gives a good overview of all the possible implementations you can go with. You can also have a look at the official Sightly documentation. Below is a quick overview of what you can expect, which will hopefully make your decision easier (quoted from the official Sightly documentation); a short code sketch follows at the end of this answer.
Java Use Provider
Advantages
Use-objects provided through bundles:
faster to initialise and execute than Sling Models for similar code
easy to extend from other similar Use-objects
simple setup for unit testing
Use-objects backed by Resources:
faster to initialise and execute than Sling Models for similar code
easy to override from inheriting components through search path overlay or by using the sling:resourceSuperType property, allowing for greater flexibility
business logic for components sits next to the Sightly scripts where the objects are used
Disadvantages
Use-objects provided through bundles:
lacks flexibility in terms of component overlaying
Use-objects backed by Resources:
cannot extend other Java objects
the Java project might need a different setup to allow running unit tests, since the objects will be deployed like content
Sling Models Use Provider
Advantages
convenient injection annotations for data retrieval
easy to extend from other Sling Models
simple setup for unit testing
Disadvantages
lacks flexibility in terms of component overlaying, relying on service.ranking configurations
If you ask me, I would always take a framework such as Sling Models or Slice, which makes development easier and faster. In the end, the performance impact of using a framework is not really a problem; it would not be the only third-party framework in the project. But if your project is performance-oriented, you could run some tests with all the options you have and decide whether such a framework suits your needs (or just mix both).
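To make the injection convenience concrete, here is a minimal hypothetical Sling Model (all names made up):

    import javax.inject.Inject;
    import org.apache.sling.api.resource.Resource;
    import org.apache.sling.models.annotations.Default;
    import org.apache.sling.models.annotations.Model;
    import org.apache.sling.models.annotations.Optional;

    // Adapting the current resource to this class injects its properties;
    // a plain Use class would read the same values from the ValueMap by hand.
    @Model(adaptables = Resource.class)
    public class TeaserModel {

        @Inject
        @Default(values = "No title")
        private String title;

        @Inject
        @Optional
        private String linkUrl;

        public String getTitle() { return title; }
        public String getLinkUrl() { return linkUrl; }
    }

In a Sightly script this would then be used via data-sly-use.teaser="com.example.TeaserModel" and ${teaser.title}.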
Is the Database class just a wrapper for ADO.NET that makes using a database simpler? What are its limits?
Yes - the Database helper is a wrapper around ADO.NET. It is designed to minimize the code that a beginner needs to get started with querying databases, similar to how it's done in PHP. Its limits depend on your point of view. As someone who is just starting to learn web development and databases, you might think the helper is a stroke of genius. As a professional developer, you might not like the fact that it returns dynamic types, or that it doesn't prevent people from dynamically constructing their SQL and potentially opening their application up to SQL injection attacks.
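To make the injection point concrete, here is the usual contrast (sketched in JDBC terms for consistency with the other examples in this thread; "users" is a hypothetical table, and the same two shapes exist with the C# helper):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InjectionDemo {
        // UNSAFE: concatenating user input into the SQL text is exactly the
        // injection risk described above.
        static ResultSet unsafe(Connection conn, String name) throws Exception {
            Statement st = conn.createStatement();
            return st.executeQuery(
                    "SELECT * FROM users WHERE name = '" + name + "'");
        }

        // SAFER: a parameterized query keeps user input out of the SQL text.
        static ResultSet safe(Connection conn, String name) throws Exception {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM users WHERE name = ?");
            ps.setString(1, name);
            return ps.executeQuery();
        }
    }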
I am looking to upgrade an existing Perl web-based application, and I am wondering if there are any suggestions on how to solve a particular problem:
The application is used by several clients, each of whom has a very customized dataset behind the scenes. There is very little overlap in the datasets between clients. However, they all load and use the same software. There are numerous configuration files that tell the software how to process each client and understand its customized dataset.
In essence, there are common functions but different datasets that those functions work upon. I'm looking for a way to abstract the datasets into an ORM. However, most ORMs seem to expect a common dataset behind the scenes. I need to either load the ORM modules dynamically based on the client being used, or dynamically create the ORM structure based on the client.
For example, the software provides View/Edit/Delete functionality, but:
Client A manages tables
Client B manages automobiles
The View function loads configuration files and has custom template files for each client that are relevant to the type of data they are managing.
Any suggestions?
Check out Jorge
See Rose::DB::Object (RDBO).
It supports loading the database structure at runtime via its Loader package. John Siracusa, the author of RDBO, is always kindly responding to questions in #rdbo on irc.perl.org or on the mailing list.
It's also very fast (once loaded) and powerful. I can really recommend it if you have a DB application more complex than any example app.