where and how to write pre/post query action in redwoodJS? - prisma

I'm in my first days of learning RedwoodJS.
I just jumped in from Django.
As far as I know, the Redwood web app flow goes like this:
the web client calls the API (Apollo GraphQL),
and the API side uses the Prisma Client.
In Django, we can write a 'signal' that will be called pre/post query,
e.g. pre-delete, post-delete, pre-add, post-add, etc.
Signals are 'attached' in models.py.
My question: is there any doc that elaborates on how and where to write that kind of 'signal' in RedwoodJS?
It looks like Prisma has a 'middleware' approach for this, but I don't know where and how to use it in RedwoodJS.
Sincerely
-bino-

A Redwood API directory includes Services, used by your GraphQL API or any other place in your backend code. A Service function typically imports the db object, which is the Prisma Client. From there, you can use Prisma Middleware. Here are a couple of related links.
Redwood Services:
https://redwoodjs.com/docs/services
Prisma Middleware:
https://www.prisma.io/docs/concepts/components/prisma-client/middleware
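For example, here is a minimal sketch of registering Prisma middleware where Redwood instantiates the Prisma Client (api/src/lib/db.js in the standard template); the Post model and the empty hooks are placeholders, not part of any real schema:

// api/src/lib/db.js
import { PrismaClient } from '@prisma/client'

export const db = new PrismaClient()

// Middleware runs around every query the client executes.
db.$use(async (params, next) => {
  if (params.model === 'Post' && params.action === 'delete') {
    // pre-delete: runs before the query hits the database
  }
  const result = await next(params) // execute the actual query
  if (params.model === 'Post' && params.action === 'delete') {
    // post-delete: `result` is the record that was deleted
  }
  return result
})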

Related

Wrapping REST API with GraphQL / just using GraphQL

I am working on a project where I will be integrating GraphQL into a backend Express server. Currently, the server's structure is much like an MVC pattern:
Controllers folder
functions that query data from a MySQL database and return it.
ex: a file named Car.js containing functions such as getAllCars() and getCar(id)
Routers folder
endpoints that call the functions in the controllers and return the result to the caller
ex: an endpoint GET /cars that calls getAllCars() and returns the result
I want to wrap GraphQL on top of this and was wondering what the best way to do this is. As far as I know, each GraphQL type has fields and resolvers, and the resolver is the one that gets the data (please correct me if I'm wrong).
So I guess my question is...
If I want to wrap GraphQL on this, in the resolver, do I call the endpoint that will fetch me the data?
If I have a controllers folder that is already handling the data access/modification in the db, can I simply call the controller function in the resolver and not necessarily 'need any endpoints'?
I hope this makes sense, I am still very new to GraphQL and am very excited to work with it.
Thank you!
Please find my answers below
If I want to wrap GraphQL on this, in the resolver, do I call the endpoint that will fetch me the data?
GraphQL is always served from a single endpoint. It is the query that changes, but the endpoint stays the same.
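For illustration, both of the following requests hit the same endpoint; the /graphql path and the schema fields are assumptions, and only the query in the body changes:

// Same endpoint, two different queries
fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ cars { id } }' }),
})

fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ car(id: 1) { id } }' }),
})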
If I have a controllers folder that is already handling the data access/modification in the db, can I simply call the controller function in the resolver and not necessarily 'need any endpoints'?
This is debatable. It is good practice to separate the data access/modification out of the controller, e.g. into a service layer, so that both your REST controllers and your GraphQL resolvers can call it.
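To illustrate the second answer, here is a minimal sketch of resolvers that reuse the existing controller functions directly, with no HTTP round-trip to the REST endpoints (the file path and wiring are assumptions):

// resolvers.js: reuse the existing data-access functions
const { getAllCars, getCar } = require('./controllers/Car')

const resolvers = {
  Query: {
    cars: () => getAllCars(),              // backs a "cars" query field
    car: (_parent, { id }) => getCar(id),  // backs a "car(id)" query field
  },
}

module.exports = resolvers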

Strapi: Initialize / populate database

When I deploy Strapi to a new server, I want to create and populate the database tables (PostgreSQL), particularly categories. How do I access production config, and create tables and category entries?
A hint on how to approach this would be much appreciated!
I know this is an old question, but I recently came upon the same issue.
Basically, you should create the collections first, which results in the creation of models. Of course, you could also create the models manually.
In the recent documentation you will find a section about a bootstrap function.
docs bootstrap
The function is called at the start of the server.
The docs list the following use cases:
Create an admin user if there isn't one.
Fill the database with some necessary data.
Load some environment variables.
The bootstrap function can be synchronous or asynchronous.
A great example can be found in the plugin strapi-plugin-users-permissions.
You can implement a new service or overwrite a function of an existing plugin.
The function initialize is implemented here: async initialize
and is called in the bootstrap function here:
await ...initialize()
The initialize function is used to populate the database with the two roles,
Authenticated and Public.
Hope that helps whoever stumbles upon this question.
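As a concrete illustration, here is a minimal sketch of such a bootstrap function (Strapi v3 layout; the category model and the seed entry are assumptions):

// config/functions/bootstrap.js
module.exports = async () => {
  // Fill the database with some necessary data, but only on an empty DB.
  const count = await strapi.query('category').count()
  if (count === 0) {
    await strapi.query('category').create({ name: 'Default category' })
  }
}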

Using GraphQL strictly as a query language

I think that my problem is a common one, and I'm weighing the costs and benefits of GraphQL as a solution.
I work on a product whose data is stored by a monolithic CRUD-based REST API. We have components of our application that expose a search interface for data, and of course need some kind of server-side support for making requests for that data. This could include sorting, filtering, choosing fields, etc. There are, of course, more traditional ways of providing these functions in a REST context, like query parameter add-ons for endpoints, but it would be cool to try out GraphQL in this context to build a foundation for expanding its use for querying a bit.
GraphQL exposes a really nice query language for searching on data, and ultimately allows me to tailor the language of search specifically to my domain. However, I'm not sure if there is a great way to leverage the IDL without managing a separate server altogether.
Take the following Java Jersey API Proof-of-Concept example:
@GET
@Path("/api/v1/search")
public Response search(QueryIDL query) throws IOException {
    final SchemaParser schemaParser = new SchemaParser();
    TypeDefinitionRegistry typeDefinitionRegistry = // load schema
    RuntimeWiring runtimeWiring = // wire up data-fetching classes
    SchemaGenerator schemaGenerator = new SchemaGenerator();
    GraphQLSchema graphQLSchema =
        schemaGenerator.makeExecutableSchema(typeDefinitionRegistry, runtimeWiring);
    GraphQL build = GraphQL.newGraphQL(graphQLSchema).build();
    ExecutionResult executionResult = build.execute(query.toString());
    return Response.ok(executionResult.getData()).build();
}
I am just planning to take a request body into my Jersey server that looks exactly like the request that would be sent to a GraphQL server. I'm then leveraging some library support to interpret and execute the request for data.
Without really thinking too much about everything that could go wrong, it looks like a client would be able to use this API similar to the way they would use a GraphQL server, except that I don't need to necessarily manage a separate server just to facilitate my search requirements.
Does it seem valuable, or silly, to use the GraphQL IDL in an endpoint-based context like this?
Apart from the fact that you should not rebuild the schema or the GraphQL instance on each request (there are cases where you may want to rebuild the GraphQL instance, but yours is not one of them), this is pretty much the canonical way of using it. Build the schema and the GraphQL instance once, e.g. when the resource class is constructed, and only call execute per request.
It is rather uncommon to keep a separate server for GraphQL, and it usually gets introduced exactly the way you described - as just another endpoint next to your usual REST endpoints. So your usage is legit - not silly at all :)
Btw, I'm not sure what QueryIDL would be... the query is just a string, so there is no need for a special class.

meteor: use different database for each user

I currently assign a MongoDB to my Meteor app using the env variable
"MONGO_URL": "mongodb://localhost:27017/dbName" when I start the Meteor instance.
So all data gets written to the Mongo database named "dbName".
I am looking for a way to individually set the dbName for each customer upon login, in order to separate their data into different databases.
This is generally unsupported, since the database is defined at startup. However, this thread offers a possible solution:
https://forums.meteor.com/t/switch-database-while-meteor-is-running/4361/6
var database = new MongoInternals.RemoteCollectionDriver("<mongo url>"); // one connection per database
MyCollection = new Mongo.Collection("collection_name", { _driver: database }); // bind the collection to that connection
This would allow you to define the database name in the mongo url but would require a fair bit of extra work to redefine your collections on a customer-by-customer basis.
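As a rough, untested sketch of that extra work (the Mongo URL scheme is an assumption), you could cache one driver per customer database and pass it when defining collections:

// Cache one Mongo connection per customer database
var drivers = {};
function driverFor(dbName) {
  if (!drivers[dbName]) {
    drivers[dbName] = new MongoInternals.RemoteCollectionDriver(
      "mongodb://localhost:27017/" + dbName
    );
  }
  return drivers[dbName];
}
// e.g. new Mongo.Collection("collection_name", { _driver: driverFor(customerDb) })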
Here's another approach that will make your life eternally easier:
Create a generic site with no accounts at mysite.com
When they login at mysite.com, figure out what site they actually belong to and redirect them to customerName.mysite.com and log them in there
Run a separate instance of Meteor configured for a different mongo at each site
nginx might help you with the above.
"It is generally good practice to run separate DBs when offering a B2B solution."
That's a matter of opinion that depends heavily on the platform. Many SaaS providers would argue that point.

Make Orion fetch data from Cosmos and publish

I have set up a subscription between Orion ContextBroker and Cosmos BigData using Cygnus, and data is properly persisted in Cosmos when an update is made to Orion.
But I want to analyze the data in Cosmos and return the results to Orion, and finally access the result data in Orion from "outside".
How would one do this? Of course, I would like the solution I build to be as "automated" as possible, but mostly I just want to solve this problem.
Any advice is much appreciated!
As a general response (since the question is also very general ;), what you need is a process that accesses the information stored in Cosmos (either using HDFS APIs such as WebHDFS or HttpFs, Hive queries, general MapReduce jobs on top of Hadoop, etc.), and then implements the client side of the NGSI API that Orion exposes, in order to inject context elements into Orion based on the information retrieved from Cosmos. The key operation for doing so in the Orion API is updateContext.
The automation degree would depend on how you implement that process. It can be as automated as you want.
EDIT: considering the comments on this answer, I will try to add more detail.
What I mean is to develop a piece of software (let's call it APOS, A Piece Of Software) implementing the following behaviour:
APOS will grab data from Cosmos through any of the interfaces provided by Cosmos, i.e. WebHDFS/HttpFs, Hive, MapReduce jobs, etc.
APOS will process the data to produce some result.
APOS will inject that result into Orion, using the Orion REST API described in the Orion user manual. The updateContext operation is particularly useful for that task (see the sketch at the end of this answer). From a client-server point of view, Orion is a server exposing a REST API and APOS is the client interacting with that server.
It is completely up to you how to implement this APOS and how to orchestrate the flow from 1 to 3 (e.g. it can run in batch mode every midnight, be triggered by user interaction on a web portal, etc.).
At the present moment, FI-WARE doesn't provide any generic enabler to convert from Cosmos data to NGSI, given that each particular realization of steps 1 to 3 above is different and depends on the use case. However, note that there is a software component named Cygnus which implements the opposite path: from NGSI to Cosmos.
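As a minimal sketch of step 3 above (assuming a JavaScript runtime with fetch; the Orion host, entity id/type, and attribute are illustrative assumptions):

// APOS injecting an analysis result into Orion via NGSI v1 updateContext
const payload = {
  contextElements: [
    {
      type: 'AnalyticsResult',
      isPattern: 'false',
      id: 'Result1',
      attributes: [{ name: 'value', type: 'float', value: '42.0' }],
    },
  ],
  updateAction: 'APPEND',
}

fetch('http://orion.example.com:1026/v1/updateContext', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', Accept: 'application/json' },
  body: JSON.stringify(payload),
})
  .then((res) => res.json())
  .then(console.log)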