I'm writing some integration tests for my RESTful API that uses Prisma within.
As part of those tests I'd like to work with a DB that has a known state - an empty one is simplest.
AFAICS, the JavaScript/TypeScript API lacks an equivalent of npx prisma db push --force-reset. How does the community recommend I tackle this requirement in code, without creating a child process?
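For illustration, the kind of helper I'm after would look something like this (a sketch assuming PostgreSQL and the default public schema, clearing every table through Prisma's raw-SQL methods):

```typescript
// resetDb.ts: sketch of a test helper that empties every table.
// Assumes PostgreSQL and the default "public" schema.
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

export async function resetDatabase(): Promise<void> {
  // Find all user tables, skipping Prisma's migration bookkeeping table.
  const tables = await prisma.$queryRaw<{ tablename: string }[]>`
    SELECT tablename FROM pg_tables
    WHERE schemaname = 'public' AND tablename <> '_prisma_migrations'
  `;

  // TRUNCATE ... CASCADE clears the data and resets sequences
  // without dropping the schema itself.
  for (const { tablename } of tables) {
    await prisma.$executeRawUnsafe(
      `TRUNCATE TABLE "public"."${tablename}" RESTART IDENTITY CASCADE`
    );
  }
}
```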
Thanks!
For my first backend project, I decided to create a Content Management System (CMS) with NextJS as my full-stack framework and Prisma as the ORM. As a principle, a customizable CMS must be able to programmatically create, drop and modify tables in the database; however, to my (scarce) knowledge of Prisma (and DBs in general), this can only be achieved with prisma migrate dev or prisma db push. According to the documentation, neither of these CLI tools should, under any circumstances, be used in production.
I've tried ignoring the docs' warnings and running prisma migrate dev programmatically with execSync(), but it detects that it is running in a non-interactive environment and shuts itself down. Even if I were successful, it wouldn't feel right.
This leads me to believe there's another way to manage these tables, but I can't seem to find it. The only alternative that comes to mind is raw SQL (sketched below), which is absolutely possible but doesn't feel right either, given that Prisma is such a robust ORM tool.
My question then is: how can I programmatically and safely manage relational database tables using Prisma in production?
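For concreteness, that raw-SQL fallback would look roughly like this (a sketch; the cms_articles table and its columns are hypothetical):

```typescript
// Sketch of managing tables with raw SQL through Prisma's escape hatch.
// The "cms_articles" table and its columns are hypothetical.
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Create a table for a user-defined content type.
export async function createArticlesTable(): Promise<void> {
  await prisma.$executeRawUnsafe(`
    CREATE TABLE IF NOT EXISTS "cms_articles" (
      id SERIAL PRIMARY KEY,
      title TEXT NOT NULL,
      body TEXT
    )
  `);
}

// Drop it again when the content type is removed from the CMS.
export async function dropArticlesTable(): Promise<void> {
  await prisma.$executeRawUnsafe(`DROP TABLE IF EXISTS "cms_articles"`);
}
```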
How does the Heroku CLI tool manage DBs? What APIs does it use? The tasks I am trying to do from my app are create/delete a Postgres DB, create a dump, and import a dump, using Python code rather than the console or CLI.
There is no publicly defined API for the Heroku Data products, unfortunately. That said, in my experience the paths are fairly stable and can mostly be reasoned out. This CLI plugin might give you a head start on working out the routes you'd need to hit in order to achieve your goals.
I'm working on creating a Kogito BPM Spring Boot project. I'm very happy to see the reduced level of complexity of integrating jBPM into Spring Boot with the help of Kogito. I'm struggling to find answers to my questions, so I'm posting them here:
Kogito is an open-source cloud offering for jBPM. Am I correct?
I see that only MongoDB or Infinispan can be used or supported with Kogito; I can't integrate PostgreSQL with Kogito. Am I correct?
I successfully created the Spring Boot Kogito MongoDB project, and when I placed a .bpmn file in the resources folder, endpoints were created automatically. I was able to access them, run the process, and get a response. But I don't see any entries created in MongoDB; I don't even see the collection being created. The .bpmn contains a simple hello-world flow with start + script task + end nodes. Please help me understand this. Is the RuntimeManager configured for a per-request strategy? How can I change it?
Answers inline.
Kogito is an open-source cloud offering for jBPM. Am I correct?
Kogito is open-source and has jBPM integrated into its codebase to run in a cloud-native environment. In addition, a lot of work has been done so that it can also run with native compilation when used with Quarkus.
I see that only MongoDB or Infinispan can be used or supported with Kogito; I can't integrate PostgreSQL with Kogito. Am I correct?
To date, Kogito has the following add-ons to support persistence:
Infinispan
Postgres
MongoDB
JDBC (so you can extend it to support any database you wish)
See more about it here: https://docs.jboss.org/kogito/release/latest/html_single/#con-persistence_kogito-developing-process-services
But I don't see any entries created in MongoDB
Do you mind sharing a reproducer? Have you taken a look at the example in https://github.com/kiegroup/kogito-examples/tree/stable/process-mongodb-persistence-springboot? That example shows a call to a sub-process that relies on a user task, so the process must be persisted in order to be fired up again on the new request that resolves the task. However, since your process starts and ends in a single request, there's nothing to be persisted in the DB:
Runtime persistence is intended primarily for storing data that is required to resume workflow execution for a particular process instance. Persistence applies to both public and private processes that are not yet complete. Once a process completes, persistence is no longer applied. This persistence behavior means that only the information that is required to resume execution is persisted.
Currently, I'm creating a project that incorporates the MEAN stack, Docker, and Travis CI. I'm using Travis CI to automate builds for unit testing, integration testing, etc., and Docker to help create a test environment. I've already successfully created unit tests thanks to resources via Medium. However, I haven't found many resources on writing integration tests for a MEAN application. I want tests that verify the Angular application receives the expected values when it connects to the Express REST API endpoints, and that the Express application is correctly connected to a MongoDB server. Does anyone have any resources or advice on how to write these tests and execute them in a Dockerized test environment?
Having done something similar myself, just a piece of advice.
Test the services independently: e2e tests for the API server, Selenium tests for the frontend web app. If the Selenium tests run alright against the webpage/app, and the API endpoints work on the local machine, then everything looks to be working. There is nothing magic in Docker. Your local configs should reflect what you're trying to test; avoid overcomplicating things and write the tests yourself.
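For example, an API-level test could look like this (a sketch using supertest with mocha/chai; the /api/users endpoint and its response shape are hypothetical):

```typescript
// users.spec.ts: sketch of an API-level integration test.
// Assumes the Express app is exported without calling .listen(),
// and that a test MongoDB instance is reachable via the app's config.
// The /api/users endpoint and its response shape are hypothetical.
import request from 'supertest';
import { expect } from 'chai';
import app from '../src/app';

describe('GET /api/users', () => {
  it('returns the users stored in MongoDB', async () => {
    const res = await request(app).get('/api/users');
    expect(res.status).to.equal(200);
    expect(res.body).to.be.an('array');
  });
});
```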
Tools often take more time to learn than the actual thing you're trying to accomplish if you do it yourself. Document it adequately so the consumer of the container can replicate your setup with minimal effort.
It's actually pretty hard, good luck.
I guess I should have thought of this before I started my project, but I have successfully built and tested a mini application using the code-first approach, and I am ready to deploy it to a production web server.
I have moved the folder to my staging server and everything works well. I am just curious if there is a suggested deployment strategy?
If I make a change to the application, I don't want to lose all the data when the application is restarted.
Should I just generate the DB scripts from the code-first project and then move it to my server that way?
Any tips and guide links would be useful.
Thanks.
Actually, the database initializer is only for development. Deploying such code to production is the best way to run into trouble. Code-first currently doesn't have any approach for database evolution, so you must manually build change scripts for your database after each new version. The easiest approach is to use the database tools in Visual Studio 2010 Premium and Ultimate: if you have a database with the old schema and a database with the new schema, VS will prepare the change script for you.
Here are the steps I follow.
Comment out any Initialization strategy I'm using.
Generate the database scripts for schema + data for all the tables EXCEPT the EdmMetadata table and run them on the web server. (Of course, if it's a production server, BE CAREFUL about this step. In my case, during development, the data in production and development are identical.)
Commit my solution to Subversion, which then triggers TeamCity to build, test, and deploy to the web server. (Of course, you will have your own method for this step; one way or another, deploy the website to the web server.)
You're all done!
The initializer and the EdmMetadata table are needed for development only.