I'm new to Prisma 2 but have a database working. I have used prisma init and prisma migrate dev to create database tables for my model, and I can interact with the database using the Prisma Client (Prisma 2.22.1).
Usually for a project, I'd have dev, test and prod environments and use env-cmd to set the relevant differences, e.g. connection details for getting to the database.
With prisma 2 however, it seems like there's a single .env file that is used for the database connection details, so I cannot see how to proceed for the different environments.
Note that I'm not meaning different types of database - in this example all are postgresql.
I can see possibilities for getting past this hurdle, for example having a script write a suitable .env file for the required environment as part of running the app, but 'not ideal' really doesn't give that idea the review it deserves. Or getting more computers.
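For illustration, the env-cmd workflow I'd like to keep would look something like this (file names are my own convention; as far as I can tell, the Prisma CLI respects a DATABASE_URL that is already set in the environment over the one in .env):

```shell
# .env.test and .env.prod each define their own DATABASE_URL
# (the variable referenced by the datasource block in schema.prisma)
env-cmd -f .env.test npx prisma migrate dev
env-cmd -f .env.prod node server.js
```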
Any suggestions please for using different databases from the same project? Am I missing something basic or is it deliberately blocked?
I've nearly got a Spring Boot app running on Google Cloud services, connected to a Postgres instance.
I've run through the steps in their guide, located here, and have gotten to the point where I need to set up my variables so the app can find the database instance:
The problem encountered is twofold:
I don't know where and how to set these
My server logs report this error:
meaning that the Spring application is trying to find the database as it would on my local machine.
I set the following values in app.yaml (assuming this is where they should go?):

    runtime: java11
    instance_class: F1

    env_variables:
      SQL_USER: quickstart-user
      SQL_PASSWORD: <password>
      SQL_DATABASE: quickstart_db
      INSTANCE_CONNECTION_NAME: quickstart-instance
So, my question(s) are:
Is this the correct place to set them?
If not, do I need an appengine-web.xml file instead? (And does anyone have an example of what this looks like? I can't find one.)
How do I stop the app from looking for the local database?
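(For reference, my understanding is that on the Spring side these would be consumed via property placeholders in application.properties; the property names below assume the Spring Cloud GCP SQL starter from the guide:)

```properties
spring.cloud.gcp.sql.database-name=${SQL_DATABASE}
spring.cloud.gcp.sql.instance-connection-name=${INSTANCE_CONNECTION_NAME}
spring.datasource.username=${SQL_USER}
spring.datasource.password=${SQL_PASSWORD}
```

With these set, there would be no hard-coded spring.datasource.url pointing at localhost, which I assume is what makes the app look for a local database.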
Thanks
I'm building an API in which I separate my code by responsibility: services, repositories and routes. I'm using TypeORM to connect to the database, and I'm trying to implement unit tests, so I need to decouple my code from TypeORM so that my tests don't depend on it. However, after decoupling, TypeORM is no longer able to connect to the database, which is MongoDB, and returns the following error: ConnectionNotFoundError: Connection "mongo" was not found.
I've been reading about it, and from what I understood TypeORM doesn't fully support MongoDB, which would explain the failure. I'd like to confirm whether that is the case and, if so, what the best alternative would be.
MongoDB is supported by TypeORM (although I have not used it myself).
You can follow a short tutorial here.
What is not supported for Mongo is TypeORM's "Migration" feature; see for example typeorm issue 6695. Migration is an advanced topic and you probably do not need it, but if you do, a workaround is in the stackoverflow answer here.
But this Migration issue has nothing to do with your error. Almost certainly, your getConnection() call or your ormconfig.json is incorrect. Start with a simple project based on the TypeORM Mongo tutorial, nothing else, verify you can connect to MongoDB, and work up from there.
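For example, if the code calls getConnection("mongo"), the configuration has to define a connection with that exact name. A minimal ormconfig.json sketch (host, port and database here are placeholders):

```json
{
  "name": "mongo",
  "type": "mongodb",
  "host": "localhost",
  "port": 27017,
  "database": "test",
  "useUnifiedTopology": true
}
```

Also note that getConnection("mongo") only succeeds after createConnection has actually completed; calling it before the connection is established is another common cause of ConnectionNotFoundError.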
I have deployed various versions of the app previously with no errors.
However, I had to make some modifications to two tables in my AWS RDS instance running PostgreSQL:

    alter table NAME alter column NAME type date using to_date(...)

I ran that directly on AWS RDS and modified the SQLAlchemy column accordingly:

    date = Column(Date)
Any API call that queries these two tables returns Internal Server Error. Meanwhile, this version of the app deploys without errors, and running any SQLAlchemy code that queries either of these two tables from my Python interpreter always returns what's expected.
I have tried several ways to fix this. The only one that worked was removing the line date = Column(Date) from the SQLAlchemy setup file; then there was no 500 on any API call, but of course that doesn't help, as I need that column!
Any help on this will be highly appreciated really...
I have created an SQLDB service instance and bound it to my application. I have created some tables and need to load data into them. If I write an INSERT statement into RUN DDL, I receive a SQL -104 error. How can I run INSERT statements against my SQLDB service instance?
If you need to run your SQL from an application, there are several examples (sample code included) of how to accomplish this at the site listed below:
http://www.ng.bluemix.net/docs/services/SQLDB/index.html#run-a-query-in-java
Additionally, you can execute SQL in the SQL Database Console by navigating to Manage -> Work with Database Objects. More information can be found here:
http://www.ng.bluemix.net/docs/services/SQLDB/index.html#sqldb_005
    Statement s = conn.createStatement(); // conn: an open java.sql.Connection to the SQLDB instance
    s.executeUpdate("CREATE TABLE MYLIBRARY.MYTABLE (NAME VARCHAR(20), ID INTEGER)");
    s.executeUpdate("INSERT INTO MYLIBRARY.MYTABLE (NAME, ID) VALUES ('BlueMix', 123)");
Full Code
Most people do initial database population or migrations when they deploy their application. Often these database commands are language-specific. The poster didn't include the programming language. You can accomplish this in two ways:
Append a bash script that calls the database scripts you uploaded. This project shows how you can call that bash script from within your manifest file as part of doing a cf push.
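A sketch of what that can look like in manifest.yml (the script names are assumptions; the start command depends on your language):

```yaml
applications:
- name: myapp
  command: bash ./db/populate.sh && bash ./start_app.sh
```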
Some languages/frameworks offer a file type or service that is automatically used to populate the database on initial deploy or when you migrate/sync the db. For example, Python's Django offers "fixtures": JSON files that automatically populate your database tables.
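For example, a Django fixture (the app, model and field names here are illustrative) placed in myapp/fixtures/initial_data.json:

```json
[
  {
    "model": "myapp.customer",
    "pk": 1,
    "fields": { "name": "Acme" }
  }
]
```

It can then be loaded with python manage.py loaddata initial_data.json (older Django versions loaded a fixture named initial_data automatically on syncdb).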
The application I develop is deployed to several environments (development, test, staging, production).
While developing, I created the entity model from the existing development database. Everything works fine, but when I wanted to put the application onto the test environment, I realized the following problem:
The structure of the database is identical in all environments, but the database schema changes from environment to environment. For example there's a Customers table in every database. On my local dev machine it has the schema dbo ([dbo].[Customers]), but in the test environment the schema is test ([test].[Customers]), whilst the schema is stag in the staging environment ([stag].[Customers]) and so forth.
So when I deploy the application in the test environment, it gets no data from the database, because Entity Framework expects the data to be found in [dbo].[Customers], but there is no such table, just a [test].[Customers].
I know that I can define a schema other than dbo, but this doesn't help me, because I need a different schema depending on the deployment environment.
Any suggestions?
Somehow I think I'll end up asking my DB admin to change the schema to dbo in every database in each environment...
If you are using code first, you have to use the fluent API approach from the linked question and load the current schema from a configuration file (you will have to modify the configuration for each deployment).
If you are using ObjectContext with EDMX, you can use Model adapter. Another way, which works with DbContext as well, is storing EF metadata in files and executing some code that changes the schema in the ssdl file at application startup.