Prisma post-deployment issues

I'm trying to make my Prisma post-deploy hook work, but for some reason it doesn't generate the prisma.graphql file.
Has anyone experienced this before? I followed the official guidelines for this.
prisma.yml
datamodel: datamodel.prisma
endpoint: ${env:PRISMA_ENDPOINT}
secret: ${env:PRISMA_SECRET}
hooks:
  post-deploy:
    - graphql get-schema --project prisma
.graphqlconfig.yml
projects:
  app:
    schemaPath: "src/schema.graphql"
    extensions:
      endpoints:
        default: "http://localhost:4444"
  prisma:
    schemaPath: "src/generated/prisma.graphql"
    extensions:
      prisma: prisma.yml
My endpoint is the demo server's endpoint from Prisma's website.
The result I'm getting when I run the deploy command is:
post-deploy:
  Running graphql get-schema --project prisma ✔

Here is a workaround that will generate prisma.graphql and automatically update it after prisma deploy:
generate:
  - generator: graphql-schema
    output: ./src/generated/
hooks:
  post-deploy:
    - graphql get-schema -p prisma
    - prisma generate

Since you are running graphql get-schema --project prisma as a post-deploy hook, the errors from that command are not shown. Try putting it in an npm script in package.json and running it there to see what the error is. Most probably the issue is a mismatched graphql package version. If that's the problem, add the following snippet to your package.json file and re-run npm install or yarn install:
"resolutions": {
"graphql": "^14.0.2"
},
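For reference, a minimal sketch of such a debugging script (the get-schema script name is just an illustrative choice):
"scripts": {
  "get-schema": "graphql get-schema --project prisma"
}
Running npm run get-schema then prints the full error output that the post-deploy hook hides.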

Related

Google Cloud, GitHub pipeline with Google Cloud Run

I'm trying to set up a deploy pipeline with GitHub and Google Cloud using Cloud Run, because I'm using Docker containers on the server. This is my GitHub Actions workflow:
name: Build and Deploy to Cloud Run
on:
  push:
    branches:
      - master
env:
  PROJECT_ID: ${{ secrets.RUN_PROJECT }}
  RUN_REGION: us-west2-a
  SERVICE_NAME: helloworld-python
jobs:
  setup-build-deploy:
    name: Setup, Build, and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # Setup gcloud CLI
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'
      - uses: google-github-actions/setup-gcloud@v0
        with:
          version: '390.0.0'
          service_account_email: ${{ secrets.ACC_MAIL }}
          service_account_key: ${{ secrets.RUN_SA_KEY }}
          project_id: ${{ secrets.RUN_PROJECT }}
      # Build and push image to Google Container Registry
      - name: Build
        run: |-
          gcloud builds submit \
            --quiet \
            --tag "gcr.io/$PROJECT_ID/$SERVICE_NAME:$GITHUB_SHA"
      # Deploy image to Cloud Run
      - name: Deploy
        run: |-
          gcloud run deploy "$SERVICE_NAME" \
            --quiet \
            --region "$RUN_REGION" \
            --image "gcr.io/$PROJECT_ID/$SERVICE_NAME:$GITHUB_SHA" \
            --platform "managed" \
            --allow-unauthenticated
Everything seems to be "correct", but the moment I run the workflow, this error appears:
ERROR: (gcloud.builds.submit) The required property [project] is not currently set.
You may set it for your current workspace by running:
$ gcloud config set project VALUE
or it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT]
The project ID is in the RUN_PROJECT secret; I don't know what else to do.
Is there any problem that is keeping this from working?
Edited: Changing the version to 390.0.0 worked, but now I'm receiving this error:
ERROR: (gcloud.builds.submit) Invalid value for [source]: Dockerfile required when specifying --tag
For the first error:
ERROR: (gcloud.builds.submit) The required property [project] is not currently set.
You may set it for your current workspace by running:
$ gcloud config set project VALUE
or it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT]
the gcloud command has not been properly configured.
According to the Authorization section of google-github-actions/setup-gcloud:
This action installs the Cloud SDK (gcloud). To configure its authentication to Google Cloud, use the google-github-actions/auth action.
So, you need to configure it for authorization using any one of the supported methods there.
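A minimal sketch of what that could look like in your workflow, assuming the same RUN_SA_KEY secret holds the service account key JSON (check the actions' READMEs for the current version tags):
steps:
  - name: Checkout
    uses: actions/checkout@v2
  # Authenticate to Google Cloud first...
  - uses: google-github-actions/auth@v1
    with:
      credentials_json: ${{ secrets.RUN_SA_KEY }}
  # ...then set up gcloud, which picks up those credentials
  - uses: google-github-actions/setup-gcloud@v1
    with:
      project_id: ${{ secrets.RUN_PROJECT }}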
For your second error:
ERROR: (gcloud.builds.submit) Invalid value for [source]: Dockerfile required when specifying --tag
the build source (the directory containing your Dockerfile) is missing. You need to specify it in the gcloud builds submit command.
See this relevant SO thread for more details:
Specify Dockerfile for gcloud build submit
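For instance, if the Dockerfile sits at the repository root, passing the current directory as the build source would look like this (adjust the path if your Dockerfile lives elsewhere):
gcloud builds submit . \
  --quiet \
  --tag "gcr.io/$PROJECT_ID/$SERVICE_NAME:$GITHUB_SHA"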

CircleCI cannot find Serverless Framework after serverless installation

I'm trying to use Serverless Compose to deploy multiple services to AWS via CircleCI. I have 3 test services for a POC, and so far deploying these to a personal AWS account from the terminal works just fine. However, when I configure it to go through CircleCI with a config.yml file, I get this error:
Could not find the Serverless Framework CLI installation. Ensure Serverless Framework is installed before continuing.
I'm puzzled because my config.yml file looks like this:
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@3.1.1
  serverless-framework: circleci/serverless-framework@2.0.0
  node: circleci/node@5.0.2
jobs:
  deploy:
    parameters:
      stage:
        type: string
    executor: serverless-framework/default
    steps:
      - checkout
      - aws-cli/install
      - serverless-framework/setup
      - run:
          command: serverless config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
          name: Configure serverless
      - run:
          command: npm install @serverless/compose
          name: Install @serverless/compose
      - run:
          command: serverless deploy --stage << parameters.stage >>
          name: Deploy staging
workflows:
  deploy-staging:
    jobs:
      - node/test:
          version: 17.3.0
      - deploy:
          context: aws-*******-developers
          name: ******-sandbox-use1
          stage: staging
The Serverless Framework is set up and the orb is present, but it says it could not be found. All steps succeed until I get to the Deploy staging step. I've been digging through documentation, but I can't seem to find where it's going wrong with CircleCI. Does anyone know what I may be missing?
Turns out this required a somewhat unusual fix: it's best to remove the following:
The orb serverless-framework: circleci/serverless-framework@2.0.0
The setup step in the job, serverless-framework/setup
The Configure serverless step
Once these are removed, modify the Install @serverless/compose step to run npm install and install all the packages. Then run npx serverless deploy instead of serverless deploy. This fixed the problem for me. A sketch of the trimmed-down job follows.
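A sketch of what the job might look like after those removals, assuming serverless and @serverless/compose are declared in package.json so that a plain npm install pulls them in (node/default is one reasonable substitute for the removed serverless-framework/default executor):
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@3.1.1
  node: circleci/node@5.0.2
jobs:
  deploy:
    parameters:
      stage:
        type: string
    executor: node/default
    steps:
      - checkout
      - aws-cli/install
      - run:
          command: npm install
          name: Install packages
      - run:
          command: npx serverless deploy --stage << parameters.stage >>
          name: Deploy staging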

How can I set up more than one Prisma service, one for testing and one for development?

I want to have a separate database for testing and development. What I’m trying to achieve is to have more than one Prisma service, one for testing and one for normal development.
This is my docker-compose.yml file:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: 'always'
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: ${MONGO_CONNECTION_STRING}
  prisma_testing:
    image: prismagraphql/prisma:1.34
    restart: 'always'
    ports:
      # the container listens on 4466 (per PRISMA_CONFIG below), exposed on host port 4400
      - '4400:4466'
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: ${TEST_MONGO_CONNECTION_STRING}
I can’t find anything in the docs about this. Is there a recommended flow or config for achieving it?
The easiest way would be to change Prisma's endpoint to point to a different Prisma server before you run prisma deploy. As of the time of writing, the prisma CLI has been renamed to prisma1. You can find more details here, so ensure you have prisma1 installed as a devDependency.
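A quick sketch of that endpoint switch, reusing the ${env:...} interpolation shown in the first question's prisma.yml (the variable name PRISMA_ENDPOINT is just an example):
endpoint: ${env:PRISMA_ENDPOINT}
# deploy against the test server, then against the production one
PRISMA_ENDPOINT=http://127.0.0.1:4466 npx prisma1 deploy
PRISMA_ENDPOINT=http://prod-server-ip:4466 npx prisma1 deploy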
Prisma explains how to achieve the same in their docs here.
Follow the steps below:
Install prisma1 as a devDependency: yarn add prisma1 -D
Generate the Prisma client, specifying which config file to use: npx prisma1 generate -p path/to/prisma.test.yml
Run your tests
In your config file(s), specify different endpoints pointing to different Prisma servers; you might want one for testing and another for production.
Contents for the different config files might look as below:
prisma.test.yml (for running your local tests)
endpoint: http://127.0.0.1:4466
datamodel: datamodel.prisma
databaseType: document
secret: u4r4secret
generate:
  - generator: javascript-client
    output: ./generated/prisma-client/
prisma.yml (for production use)
endpoint: http://prod-server-ip:4466
datamodel: datamodel.prisma
databaseType: document
secret: u4r4secret
generate:
  - generator: javascript-client
    output: ./generated/prisma-client/
Also important: don't forget to regenerate the Prisma client before pushing your code to production. A quick and easy way is to use Git hooks.
Suggestion:
Use husky and add a pre-commit hook, which runs before git commit, to ensure your Prisma client always has the production endpoint before you push to production. Add the section below to package.json.
"husky": {
"hooks": {
"pre-commit": "yarn prisma:generate -p path/to/prisma.yml"
}
}
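Note that the pre-commit command assumes a prisma:generate script in package.json; a minimal sketch of one might be:
"scripts": {
  "prisma:generate": "prisma1 generate"
}
With yarn, the -p path/to/prisma.yml argument given in the hook is appended to the script's command line.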

Setting up realms in Keycloak during kubernetes helm install

I'm trying to get Keycloak set up as a Helm chart requirement to run some integration tests. I can bring it up and run it, but I can't figure out how to set up the realm and client I need. I've switched over to the 1.0.0 stable release that came out today:
https://github.com/kubernetes/charts/tree/master/stable/keycloak
I wanted to use the keycloak.preStartScript defined in the chart and run the /opt/jboss/keycloak/bin/kcadm.sh admin script to do this, but apparently by "pre start" they mean before the server is brought up, so kcadm.sh can't authenticate. If I leave out keycloak.preStartScript, I can shell into the Keycloak container and run the kcadm.sh commands I want after it's up and running, but they fail as part of the pre-start script.
Here's my requirements.yaml for my chart:
dependencies:
  - name: keycloak
    repository: https://kubernetes-charts.storage.googleapis.com/
    version: 1.0.0
Here's my values.yaml file for my chart:
keycloak:
  keycloak:
    persistence:
      dbVendor: H2
      deployPostgres: false
    username: 'admin'
    password: 'test'
    preStartScript: |
      /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
      /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true -o
      CID=$(/opt/jboss/keycloak/bin/kcadm.sh create clients -r foo -s clientId=foo -s 'redirectUris=["http://localhost:8080/*"]' -i)
      /opt/jboss/keycloak/bin/kcadm.sh get clients/$CID/installation/providers/keycloak-oidc-keycloak-json
  persistence:
    dbVendor: H2
    deployPostgres: false
A side annoyance is that I need to define the persistence settings in both places, or it either fails or brings up PostgreSQL in addition to Keycloak.
I tried this too and hit the same problem, so I have raised an issue. I prefer to use -Dimport with a realm .json file, but your points suggest a postStartScript option would make sense, so I've included both in the PR on that issue.
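For reference, a sketch of the realm-file import approach: the WildFly-based Keycloak distribution reads a -Dkeycloak.import system property at startup. Assuming the chart lets you append extra server arguments (the extraArgs key here is an assumption; check your chart's values) and that a realm export is mounted into the container, e.g. from a ConfigMap:
keycloak:
  keycloak:
    # assumes /realm/realm-export.json is mounted into the pod
    extraArgs: -Dkeycloak.import=/realm/realm-export.json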
The Keycloak chart has since been updated. Have a look at these PRs:
https://github.com/kubernetes/charts/pull/5887
https://github.com/kubernetes/charts/pull/5950
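If those changes have landed in your chart version, a post-start variant of the values above might look like this (a sketch; verify the exact key name against the chart's values file):
keycloak:
  keycloak:
    postStartScript: |
      /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
      /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true -o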

Gitlab + GKE + Gitlab CI unable to clone Repository

I'm trying to use GitLab CI with a GKE cluster to execute pipelines. I have experience using the Docker runner, but GKE is still pretty new to me. Here's what I did:
Create GKE cluster via Project settings in GitLab.
Install Helm Tiller via GitLab Project settings.
Install GitLab Runner via GitLab Project settings.
Create .gitlab-ci.yml with the following content:
before_script:
  - php -v
standard:
  image: falnyr/php-ci-tools:php-cs-fixer-7.0
  script:
    - php-cs-fixer fix --diff --dry-run --stop-on-violation -v --using-cache=no
lint:7.1:
  image: falnyr/php-ci:7.1-no-xdebug
  script:
    - composer build
    - php vendor/bin/parallel-lint --exclude vendor .
  cache:
    paths:
      - vendor/
Push commit to the repository
The pipeline output is the following:
Running with gitlab-runner 10.3.0 (5cf5e19a)
on runner-gitlab-runner-666dd5fd55-h5xzh (04180b2e)
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image falnyr/php-ci:7.1-no-xdebug ...
Waiting for pod gitlab-managed-apps/runner-04180b2e-project-5-concurrent-0nmpp7 to be running, status is Pending
Running on runner-04180b2e-project-5-concurrent-0nmpp7 via runner-gitlab-runner-666dd5fd55-h5xzh...
Cloning repository...
Cloning into '/group/project'...
remote: You are not allowed to download code from this project.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@git.domain.tld/group/project.git/': The requested URL returned error: 403
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
Now I think I should add a gitlab-ci-token user with a password somewhere, but I'm not sure it is supposed to work like this.
Thanks!
After reading more about the topic, it seems that pipelines should be executed via HTTPS only (not SSH).
I enabled HTTPS communication, and when I execute the pipeline as a user in the project it works without a problem (an admin who is not added to the project throws this error).