I’ve spent several hours scouring forums for how to avoid the dreaded "Your token is invalid. It might have expired or you might be using a token from a different project." error, but to no avail.
My setup:
- Using prisma generate to make a Prisma client and connecting it with graphql-yoga
- Prisma service running on Heroku
- Prisma service pointed to an Amazon RDS instance
This setup works when I’m not using the secret property in my prisma.yml. However, when I add something like secret: mysecret, run prisma deploy, and then use prisma playground to interact with the service, I get the above error. The same thing happens when I manually generate a token with prisma token and use it in an HTTP Authorization header.
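Roughly, that manual attempt looks like this (the endpoint and query are just placeholders for my real service):

prisma token   # prints a JWT signed with the service secret
curl https://my-prisma-server.herokuapp.com/my-service/dev \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token printed above>" \
  --data '{"query":"{ users { id } }"}'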
What am I missing to make this work?
Make sure that you add the secret to your environment variables. E.g. via a .env file:
PRISMA_SECRET="mysecret"
When running Prisma CLI commands, make sure the environment variables are set first, e.g. via something like dotenv:
dotenv -- prisma admin
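Also double-check that prisma.yml reads the secret from the environment rather than hard-coding it. A minimal sketch (the endpoint and datamodel values are placeholders):

endpoint: ${env:PRISMA_ENDPOINT}
datamodel: datamodel.prisma
secret: ${env:PRISMA_SECRET}

The same PRISMA_SECRET then needs to be available to your graphql-yoga server so the client/binding it uses can sign tokens that match what the deployed service expects.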
For more information check out: https://www.prisma.io/docs/prisma-server/authentication-and-security-kke4/#prisma-services
I am currently working on a solution that centralizes connections to external databases.
For that, I initialize a database with Flyway and connect to other PostgreSQL sources with the postgres_fdw extension to create my table projections (with foreign tables).
Everything works fine if I put my credentials, hosts, etc. in my application.yml configuration file (we are on Spring), exposing these values as placeholders and reusing them in my SQL migration scripts. But we want to fetch this data from Vault, where we store all of it.
However, although I have followed the Flyway documentation on Vault integration (https://flywaydb.org/blog/integrating-vault-to-secure-flyway-parameters), I cannot achieve my goal. I've tried putting my placeholders in Vault (my secrets are of the form flyway.placeholders.[...]=) and connecting to my Vault instance with the settings in my configuration file:
spring:
  flyway:
    vault:
      url: https://localhost:8200/v1/
      token: root
      secrets: secret/data/...
but without success. Has anyone had this problem before? Is it possible to retrieve any secret value via placeholders to use in SQL scripts, or do we have to go through the Java API to have a little more flexibility?
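To illustrate what I mean (all names and values here are made up), the foreign-table migration uses Flyway placeholders like this, and the corresponding Vault secret holds keys such as flyway.placeholders.fdw_host, flyway.placeholders.fdw_user and flyway.placeholders.fdw_password:

-- V2__create_foreign_tables.sql
-- Placeholders are resolved by Flyway before the script runs
CREATE SERVER reporting_server FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '${fdw_host}', port '5432', dbname 'reporting');
CREATE USER MAPPING FOR CURRENT_USER SERVER reporting_server
    OPTIONS (user '${fdw_user}', password '${fdw_password}');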
I deploy a web app to a local Cloud Foundry environment. As a database service for my DEV environment I have chosen the marketplace service google-cloudsql-postgres with the plan postgres-db-f1-micro. Using the web UI I created an instance with the name myapp-test-database and referenced it in the CF manifest:
applications:
- name: myapp-test
  services:
  - myapp-test-database
At first, all is fine. I can even redeploy the existing artifact. However, when I build a new version of my app and push it to CF, the injected credentials are updated and the app can no longer access the tables:
PSQLException: ERROR: permission denied for table
The tables are still there, but they're owned by the previous user. They were automatically created by the ORM in the public schema.
While the -OLD application still exists I can retrieve the old username/password from the CF Web UI or $VCAP_SERVICES and drop the tables.
Is this all because of Rolling App Deployments? But then there should be a lot of complaints.
If you are strictly doing a cf push (or restart/restage), the broker isn't involved (Cloud Controller doesn't talk to it), and service credentials won't change.
The only action through cf commands that can modify your credentials is doing an unbind followed by a bind. Many, but not all, service brokers will throw away credentials on unbind and provide new, unique credentials for a bind. This is often desirable so that you can rotate credentials if credentials are compromised.
Where this can be a problem is if you have custom scripts or cf cli plugins to implement rolling deployments. Most tools like this will use two separate application instances, which means you'll have two separate bindings and two separate sets of credentials.
If you must have one set of credentials, you can use a service key to work around this. Service keys are like bindings but are not associated with an application in Cloud Foundry.
The downside of a service key is that it's not automatically exposed to your application through $VCAP_SERVICES the way a binding is. To work around this, you can pass the service key creds into a user-provided service and then bind that to your application, or you can pass them into your application through other environment variables, like DB_URL.
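A rough sketch of that workaround with the cf CLI (the instance, key, and app names are just examples):

cf create-service-key myapp-test-database myapp-test-db-key
cf service-key myapp-test-database myapp-test-db-key   # prints the credentials JSON
cf create-user-provided-service myapp-test-db-ups -p '{"username":"...","password":"...","uri":"..."}'
cf bind-service myapp-test myapp-test-db-ups
cf restage myapp-test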
The other option is to switch away from using scripts and cf cli plugins for blue/green deployment and to use the support that is now built into Cloud Foundry. With cf cli version 7+, cf push has a --strategy option which can be set to rolling to perform a rolling deployment. This does not create multiple application instances and so there would only ever exist one service binding and one set of credentials.
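For example, with the app from the manifest above:

cf push myapp-test --strategy rolling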
Request a static username using the extra bind parameter "username":
cf bind-service my-app-test-CANDIDATE myapp-test-database -c "{\"username\":\"myuser\"}"
With cf7+ it's possible to add parameters to the manifest:
applications:
- name: myapp-test
  services:
  - name: myapp-test-database
    parameters: { "username": "myuser" }
https://docs.cloudfoundry.org/devguide/services/application-binding.html#arbitrary-params-binding
Note: Arbitrary parameters are not supported in app manifests in cf CLI v6.x. Arbitrary parameters are supported in app manifests in cf CLI v7.0 and later.
However, I can't find the new syntax here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block . The syntax I use comes from some other SO question.
Are there any tutorials for creating a service account for GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
... but it is using GCP Container Registry
I do not imagine it should be much different, but I keep getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
BUT the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and am still getting this error.
When I created the service connection I followed these steps from the documentation linked above:
Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
Docker ID: _json_key
Password: Paste the content of azure-pipelines-publisher-oneline.json.
Service connection name: gcr-tutorial
Any advice on this would be appreciated.
I was having the same issue. As #Mexicoder points out, the service account needs the ArtifactRegistryWriter role. In addition, the following wasn't clear to me initially:
The service connection needs to be in the format: https://REGION-docker.pkg.dev/PROJECT-ID (where region is something like 'us-west2')
The repository parameter to the Docker task (Docker#2) needs to be in the form: PROJECT-ID/REPO/IMAGE
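Put together, a hedged sketch of the pipeline step (the service connection, project, repository and image names are examples):

# 'artifact-registry-connection' is a Docker Registry service connection
# whose registry URL is https://us-west2-docker.pkg.dev/my-project-id
steps:
- task: Docker@2
  inputs:
    containerRegistry: 'artifact-registry-connection'
    repository: 'my-project-id/my-repo/my-image'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    tags: 'latest'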
I was able to get it working with the documentation for Container Registry.
My issue was with the repository name.
Also, the main difference when using Artifact Registry is the permission you need to give the IAM service account: use ArtifactRegistryWriter; StorageAdmin will be useless.
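For reference, granting that role with gcloud looks roughly like this (the project ID is a placeholder; the service account name follows the linked tutorial):

gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:azure-pipelines-publisher@my-project-id.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"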
Currently I am using Keycloak on a Postgres DB, and the DB creds are provided via environment files. I wanted to know how I can make Keycloak obtain the DB creds from a key vault, something like Azure Key Vault. Is there any documentation/guideline around it?
As per the official documentation, some of this is already done, but it looks like it's still a work in progress:
To use a vault, a vault provider must be registered within Keycloak. It is possible to either use a built-in provider described below or implement your own provider. See the Server Developer Guide for more information.
To obtain a secret from a vault instead of entering it directly, enter the following specially crafted string into the appropriate field: ${vault.entry-name} where you replace the entry-name with the name of the secret as recognized by the vault.
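As a rough illustration of the built-in files plain-text provider from that page (the directory, realm and key names here are placeholders, and the naming convention is worth double-checking against the docs for your Keycloak version): each secret is a file named <realm>_<key> inside the configured vault directory, for example:

# one secret per file; the file content is the secret value
echo -n "my-smtp-password" > /path/to/vault-dir/myrealm_smtppassword
# referenced inside realm "myrealm" as ${vault.smtppassword}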
https://www.keycloak.org/docs/latest/server_admin/#_vault-administration
https://issues.redhat.com/browse/KEYCLOAK-3205
Velero is installed in the cluster. At installation, Velero was given credentials for the S3 provider with the --secret-file parameter, and everything works fine.
Now I would like to create a new backup location which will use buckets from a different S3 provider. When creating a backup location I pass --config a key/value pair of s3Url=..., but I can't seem to find a way to pass credentials for the new S3 provider. The --secret-file flag that worked at install is not accepted.
As a result, when calling backup create with the new --storage-location, the backup ends up in a failed state.
My question is: how can I give Velero a way to authenticate itself with the new S3 provider? Is it even possible to create a new backup location using an S3 provider different from the one used at Velero install?
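For context, this is roughly what I'm running (bucket, region and URL are placeholders):

velero backup-location create second-location \
  --provider aws \
  --bucket my-other-bucket \
  --config region=us-east-1,s3ForcePathStyle=true,s3Url=https://other-s3.example.com

velero backup create test-backup --storage-location second-location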
As the documentation says:
Velero only supports a single set of credentials per provider. It's not yet possible to use different credentials for different locations, if they're for the same provider.
So I am afraid it is not possible at the moment to add new credentials.
As far as I can see from the Velero issue tracker, they are working on fixing that issue; however, it still hasn't been properly implemented.