Access secret environment variables from Kubernetes in a Next.js app

I'm trying to deploy a Next.js app to GCP with Kubernetes and set up auth with NextAuth and the Keycloak provider. The issue I'm running into is that KEYCLOAK_CLIENT_SECRET and KEYCLOAK_CLIENTID can't be found: all of the other environment variables listed here show up under process.env.XX, but the two with secretKeyRefs come through as undefined.
I have read in multiple places that environment variables need to be available at build time for Next.js, but I'm not sure how to set this up. We have several Node.js apps with this same deployment file and they are able to obtain the secrets. For security reasons I can't add the secrets to any local env files.
Does anyone have experience setting this up? This is what the env section of my deployment.yaml looks like:
env:
- name: NODE_ENV
  value: production
- name: KEYCLOAK_URL
  value: "https://ourkeycloak/auth/"
- name: NEXTAUTH_URL
  value: "https://nextauthpath/api/auth"
- name: NEXTAUTH_SECRET
  value: "11234"
- name: KEYCLOAK_REALM
  value: "ourrealm"
- name: KEYCLOAK_CLIENT_SECRET
  valueFrom:
    secretKeyRef:
      name: vault_client
      key: CLIENT_SECRET
- name: KEYCLOAK_CLIENTID
  valueFrom:
    secretKeyRef:
      name: vault_client
      key: CLIENT_ID
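
A useful first check (a sketch, using the secret and key names from the manifest above, which are presumably placeholders, since Kubernetes object names may not contain underscores) is to confirm that the secret exists in the same namespace as the deployment and carries exactly the keys the secretKeyRefs ask for:

# List the keys stored in the secret (name assumed from the manifest above)
kubectl get secret vault_client -o jsonpath='{.data}'
# Decode one key to verify its value
kubectl get secret vault_client -o jsonpath='{.data.CLIENT_SECRET}' | base64 -d

Note that secretKeyRef values are injected when the container starts: they are visible to server-side code at runtime, but not during next build unless the build itself runs with the same environment.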

Related

Same secret for different services in k8s

I have a situation where I want to use one Opaque secret in two different services; the only difference is that the key should have a different name in each. For example:
service1 should have an env variable named TOKEN with the value SUperPassword111!
service2 should have an env variable named SRV__TOKEN with the same value SUperPassword111!
Is it possible to use the following secret for those two services? Here is the YAML for the secret:
kind: Secret
apiVersion: v1
metadata:
  name: some_secret
immutable: false
data:
  TOKEN: U1VwZXJQYXNzd29yZDExMSEK
type: Opaque
The name of an environment variable is specified within the container spec, while the value is referenced with secretKeyRef, which names the secret to use and the key within that particular secret.
In other words, the name of the environment variable and the key used in the secret are entirely independent. So, if I understood your question correctly, the answer is: yes, it is possible.
See https://kubernetes.io/docs/concepts/configuration/secret/ for a detailed explanation and a full example of referencing a secret from a pod.
Here is a simple excerpt tailored to your question:
container-spec for "service1"
...
containers:
- name: service1
  image: service1-image
  env:
  - name: TOKEN # the name of the env var within your container
    valueFrom:
      secretKeyRef:
        name: some_secret
        key: TOKEN # the key as specified in the secret
...
container-spec for "service2"
...
containers:
- name: service2
  image: service2-image
  env:
  - name: SRV__TOKEN # the name of the env var within your container
    valueFrom:
      secretKeyRef:
        name: some_secret
        key: TOKEN # the key as specified in the secret
...

How can I check if a k8s secret exists in a Helm chart/k8s template, or use a default value?

I have a template part like:
spec:
  containers:
  - name: webinspect-runner-{{ .Values.pipeline.sequence }}
    ...
    env:
    - name: wi_base_url
      valueFrom:
        secretKeyRef:
          name: webinspect
          key: wi-base-url
    - name: wi_type
      valueFrom:
        secretKeyRef:
          name: webinspect
          key: wi-type
The webinspect secret's wi-type key may be missing. In that case I want the container to either not have the wi_type env var at all or (better) get a default value, but k8s just reports CreateContainerConfigError: couldn't find key wi-type in Secret namespace/webinspect and the pod fails to start.
Is there a way to use a default value, or to skip the block if the secret does not exist?
Two options: the first is to add optional: true to the secretKeyRef block(s), which makes Kubernetes simply omit the env var when the secret or key is missing. The second is a much more complex approach using the lookup template function in Helm. Probably go with the first :)
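A minimal sketch of the first option, applied to the template above (only the optional: true line is new):

- name: wi_type
  valueFrom:
    secretKeyRef:
      name: webinspect
      key: wi-type
      optional: true # env var is omitted if the secret or key is absent

If a default value is wanted rather than a missing variable, a hedged sketch of the lookup approach (note that this check only tests whether the secret exists, not the individual key, and that lookup returns an empty map during helm template or --dry-run):

env:
- name: wi_type
{{- if (lookup "v1" "Secret" .Release.Namespace "webinspect").data }}
  valueFrom:
    secretKeyRef:
      name: webinspect
      key: wi-type
{{- else }}
  value: "some-default" # hypothetical default value
{{- end }}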

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my spring boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine; I have a docker container with an env variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable on my GKE setup, these are the steps I completed thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on Gitlab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Set up the volume:
...
volumes:
- name: google-application-credentials-volume
  secret:
    secretName: google-application-credentials
    items:
    - key: application-credentials.json # default key name created by the create secret --from-file command
      path: application-credentials.json
3. Set up the volume mount:
spec:
  containers:
  - name: my-service
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
4. Set up the environment variable:
spec:
  containers:
  - name: my-service
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json
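Putting the four steps together, the combined pod spec looks roughly like this (a sketch; my-service and its image are placeholders):

spec:
  containers:
  - name: my-service
    image: my-service-image # placeholder
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
  volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
      items:
      - key: application-credentials.json
        path: application-credentials.json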
That error means you are trying to access a service that is not enabled or that you are not authorized to use. Are you sure that you enabled access to the Google Vision API?
You can check and enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard, or navigate to APIs & Services from the menu.
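The same check can be done from the command line (a sketch; assumes the gcloud CLI is authenticated against the right project):

# List enabled services and look for the Vision API
gcloud services list --enabled | grep vision
# Enable it if it is missing
gcloud services enable vision.googleapis.com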
Would it help to add the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables, as described in the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"

Can an openshift template parameter refer to the project name in which it is getting deployed?

I am trying to deploy the Kong API Gateway via a template to my OpenShift project. The problem is that Kong seems to be doing some DNS stuff that causes sporadic failures of DNS resolution. The workaround is to use the FQDN (<name>.<project_name>.svc.cluster.local). So, in my template I would like to do:
- env:
  - name: KONG_DATABASE
    value: postgres
  - name: KONG_PG_HOST
    value: "{APP_NAME}.{PROJECT_NAME}.svc.cluster.local"
I am just not sure how to get the current PROJECT_NAME, or if perhaps there is a default set of available parameters...
You can read the namespace (project name) from the Kubernetes downward API into an environment variable and then use that in the value.
See the OpenShift docs here for an example.
Update based on Clayton's comment:
Tested, and the following snippet from the deployment config works.
- env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: EXAMPLE
    value: example.$(MY_POD_NAMESPACE)
Inside the running container:
sh-4.2$ echo $MY_POD_NAMESPACE
testing
sh-4.2$ echo $EXAMPLE
example.testing
In the environment screen of the UI it appears as a string value such as example.$(MY_POD_NAMESPACE)
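To tie this back to the template question: the app name can stay an ordinary template parameter (${APP_NAME}), while the project name comes from the downward API. A hedged sketch, assuming a DeploymentConfig inside a standard OpenShift template (names are placeholders):

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: kong-template # hypothetical name
parameters:
- name: APP_NAME
  value: kong
objects:
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  # ...metadata and the rest of the spec elided...
  spec:
    template:
      spec:
        containers:
        - name: kong
          env:
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          # ${APP_NAME} is substituted when the template is processed;
          # $(MY_POD_NAMESPACE) is resolved when the container starts
          - name: KONG_PG_HOST
            value: ${APP_NAME}.$(MY_POD_NAMESPACE).svc.cluster.local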

For a node app, what's required in the deployment config to connect via cloud proxy?

Trying to connect to a 2nd gen Cloud SQL database from a GCP container.
I created the cloud proxy, but am a bit confused about what my app needs in order to connect via the proxy. My app already connects to 127.0.0.1:3306 with all the needed MySQL connection information, which works fine outside of GCP. When deployed in a GCP container, my app is currently logging connection errors against 127.0.0.1:3306:
Error: connect ECONNREFUSED 127.0.0.1:3306 at Object.exports._errnoException
Any additional sample files available for a simple node app to better understand the needed application config?
The sample below seems to address what wordpress needs, but what do I need for a simple node app?
https://github.com/GoogleCloudPlatform/container-engine-samples/blob/master/cloudsql/cloudsql_deployment.yaml
Related Link:
https://cloud.google.com/sql/docs/mysql/connect-container-engine
Provide 127.0.0.1:3306 as the host address your application uses to access the database.
I have this hard coded in my app.
Because the proxy runs in a second container in the same pod, it appears to your application as localhost, so you use 127.0.0.1:3306 to connect to it.
Right, I have this hard coded in my app
Provide the cloudsql-db-credentials secret to enable the application to log in to the database.
Ok, if I have to add this, what exactly do I add?
For example, assuming the application expected DB_USER and DB_PASSWORD:
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
If your proxy user requires a password, you would also add:
So what variable name would I be using here? Is this asking for the MySQL user name?
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password
What variable is needed here? Is this asking for the MySQL password for the user above?
In the wordpress sample from the link above, I'm trying to figure out what variables are needed for a simple node app.
containers:
- image: wordpress:4.4.2-apache
  name: web
  env:
  - name: WORDPRESS_DB_HOST
    # Connect to the SQL proxy over the local network on a fixed port.
    # Change the [PORT] to the port number used by your database
    # (e.g. 3306).
    value: 127.0.0.1:[PORT]
  # These secrets are required to start the pod.
  # [START cloudsql_secrets]
  - name: WORDPRESS_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
  - name: WORDPRESS_DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  # [END cloudsql_secrets]
  ports:
  - containerPort: 80
    name: wordpress
Thanks!
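
For a simple node app the shape is the same as the wordpress sample: DB_USER and DB_PASSWORD are indeed the MySQL user name and password, read from the cloudsql-db-credentials secret, and the app connects to the proxy sidecar at 127.0.0.1:3306. A hedged sketch of the containers section of the pod spec, with the app image and the instance connection string as placeholders:

containers:
- name: my-node-app
  image: gcr.io/my-project/my-node-app:latest # placeholder
  env:
  # MySQL credentials the node app reads and uses when
  # connecting to 127.0.0.1:3306
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
# The proxy sidecar listens on 127.0.0.1:3306 inside the pod
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.16
  command: ["/cloud_sql_proxy",
            "-instances=[PROJECT]:[REGION]:[INSTANCE]=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
  - name: cloudsql-instance-credentials
    mountPath: /secrets/cloudsql
    readOnly: true
volumes:
- name: cloudsql-instance-credentials
  secret:
    secretName: cloudsql-instance-credentials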