SAML configuration in Kubernetes deployment file - kubernetes

I need to use the SAML 2.0 Authentication (https://www.bookstackapp.com/docs/admin/saml2-auth/) in the Kubernetes deployment file of BookStack.
Is it possible to configure the variables from the above link in the Kubernetes deployment file?
Thanks in advance.

Look into this example - seems it was created just for you :)
https://github.com/BookStackApp/BookStack/issues/1776
Just use your own variables from the saml2-auth page in the deployment file (see the example env entries after the deployment below).
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "bookstack-test-x5"
namespace: "default"
labels:
app: "bookstack-test-x5"
spec:
replicas: 1
selector:
matchLabels:
app: "bookstack-test-x5"
template:
metadata:
labels:
app: "bookstack-test-x5"
spec:
containers:
- name: "bookstack-sha256"
image: "gcr.io/<PATH_TO_MY_CONTAINER>"
env:
- name: "DB_USER"
value: "bookstack22"
- name: "DB_HOST"
value: "127.0.0.1"
- name: DB_PORT
value: "3306"
- name: DB_DATABASE
value: "bookstack"
- name: "DB_PSWD"
value: "my_secret_passowrd"
- name: "APP_DEBUG"
value: "true"
- name: CACHE_DRIVER
value: "database"
- name: SESSION_DRIVER
value: "database"

Related

Kubernetes init container hanging (Init container is running but not ready)

I am facing a weird issue with initContainers in a Kubernetes YAML file. My initContainer shows as running successfully, but it stays in a not-ready state forever. There are no errors in the initContainer logs, and the logs show a successful result. Am I missing anything?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: graphql-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: graphql-engine
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: graphql-engine
    spec:
      initContainers:
      # GraphQL
      - env:
        - name: HASURA_GRAPHQL_ADMIN_SECRET
          value: devsecret
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
        - name: HASURA_GRAPHQL_ENABLE_CONSOLE
          value: "true"
        - name: HASURA_GRAPHQL_JWT_SECRET
          value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
        - name: HASURA_GRAPHQL_LOG_LEVEL
          value: debug
        - name: HASURA_GRAPHQL_UNAUTHORIZED_ROLE
          value: public
        - name: PVX_MLCRAFT_ACTIONS_URL
          value: http://pvx-mlcraft-actions:3010
        image: hasura/graphql-engine:v2.10.1
        name: graphql-engine
        ports:
        - containerPort: 8080
        resources: {}
      restartPolicy: Always
      containers:
      - env:
        - name: AUTH_CLIENT_URL
          value: http://localhost:3000
        - name: AUTH_EMAIL_PASSWORDLESS_ENABLED
          value: "true"
        - name: AUTH_HOST
          value: 0.0.0.0
        - name: AUTH_LOG_LEVEL
          value: debug
        - name: AUTH_PORT
          value: "4000"
        - name: AUTH_SMTP_HOST
          value: smtp.gmail.com
        - name: AUTH_SMTP_PASS
          value: fahkbhcedmwolqzp
        - name: AUTH_SMTP_PORT
          value: "587"
        - name: AUTH_SMTP_SENDER
          value: noreplaypivoxnotifications@gmail.com
        - name: AUTH_SMTP_USER
          value: noreplaypivoxnotifications@gmail.com
        - name: AUTH_WEBAUTHN_RP_NAME
          value: Nhost App
        - name: HASURA_GRAPHQL_ADMIN_SECRET
          value: devsecret
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
        - name: HASURA_GRAPHQL_GRAPHQL_URL
          value: http://graphql-engine:8080/v1/graphql
        - name: HASURA_GRAPHQL_JWT_SECRET
          value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
        - name: POSTGRES_PASSWORD
          value: postgres
        image: nhost/hasura-auth:latest
        name: auth
        ports:
        - containerPort: 4000
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: graphql-engine
spec:
  type: LoadBalancer
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  selector:
    io.kompose.service: graphql-engine
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: auth
spec:
  ports:
  - name: "4000"
    port: 4000
    targetPort: 4000
  selector:
    io.kompose.service: graphql-engine
status:
  loadBalancer: {}
I expect the init container to reach the ready state.
The status field of the initContainer is not what matters here. What you need is an init container that runs to completion. Currently your initContainer keeps running because the image it uses is built to run indefinitely.
Init containers need to be built so that they run their process and then exit with exit code 0. graphql-engine, on the other hand, is a container that runs indefinitely and serves an API.
What are you trying to accomplish with this graphql-engine pod?
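For contrast, an init container that does terminate could simply wait until the graphql-engine Service answers and then exit 0, letting the main containers start afterwards. This is only a sketch under the assumption that waiting for graphql-engine is what you want; the busybox image and the /healthz endpoint are assumptions, not taken from the thread:

initContainers:
- name: wait-for-graphql
  image: busybox:1.36
  # Poll the endpoint until it responds, then exit with code 0 so the pod can continue.
  command: ["sh", "-c", "until wget -qO- http://graphql-engine:8080/healthz; do echo waiting for graphql-engine; sleep 2; done"]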

Kubernetes job to update the environment variables in pod

I want to update/inject existing environment variables in a pod. How can I do this via a Kubernetes Job?
This is my DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: aws-node
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2021-08-30T22:02:00+08:00"
      creationTimestamp: null
      labels:
        k8s-app: aws-node
    spec:
      containers:
      - env:
        - name: DISABLE_METRICS
          value: "false"
        - name: ENABLE_POD_ENI
          value: "false"
        - name: WARM_ENI_TARGET
          value: "1"
        - name: WARM_IP_TARGET
          value: "5"
        name: aws-node
      initContainers:
      - env:
        - name: DISABLE_TCP_EARLY_DEMUX
          value: "false"
        - name: AWS_VPC_K8S_CNI_EXTERNALSNAT
          value: "true"
I want to set ENABLE_POD_ENI to true in the main container and add WARM_IP_TARGET with value 1 to the init container.
How can I do that via a k8s Job?
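One possible sketch, not from the thread: run kubectl patch from a Job whose ServiceAccount is allowed to patch DaemonSets. A strategic merge patch merges containers and env entries by name, so only the listed variables change; the kube-system namespace, the bitnami/kubectl image, and the init container name below are assumptions or placeholders:

kubectl patch daemonset aws-node -n kube-system --patch '
spec:
  template:
    spec:
      containers:
      - name: aws-node
        env:
        - name: ENABLE_POD_ENI
          value: "true"
      initContainers:
      - name: <init-container-name>   # placeholder: use the real init container name
        env:
        - name: WARM_IP_TARGET
          value: "1"
'

Wrapped in a Job, the same command would go into a container running an image that ships kubectl (for example bitnami/kubectl), with a ServiceAccount bound to a Role or ClusterRole that permits patch on daemonsets.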

how to use service name inside kubernetes pod

I want to replace these two values (the ...*** parts, which are IP addresses) with the service name.
Any solution, please?
This is my service; its name is api:
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    targetPort: 8080
    protocol: TCP
    name: api
  selector:
    app: api
And this is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: api
  name: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: einstore/einstore-core:0.1.1
        env:
        - name: APICORE_STORAGE_LOCAL_ROOT
          value: "/home/einstore"
        - name: APICORE_SERVER_NAME
          value: "Einstore - Enterprise AppStore"
        - name: APICORE_SERVER_MAX_UPLOAD_FILESIZE
          value: "50"
        - name: APICORE_DATABASE_HOST
          value: "postgres"
        - name: APICORE_DATABASE_USER
          value: "einstore"
        - name: APICORE_DATABASE_PASSWORD
          value: "einstore"
        - name: APICORE_DATABASE_DATABASE
          value: "einstore"
        - name: APICORE_DATABASE_PORT
          value: "5432"
        - name: APICORE_DATABASE_LOGGING
          value: "false"
        - name: APICORE_JWT_SECRET
          value: "secret"
        - name: APICORE_STORAGE_S3_ENABLED
          value: "false"
        - name: APICORE_STORAGE_S3_BUCKET
          value: "~"
        - name: APICORE_STORAGE_S3_ACCESS_KEY
          value: "~"
        - name: APICORE_STORAGE_S3_REGION
          value: "~"
        - name: APICORE_STORAGE_S3_SECRET_KEY
          value: "~"
        - name: APICORE_SERVER_URL
          value: "http://**.***.*.***:30001/"
When I replace the *** with my machine's IP, everything works. But I need to use the service name there instead, so that I can deploy the app on any other machine.
The first solution looks good to me, but I get this error:
curl http://api.einstore:8080/
curl: (6) Could not resolve host: api.einstore
NB: einstore is the name of my namespace.
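For reference, inside the cluster a Service is reachable via DNS as <service>.<namespace>.svc.cluster.local; the short names api or api.einstore only resolve from pods running inside the cluster (not from the node or your workstation), which would explain the curl error. A sketch of the env entry under that assumption, using the Service port 8080 rather than the NodePort:

        - name: APICORE_SERVER_URL
          value: "http://api.einstore.svc.cluster.local:8080/"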

Kubernetes ConfigMap to write Node details to file

How can I use ConfigMap to write cluster node information to a JSON file?
The command below gives me the node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command to a file.
Then use that file, or the data inside it, to create a ConfigMap.
After creating the ConfigMap you can mount it as a file in your deployment/pod.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: development
        - name: PORT
          value: "3000"
        - name: SOME_VAR
          value: xxx
        image: someimage
        imagePullPolicy: Always
        name: appname
        volumeMounts:
        - name: your-volume-name
          mountPath: "your/path/to/store/the/file"
          readOnly: true
      volumes:
      - name: your-volume-name
        configMap:
          name: your-configmap-name
          items:
          - key: your-filename-inside-pod
            path: your-filename-inside-pod
I added the following configuration to the deployment:
volumeMounts:
- name: your-volume-name
  mountPath: "your/path/to/store/the/file"
  readOnly: true
volumes:
- name: your-volume-name
  configMap:
    name: your-configmap-name
    items:
    - key: your-filename-inside-pod
      path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
Or just create the ConfigMap directly with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
First, save the output of your kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
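Equivalently, the same ConfigMap can be generated straight from the saved file instead of writing the manifest by hand (a sketch using the file and ConfigMap names from above):

kubectl create configmap example-config --from-file=node-info.json

The file name becomes the data key, so node-info.json ends up as the same key as in the manifest above.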
Then remember to add the following lines to the container spec in your pod configuration file:
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
You can also use PodPreset.
PodPreset is an object that enables injecting information, e.g. environment variables, into pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
  - name: DB_PORT
    value: "6379"
  envFrom:
  - configMapRef:
      name: etcd-env-config
But remember that you also have to add the following section to your pod definition, matching your PodPreset and ConfigMap configuration:
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
You can find more information here: podpreset, pod-preset-configuration.

Error creating ReplicaSet - unknown field "replicas" in io.k8s.api.apps.v1.ReplicaSet

Team, I am trying to create a ReplicaSet but am getting this error:
error validating data: [ValidationError(ReplicaSet): unknown field "replicas" in io.k8s.api.apps.v1.ReplicaSet, ValidationError(ReplicaSet): unknown field "selector" in io.k8s.api.apps.v1.ReplicaSet, ValidationError(ReplicaSet.spec): missing required field "selector" in io.k8s.api.apps.v1.ReplicaSetSpec]; if you choose to ignore these errors, turn validation off with --validate=false
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-pod-10sec-via-rc1
  labels:
    app: pod-label
spec:
  template:
    metadata:
      name: test-pod-10sec-via-rc1
      labels:
        app: feature-pod-label
      namespace: test-space
    spec:
      containers:
      - name: main
        image: ubuntu:latest
        command: ["bash"]
        args: ["-xc", "sleep 10"]
        volumeMounts:
        - name: in-0
          mountPath: /in/0
          readOnly: true
      volumes:
      - name: in-0
        persistentVolumeClaim:
          claimName: 123-123-123
          readOnly: true
      nodeSelector:
        kubernetes.io/hostname: node1
replicas: 1
selector:
  matchLabels:
    app: feature-pod-label
You have an indentation issue in your YAML file; replicas and selector belong under spec. The correct YAML is:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-pod-10sec-via-rc1
  labels:
    app: pod-label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: feature-pod-label
  template:
    metadata:
      name: test-pod-10sec-via-rc1
      labels:
        app: feature-pod-label
      namespace: test-space
    spec:
      containers:
      - name: main
        image: ubuntu:latest
        command: ["bash"]
        args: ["-xc", "sleep 10"]
        volumeMounts:
        - name: in-0
          mountPath: /in/0
          readOnly: true
      volumes:
      - name: in-0
        persistentVolumeClaim:
          claimName: 123-123-123
          readOnly: true
      nodeSelector:
        kubernetes.io/hostname: node1