Why do I get "Error: JWT_KEY must be defined"? - jwt

I am trying to run a project that uses Docker and Kubernetes. When I try to run the application, it gives me the following error:
> auth#1.0.0 start
> ts-node-dev src/index.ts
[INFO] 11:03:41 ts-node-dev ver. 2.0.0 (using ts-node ver. 10.8.2, typescript ver. 4.7.4)
Error: JWT_KEY must be defined
at E:\adv\shop_back\auth\src\index.ts:7:11
at Generator.next (<anonymous>)
at E:\adv\shop_back\auth\src\index.ts:8:71
at new Promise (<anonymous>)
at __awaiter (E:\adv\shop_back\auth\src\index.ts:4:12)
at start (E:\adv\shop_back\auth\src\index.ts:5:26)
at Object.<anonymous> (E:\adv\shop_back\auth\src\index.ts:25:1)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Module._compile (E:\adv\shop_back\auth\node_modules\source-map-support\source-map-support.js:521:25)
at Module.m._compile (C:\Users\A\AppData\Local\Temp\ts-node-dev-hook-2465378030590002.js:69:33)
[ERROR] 11:03:48 Error: JWT_KEY must be defined
I tried to create a JWT_KEY using the following command:
kubectl create secret generic jwt-secret --from-literal JWT_KEY=this_is_the_jwt_key
But it responds with a new error:
error: failed to create secret secrets "jwt-secret" already exists
I remember using the kubectl create secret ... command before while testing another project (although with a different value after =, like JWT_KEY=asdf).
Where do these keys get stored? Why doesn't the new program work if a JWT_KEY is already stored?
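(For reference, the existing secret can be inspected, or deleted and recreated with the new value, using standard kubectl commands; jwt-secret and JWT_KEY are the names used above, and base64 --decode assumes a typical Linux/WSL-style shell:)
# show the value currently stored in the cluster
kubectl get secret jwt-secret -o jsonpath='{.data.JWT_KEY}' | base64 --decode
# replace it with the new value
kubectl delete secret jwt-secret
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=this_is_the_jwt_key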
PS: I am on a Windows 10 machine with the Docker Desktop app.
This is my index.ts file:
import mongoose from "mongoose";
import { app } from "./app";

const start = async () => {
  if (!process.env.JWT_KEY) {
    throw new Error("JWT_KEY must be defined");
  }
  if (!process.env.MONGO_URI) {
    throw new Error("MONGO_URI must be defined");
  }
  try {
    await mongoose.connect(process.env.MONGO_URI, {});
    console.log("Connected to MongoDb");
  } catch (err) {
    console.error(err);
  }
  app.listen(3000, () => {
    console.log("Listening on port 3000!!!!!!!!");
  });
};

start();
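One thing the stack trace hints at: the paths (E:\adv\shop_back\auth\...) are local Windows paths, so that particular npm start ran directly on the host, where a Kubernetes secret is never injected into process.env; secrets only become environment variables inside pods. If running the service locally is the intent, the variables would have to be set in that shell first (a PowerShell sketch; the local Mongo URI is only an assumption):
$env:JWT_KEY = "this_is_the_jwt_key"
$env:MONGO_URI = "mongodb://localhost:27017/auth"
npm start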
When I run kubectl get secrets I get:
kubectl get secrets
NAME TYPE DATA AGE
jwt-secret Opaque 1 38d
This is the deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: tester/auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
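To confirm that the value from jwt-secret actually reaches the container defined above (a sketch using standard kubectl; it assumes the auth pod is running), and to pick up a changed secret, which is only read into env vars at container start:
# print the env var inside a running pod of the deployment
kubectl exec deployment/auth-depl -- printenv JWT_KEY
# restart the pods after the secret has been changed
kubectl rollout restart deployment auth-depl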

Related

How to access Redis as a k8s service with NestJS TypeORM's cache server option?

I'd like to deploy my NestJS backend server and Redis on Kubernetes.
In order to split the user service out of the NestJS core service, I would like to run the user service as a Kubernetes service, and also run the cache server for the user DB (referenced by the user service) as a Kubernetes service.
To do that, I set up the user service's database config module like this:
import { Module } from '@nestjs/common'
import { TypeOrmModule, TypeOrmModuleAsyncOptions, TypeOrmModuleOptions } from '@nestjs/typeorm'
import { SnakeNamingStrategy } from 'typeorm-naming-strategies'

let DATABASE_NAME = 'test'
if (process.env.NODE_ENV) {
  DATABASE_NAME = `${DATABASE_NAME}_${process.env.NODE_ENV}`
}

const DB_HOST: string = process.env.DB_HOST ?? 'localhost'
const DB_USERNAME: string = process.env.DB_USERNAME ?? 'user'
const DB_PASSWORD: string = process.env.DB_PASSWORD ?? 'password'
const REDIS_HOST: string = process.env.REDIS_HOST ?? 'localhost'

const databaseConfig: TypeOrmModuleAsyncOptions = {
  useFactory: (): TypeOrmModuleOptions => ({
    type: 'mysql',
    host: DB_HOST,
    port: 3306,
    username: DB_USERNAME,
    password: DB_PASSWORD,
    database: DATABASE_NAME,
    autoLoadEntities: true,
    synchronize: true,
    namingStrategy: new SnakeNamingStrategy(),
    logging: false,
    cache: {
      type: 'redis',
      options: {
        host: REDIS_HOST,
        port: 6379,
      },
    },
    timezone: '+09:00',
  }),
}

@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      ...databaseConfig,
    }),
  ],
})
export class DatabaseModule {}
To deploy to Kubernetes, I used Helm.
The Helm template folders are as follows:
- configmap
- deployment
- pod
- service
The files under those folders are as follows:
// configmap/redis.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis-config: |
    maxmemory 20mb
    maxmemory-policy allkeys-lru
// deployment/user_service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  labels:
    app: user-service
  namespace: default
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: user-service
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - image: {{ .Values.user_service.image }}:{{ .Values.user_service_version }}
          imagePullPolicy: Always
          name: user-service
          ports:
            - containerPort: 50051
              protocol: TCP
          env:
            - name: COGNITO_CLIENT_ID
              value: "some value"
            - name: COGNITO_USER_POOL_ID
              value: "some value"
            - name: DB_HOST
              value: "some value"
            - name: DB_PASSWORD
              value: "some value"
            - name: DB_USERNAME
              value: "some value"
            - name: NODE_ENV
              value: "test"
            - name: REDIS_HOST
              value: "10.100.77.0"
// pod/redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
spec:
  containers:
    - name: redis
      image: redis:latest
      command:
        - redis-server
        - "/redis-master/redis.conf"
      env:
        - name: MASTER
          value: "true"
      ports:
        - containerPort: 6379
          name: redis
      volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /redis-master
          name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-config
        items:
          - key: redis-config
            path: redis.conf
// service/user_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  clusterIP: 10.100.88.0
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 50051
      targetPort: 50051
// service/redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  clusterIP: 10.100.77.0
  selector:
    app: redis
  ports:
    - name: redis
      protocol: TCP
      port: 6379
      targetPort: 6379
With the above YAML files, I install a Helm chart named test.
After installing, the output of kubectl get svc,po,deploy,configmap looks like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 4d4h
service/user-service ClusterIP 10.100.88.0 <none> 50051/TCP 6s
service/redis ClusterIP 10.100.77.0 <none> 6379/TCP 6s
NAME READY STATUS RESTARTS AGE
pod/user-service-78548d4d8f-psbr2 0/1 ContainerCreating 0 6s
pod/redis 0/1 ContainerCreating 0 6s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/user-service 0/1 1 0 6s
NAME DATA AGE
configmap/kube-root-ca.crt 1 4d4h
configmap/redis-config 1 6s
But when I checked the user-service deployment's logs, this error occurred:
[Nest] 1 - 02/07/2023, 7:15:32 AM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (1)...
Error: connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
I also confirmed via the console logs that the REDIS_HOST environment variable is 10.100.77.0 in the user-service's database config, yet the error above shows it trying to connect to localhost.
Is there an error in my setup?
You can use the Service to connect to Redis. To do this, use redis.redis as REDIS_HOST in your application.
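A minimal sketch of that change in the deployment/user_service.yaml env block from the question (assuming the redis Service and user-service run in the same namespace, the bare Service name also resolves; the fully qualified form would be redis.<namespace>.svc.cluster.local):
          env:
            # ...other variables unchanged...
            - name: REDIS_HOST
              value: "redis"   # Service name resolved by cluster DNS instead of a hard-coded clusterIP
Dropping the hard-coded clusterIP fields from the Service manifests and letting Kubernetes assign the IPs also avoids having to keep the env value in sync.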

Cannot use args to pass arguments to a pod

I am unable to pass arguments to a container entry point.
The error I am getting is:
Error: container create failed: time="2022-09-30T12:23:31Z" level=error msg="runc create failed: unable to start container process: exec: \"--base-currency=EUR\": executable file not found in $PATH"
How can I pass an arg containing -- to the args option in a PodSpec?
I have tried using args directly, and as environment variables; neither works. I have also tried the arguments without "--" - still no joy.
The DeploymentConfig is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: market-pricing
spec:
  replicas: 1
  selector:
    matchLabels:
      application: market-pricing
  template:
    metadata:
      labels:
        application: market-pricing
    spec:
      containers:
        - name: market-pricing
          image: quay.io/brbaker/market-pricing:v0.5.0
          args: ["$(BASE)", "$(CURRENCIES)", "$(OUTPUT)"]
          imagePullPolicy: Always
          volumeMounts:
            - name: config
              mountPath: "/app/config"
              readOnly: true
          env:
            - name: BASE
              value: "--base-currency=EUR"
            - name: CURRENCIES
              value: "--currencies=AUD,NZD,USD"
            - name: OUTPUT
              value: "--dry-run"
      volumes:
        - name: config
          configMap:
            name: app-config # Provide the name of the ConfigMap you want to mount.
            items: # An array of keys from the ConfigMap to create as files
              - key: "kafka.properties"
                path: "kafka.properties"
              - key: "app-config.properties"
                path: "app-config.properties"
The Dockerfile is:
FROM registry.redhat.io/ubi9:9.0.0-1640
WORKDIR /app
COPY bin/market-pricing-svc .
CMD ["/app/market-pricing-svc"]
The ConfigMap is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  kafka.properties: |
    bootstrap.servers=fx-pricing-dev-kafka-bootstrap.kafka.svc.cluster.local:9092
    security.protocol=plaintext
    acks=all
  app-config.properties: |
    market-data-publisher=kafka-publisher
    market-data-source=ecb
    reader=time-reader
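A likely explanation, sketched below rather than taken from an accepted answer: the image defines only CMD and no ENTRYPOINT, and when a PodSpec supplies args without command, Kubernetes passes those args to the image's entrypoint (the image CMD is ignored), so the first arg itself ends up being treated as the executable. Setting command explicitly alongside args should avoid that:
      containers:
        - name: market-pricing
          image: quay.io/brbaker/market-pricing:v0.5.0
          command: ["/app/market-pricing-svc"]   # the binary from the Dockerfile
          args: ["$(BASE)", "$(CURRENCIES)", "$(OUTPUT)"]
Alternatively, changing the Dockerfile's CMD to ENTRYPOINT ["/app/market-pricing-svc"] would let the PodSpec keep only args.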

Unable to connect to the mongoDB using docker Kubernetes

I am trying to connect to the default mongo image in Kubernetes. I am running Docker Desktop for Windows. Below is the YAML configuration for the mongo service, which I use in the server code to connect.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27107
I am trying to connect to the Kubernetes instance with the code below:
import express from 'express'
import { json } from 'body-parser'
import mongoose from 'mongoose'

const app = express()
app.use(json())

const start = async () => {
  console.log('Starting servers...!')
  try {
    await mongoose.connect('mongodb://auth-mongo-srv:27017/auth', {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      useCreateIndex: true
    })
    console.log('Connected to MongoDB !')
  } catch (err) {
    console.log(err)
  }
  app.listen(3000, () => {
    console.log('Listening on port :3000 !')
  })
}

start()
Service and pod details:
PS D:\auth> kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-mongo-srv ClusterIP 10.103.93.74 <none> 27017/TCP 5h27m
auth-srv ClusterIP 10.101.151.7 <none> 3000/TCP 5h27m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
PS D:\auth> kubectl get ep auth-mongo-srv
NAME ENDPOINTS AGE
auth-mongo-srv 10.1.0.15:27107 5h28m
I am getting the errors below:
MongooseServerSelectionError: connect ECONNREFUSED 10.103.93.74:27017
at NativeConnection.Connection.openUri (/app/node_modules/mongoose/lib/connection.js:846:32)
at /app/node_modules/mongoose/lib/index.js:351:10
at /app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5
at new Promise (<anonymous>)
at promiseOrCallback (/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)
at Mongoose._promiseOrCallback (/app/node_modules/mongoose/lib/index.js:1149:10)
at Mongoose.connect (/app/node_modules/mongoose/lib/index.js:350:20)
at /app/src/index.ts:28:20
at step (/app/src/index.ts:33:23)
at Object.next (/app/src/index.ts:14:53)
at /app/src/index.ts:8:71
at new Promise (<anonymous>)
at __awaiter (/app/src/index.ts:4:12)
at start (/app/src/index.ts:25:15)
at Object.<anonymous> (/app/src/index.ts:43:1)
at Module._compile (node:internal/modules/cjs/loader:1109:14) {
reason: TopologyDescription {
type: 'Single',
setName: null,
maxSetVersion: null,
maxElectionId: null,
servers: Map(1) { 'auth-mongo-srv:27017' => [ServerDescription] },
stale: false,
compatible: true,
compatibilityError: null,
logicalSessionTimeoutMinutes: null,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
commonWireVersion: null
}
Do I need to run the Mongo DB service locally to connect to the Kubernetes mongo image?
[Screenshot: Docker and Kubernetes versions]
Use the containerPort in the StatefulSet or Deployment to open the port.
Deployment example:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  type: LoadBalancer
  ports:
    - port: 27017
      name: http
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
        version: v1
    spec:
      containers:
        - name: mongo
          image: mongo:latest
          ports:
            - containerPort: 27017
StatefulSet example:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  labels:
    app: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-volume
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
Here is an example StatefulSet for MongoDB.
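Separately from the examples above, the Service in the question maps port 27017 to targetPort: 27107 (digits transposed), which matches the endpoint output 10.1.0.15:27107, while mongod listens on 27017. A quick check and the corrected ports block (a sketch based on the question's own manifests):
kubectl get endpoints auth-mongo-srv   # currently shows 10.1.0.15:27107 instead of :27017
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017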

Kubernetes ConfigMap to write Node details to file

How can I use a ConfigMap to write cluster node information to a JSON file?
The command below gives me node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command to a file, then use the file (or the data inside it) to create a ConfigMap. After creating the ConfigMap, you can mount it as a file in your Deployment/Pod.
For example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
        - env:
            - name: NODE_ENV
              value: development
            - name: PORT
              value: "3000"
            - name: SOME_VAR
              value: xxx
          image: someimage
          imagePullPolicy: Always
          name: appname
          volumeMounts:
            - name: your-volume-name
              mountPath: "your/path/to/store/the/file"
              readOnly: true
      volumes:
        - name: your-volume-name
          configMap:
            name: your-configmap-name
            items:
              - key: your-filename-inside-pod
                path: your-filename-inside-pod
I added the following configuration to the Deployment:
volumeMounts:
  - name: your-volume-name
    mountPath: "your/path/to/store/the/file"
    readOnly: true
volumes:
  - name: your-volume-name
    configMap:
      name: your-configmap-name
      items:
        - key: your-filename-inside-pod
          path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
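Putting the two steps together for this particular question (a sketch; your-configmap-name is the placeholder used above and node-info.json is just an example file name):
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > node-info.json
kubectl create configmap your-configmap-name --from-file=node-info.json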
Or just create the ConfigMap with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
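Once the Pod is running, the data shows up as a regular file inside the container, which can be checked with something like (a sketch; appname is the Deployment name from the example above):
kubectl exec deploy/appname -- cat your/path/to/store/the/file/your-filename-inside-pod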
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
Then remember to add the following lines under the spec section in the pod configuration file:
env:
  - name: NODE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: example-config
        key: node-info.json
You can also use a PodPreset.
A PodPreset is an object that lets you inject information (e.g. environment variables) into pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
    - name: DB_PORT
      value: "6379"
  envFrom:
    - configMapRef:
        name: etcd-env-config
      key: node-info.json
But remember that you also have to add the following:
env:
  - name: NODE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: example-config
        key: node-info.json
to your pod definition, matching your PodPreset and ConfigMap configuration.
You can find more information here: PodPreset, pod-preset-configuration.

kubectl apply -f <spec.yaml> equivalent in fabric8 java api

I was trying to use the io.fabric8 API to create a few resources in Kubernetes using a pod-spec.yaml.
Config config = new ConfigBuilder()
        .withNamespace("ag")
        .withMasterUrl(K8_URL)
        .build();
try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
    LOGGER.info("Master: " + client.getMasterUrl());
    LOGGER.info("Loading File : " + args[0]);
    Pod pod = client.pods().load(new FileInputStream(args[0])).get();
    LOGGER.info("Pod created with name : " + pod.toString());
} catch (Exception e) {
    LOGGER.error(e.getMessage(), e);
}
The above code works if the resource type is Pod, and it works fine for other resource types as well.
But if the YAML has multiple resource types, like a Pod and a Service in the same file, how do I use the fabric8 API?
I tried using client.load(new FileInputStream(args[0])).createOrReplace(); but it crashes with the exception below:
java.lang.NullPointerException
at java.net.URI$Parser.parse(URI.java:3042)
at java.net.URI.<init>(URI.java:588)
at io.fabric8.kubernetes.client.utils.URLUtils.join(URLUtils.java:48)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:208)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:53)
at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:32)
at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:202)
at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:62)
at com.nokia.k8s.InterpreterLanuch.main(InterpreterLanuch.java:66)
The YAML file used:
apiVersion: v1
kind: Pod
metadata:
  generateName: zep-ag-pod
  annotations:
    kubernetes.io/psp: restricted
    spark-app-name: Zeppelin-spark-shared-process
  namespace: ag
  labels:
    app: zeppelin
    int-app-selector: shell-123
spec:
  containers:
    - name: ag-csf-zep
      image: bcmt-registry:5000/zep-spark2.2:9
      imagePullPolicy: IfNotPresent
      command: ["/bin/bash"]
      args: ["-c", "echo Hi && sleep 60 && echo Done"]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        runAsNonRoot: true
  securityContext:
    fsGroup: 2000
    runAsUser: 1510
  serviceAccount: csfzeppelin
  serviceAccountName: csfzeppelin
---
apiVersion: v1
kind: Service
metadata:
  name: zeppelin-service
  namespace: ag
  labels:
    app: zeppelin
spec:
  type: NodePort
  ports:
    - name: zeppelin-service
      port: 30099
      protocol: TCP
      targetPort: 8080
  selector:
    app: zeppelin
You don't need to specify the resource type when loading a file with multiple documents. You simply need to do:
// Load Yaml into Kubernetes resources
List<HasMetadata> result = client.load(new FileInputStream(args[0])).get();
// Apply Kubernetes Resources
client.resourceList(result).inNamespace(namespace).createOrReplace();
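A slightly fuller sketch of the same approach, reusing the config, K8_URL, LOGGER and args from the question (illustrative only, not a drop-in replacement):
Config config = new ConfigBuilder()
        .withNamespace("ag")
        .withMasterUrl(K8_URL)
        .build();
try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
    // Load every document in the YAML (Pod, Service, ...) as a generic resource
    List<HasMetadata> resources = client.load(new FileInputStream(args[0])).get();
    // Create or update all of them in the target namespace
    client.resourceList(resources).inNamespace("ag").createOrReplace();
    for (HasMetadata r : resources) {
        LOGGER.info("Applied " + r.getKind() + " " + r.getMetadata().getName());
    }
} catch (Exception e) {
    LOGGER.error(e.getMessage(), e);
}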