How to pass a JAAS configuration to Kafka via env variables in Kubernetes

I am trying to authenticate my Kafka REST proxy with SASL, but I am having trouble carrying over the configuration from my local Docker Compose setup to Kubernetes.
I am using a JAAS configuration to achieve this.
My JAAS file looks like this:
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="rest"
  password="rest-secret";
};
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="rest"
  password="restsecret";
};
and then in my Docker Compose file I have:
KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
How do I transfer this same logic to Kubernetes?
I have tried passing the env variable like this:
env:
  - name: KAFKA_OPTS
    value: |
      KafkaClient {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="rest"
        password="rest-secret";
      };
      Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="rest"
        password="rest-secret";
      };
but it still fails. Here is what my logs say:
Error: Could not find or load main class KafkaClient
/bin/sh: 3: org.apache.kafka.common.security.plain.PlainLoginModule: not found
/bin/sh: 6: Syntax error: "}" unexpected
Your help will be highly appreciated.

The JAAS text cannot go into KAFKA_OPTS directly: KAFKA_OPTS is appended to the java command line, so the shell tries to interpret the JAAS contents as arguments, which is exactly what the "Could not find or load main class KafkaClient" error shows. KAFKA_OPTS should only carry the -D flag pointing at a file, and the file itself should be mounted into the pod. Save your Kafka JAAS config file as rest_jaas.conf, then execute:
kubectl create secret generic kafka-secret --from-file=rest_jaas.conf
Then in your deployment you insert:
env:
  - name: KAFKA_OPTS
    value: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
volumeMounts:
  - name: kafka-secret
    mountPath: /etc/kafka/secrets/rest_jaas.conf
    subPath: rest_jaas.conf
volumes:
  - name: kafka-secret
    secret:
      secretName: kafka-secret
(Note that with subPath, mountPath must be the full file path; mounting the subPath at the directory /etc/kafka/secrets would place the file at the wrong location.)
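For context, here is a minimal sketch of how those fragments fit together in a single Deployment; the Deployment name, labels, and image are placeholders, not from the original question:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-rest-proxy          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-rest-proxy
  template:
    metadata:
      labels:
        app: kafka-rest-proxy
    spec:
      containers:
        - name: rest-proxy
          image: confluentinc/cp-kafka-rest   # assumed image
          env:
            - name: KAFKA_OPTS
              value: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
          volumeMounts:
            - name: kafka-secret
              mountPath: /etc/kafka/secrets/rest_jaas.conf
              subPath: rest_jaas.conf
      volumes:
        - name: kafka-secret
          secret:
            secretName: kafka-secret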

Related

How can sslcert and sslkey be passed as environment variables in Kubernetes?

I'm trying to make my app connect to my PostgreSQL instance through an encrypted and secure connection.
I've configured my server certificate and generated the client cert and key files.
The following command connects without problems:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=<instance_ip> \
port=5432 \
user=db dbname=dbname"
Unfortunately, I couldn't find a way to pass the client key as a value; I can only pass a file path. Even with psql's standard environment variables this is not possible: https://www.postgresql.org/docs/current/libpq-envars.html
The Go lib/pq driver follows the same specification, and there is no way to pass the cert and key values either: https://pkg.go.dev/github.com/lib/pq?tab=doc#hdr-Connection_String_Parameters
I want to store the client cert and key in environment variables for security reasons; I don't want to store sensitive files in GitHub/GitLab.
Just set the values in your environment and you can read them in an init function:
func init() {
    someKey := os.Getenv("SOME_KEY") // note: `var` is a keyword in Go and cannot be a variable name
}
When you want to set these with K8s, you would do this in a YAML file:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
stringData:
  SOME_KEY: the-value-of-the-key
(stringData takes the value as plain text; with the data field the value must be base64-encoded.)
Then, to inject it into the environment:
envFrom:
  - secretRef:
      name: my-secret
Now when your init function runs it will be able to see SOME_KEY.
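For reference, an equivalent Secret can be created from the command line (same key name as above):
kubectl create secret generic my-secret --from-literal=SOME_KEY=the-value-of-the-key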
If you want to pass a secret as a file, you do something like this:
kubectl create secret generic my-secret-files --from-file=my-secret-file-1.stuff --from-file=my-secret-file-2.stuff
Then in your deployment:
volumes:
  - name: my-secret-files
    secret:
      secretName: my-secret-files
Also in your deployment, under your container:
volumeMounts:
  - name: my-secret-files
    mountPath: /config/
Now your init function would be able to see:
/config/my-secret-file-1.stuff
/config/my-secret-file-2.stuff
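To tie this back to the original question: since lib/pq only accepts file paths for the TLS material, the mounted files can be referenced directly in the connection string. A minimal sketch, assuming the cert files from the question are mounted under /config/ and the instance IP comes from a hypothetical DB_HOST env var:
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
	// lib/pq takes file paths for sslrootcert/sslcert/sslkey,
	// so point them at the files mounted from the Secret.
	connStr := fmt.Sprintf(
		"host=%s port=5432 user=db dbname=dbname sslmode=verify-ca "+
			"sslrootcert=/config/server-ca.pem "+
			"sslcert=/config/client-cert.pem "+
			"sslkey=/config/client-key.pem",
		os.Getenv("DB_HOST"), // assumption: instance IP injected via env
	)
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
One caveat: libpq (and lib/pq) generally refuse a client key file that is group/world-readable, so you may need to set defaultMode on the Secret volume.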

Kubernetes - how to pass truststore path and password to JVM arguments

I need to add a JKS file to my JVM for the SSL handshake with the server. The JKS is mounted in a volume and available to the Docker container. How do I pass the JKS truststore path and password to Spring Boot (the JVM) during startup?
One option, I think, is environment variables (-Djavax.net.ssl.trustStore, -Djavax.net.ssl.trustStorePassword). For OpenShift, the following works, as described in the URL below.
Option 1:
env:
  - name: JAVA_OPTIONS
    value: -Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit
https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/
But I can't seem to find a similar JAVA_OPTIONS environment variable for Kubernetes.
Option 2:
My Dockerfile is:
FROM openjdk:8-jre-alpine
..........
........
ENTRYPOINT ["java", "-jar", "xxx.jar"]
Can this be changed as below, with $JAVA_OPTS set as an environment variable for the JVM via a ConfigMap?
FROM openjdk:8-jre-alpine
..........
........
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar xxx.jar" ]
ConfigMap:
JAVA_OPTS: "-Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
Please suggest whether this would work, or any better solutions. Ideally the password would be stored in a Secret.
A couple of options:
1: You can break it all up: use a Secret to store your credentials as env vars, another Secret to store the keystore (mounted as a file in the container), and a ConfigMap to hold the other Java options as env variables, then use an entrypoint script in your container to validate and mash it all together into the JAVA_OPTS string (see the sketch after the example below).
2: You can put the whole string in a JAVA_OPTS secret that you consume at run-time.
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: myimage
      env:
        - name: JAVA_OPTS
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: JAVA_OPTS
  restartPolicy: Never
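A minimal sketch of the entrypoint approach from option 1; the variable names (TRUSTSTORE_PASSWORD from a Secret, EXTRA_JAVA_OPTS from a ConfigMap) are assumptions for illustration, and the jar name is taken from the question:
#!/bin/sh
# Fail fast if the Secret-provided password is missing.
: "${TRUSTSTORE_PASSWORD:?TRUSTSTORE_PASSWORD must be set}"
# Mash the pieces together into the JAVA_OPTS string.
JAVA_OPTS="${EXTRA_JAVA_OPTS:-} -Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks -Djavax.net.ssl.trustStorePassword=${TRUSTSTORE_PASSWORD}"
exec java $JAVA_OPTS -jar xxx.jar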
I was able to do this in a K8s deployment using the _JAVA_OPTIONS environment variable for a Spring Boot 2.3.x application in a Docker container running Java 8; the HotSpot JVM picks this variable up automatically. (I got the tip for this env var from this SO answer: https://stackoverflow.com/a/11615960/309261.)
env:
  - name: _JAVA_OPTIONS
    value: >
      -Djavax.net.ssl.trustStore=/path/to/truststore.jks
      -Djavax.net.ssl.trustStorePassword=changeit
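Since the question preferred keeping the password in a Secret, the secretKeyRef pattern from the first answer should combine with this; a sketch, with a hypothetical Secret name:
env:
  - name: _JAVA_OPTIONS
    valueFrom:
      secretKeyRef:
        name: truststore-opts   # hypothetical Secret holding the full option string
        key: _JAVA_OPTIONS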

Kubernetes kerberos and kafka secret setup using helm

I have a requirement to consume from Kafka, which uses the SASL_PLAINTEXT protocol.
My application is a Spring Boot app, and I am trying to deploy it in Kubernetes using a Helm chart.
I have the keytab added as a Kubernetes secret, which I mounted as a file using the code below:
apiVersion: v1
kind: Pod
metadata:
  name: service-name
spec:
  volumes:
    - name: Kafka-secret
      secret:
        secretName: kafka-keytab
      emptyDir: {}
  containers:
    - name: redis
      image: redis
      volumeMounts:
        - name: Kafka-secret
          mountPath: "/etc/security"
I am specifying the mounted keytab location in sasl.jaas.config in application.yaml:
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true \
  storeKey=true \
  keyTab="/etc/security/keytabs/kafka-keytab" \
  principal="kafka-client-1#EXAMPLE.COM";
(keyTab is a mounted path on Kubernetes, and kafka-vol is the key name.)
I have a Kerberos setup. Currently I am adding krb5.conf in the Dockerfile as below:
FROM java-jdk:11
ADD service-name.tar /
ADD krb5.conf /etc/krb5.conf
ENTRYPOINT java -Djava.security.krb5.conf=/etc/krb5.conf -jar /<jar-path>
I am getting the error below after starting the pod in Kubernetes:
2019-08-14T09:49:51.949-05:00 [APP/PROC/WEB/0] [OUT] INFO [d3-5b28248c661c] o.a.k.common.network.SaslChannelBuilder o.a.k.c.n.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:119) - ||||||||||||||Failed to create channel due to :
org.apache.kafka.common.KafkaException: Failed to configure SaslClientAuthenticator at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.configure(SaslClientAuthenticator.java:125) at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: Failed to create SaslClient with mechanism GSSAPI
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslClient(SaslClientAuthenticator.java:140)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslClient(SaslClientAuthenticator.java:131) ... 11 common frames omitted
Caused by: org.ietf.jgss.GSSException: Invalid name provided (Mechanism level: KrbException: Cannot locate default realm)
at sun.security.jgss.krb5.Krb5NameElement.getInstance(Krb5NameElement.java:129)
at sun.security.jgss.krb5.Krb5MechFactory.getNameElement(Krb5MechFactory.java:95)
Please let me know if any information is needed. Appreciate any pointers or help regarding this issue.

How to pass environment variables in an envconsul config file?

I read this in the envconsul documentation:
For additional security, tokens may also be read from the environment
using the CONSUL_TOKEN or VAULT_TOKEN environment variables
respectively. It is highly recommended that you do not put your tokens
in plain-text in a configuration file.
So, I have this envconsul.hcl file:
# the settings to connect to the vault server
# "http://10.0.2.2:8200" is the Vault's address on the host machine when using Minikube
vault {
  address = "${env(VAULT_ADDR)}"
  renew_token = false
  retry {
    backoff = "1s"
  }
  token = "${env(VAULT_TOKEN)}"
}

# the settings to find the endpoint of the secrets engine
secret {
  no_prefix = true
  path = "secret/app/config"
}
However, I get this error:
[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Get $%7Benv%28VAULT_ADDR%29%7D/v1/secret/app/config: unsupported protocol scheme "" (retry attempt 1 after "1s")
As I understand it, it cannot do the variable substitution.
If I set "http://10.0.2.2:8200" directly, it works. The same happens with the VAULT_TOKEN var.
If I hardcode the VAULT_ADDR, then I get this error:
[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Error making API request.
URL: GET http://10.0.2.2:8200/v1/secret/app/config
Code: 403. Errors:
* permission denied (retry attempt 2 after "2s")
Is there a way for this file to understand the environmental variables?
EDIT 1
This is my pod.yml file:
---
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  serviceAccountName: vault-auth
  restartPolicy: Never
  # Add the ConfigMap as a volume to the Pod
  volumes:
    - name: vault-token
      emptyDir:
        medium: Memory
    # Populate the volume with config map data
    - name: config
      configMap:
        # `name` here must match the name
        # specified in the ConfigMap's YAML
        # -> kubectl create configmap vault-cm --from-file=./vault-configs/
        name: vault-cm
        items:
          - key: vault-agent-config.hcl
            path: vault-agent-config.hcl
          - key: envconsul.hcl
            path: envconsul.hcl
  initContainers:
    # Vault container
    - name: vault-agent-auth
      image: vault
      volumeMounts:
        - name: vault-token
          mountPath: /home/vault
        - name: config
          mountPath: /etc/vault
      # This assumes Vault running on the local host and K8s running in Minikube using VirtualBox
      env:
        - name: VAULT_ADDR
          value: http://10.0.2.2:8200
      # Run the Vault agent
      args:
        [
          "agent",
          "-config=/etc/vault/vault-agent-config.hcl",
          "-log-level=debug",
        ]
  containers:
    - name: python
      image: myappimg
      imagePullPolicy: Never
      ports:
        - containerPort: 5000
      volumeMounts:
        - name: vault-token
          mountPath: /home/vault
        - name: config
          mountPath: /etc/envconsul
      env:
        - name: HOME
          value: /home/vault
        - name: VAULT_ADDR
          value: http://10.0.2.2:8200
I. Within the container specification, set the environment variables (values in double quotes):
env:
  - name: VAULT_TOKEN
    value: "abcd1234"
  - name: VAULT_ADDR
    value: "http://10.0.2.2:8200"
Then refer to the values in envconsul.hcl:
vault {
  address = ${VAULT_ADDR}
  renew_token = false
  retry {
    backoff = "1s"
  }
  token = ${VAULT_TOKEN}
}
II. Another option is to unseal the vault cluster (with the unseal key which was printed while initializing the vault cluster)
$ vault operator unseal
and then authenticate to the vault cluster using a root token.
$ vault login <your-generated-root-token>
I tried many suggestions and nothing worked until I passed the -vault-token argument to the envconsul command, like this:
envconsul -vault-token=$VAULT_TOKEN -config=/app/config.hcl -secret="/secret/debug/service" env
and in config.hcl it should be like this:
vault {
  address = "http://kvstorage.try.direct:8200"
  token = "${env(VAULT_TOKEN)}"
}
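For -vault-token=$VAULT_TOKEN to work inside a pod, VAULT_TOKEN must be present in the container's environment. One sketch, assuming the token is stored in a Kubernetes Secret named vault-token-secret (a name invented here for illustration):
env:
  - name: VAULT_TOKEN
    valueFrom:
      secretKeyRef:
        name: vault-token-secret   # hypothetical Secret
        key: token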

error: the server doesn't have resource type "svc"

Admins-MacBook-Pro:~ Harshin$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
I am following this document:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?refid=gs_card
I get this error while trying to test my configuration in step 11 of "Configure kubectl for Amazon EKS". My kubeconfig is:
apiVersion: v1
clusters:
- cluster:
    server: ...
    certificate-authority-data: ....
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "kunjeti"
        # - "-r"
        # - "<role-arn>"
      # env:
      #   - name: AWS_PROFILE
      #     value: "<aws-profile>"
Change "name: kubernetes" to actual name of your cluster.
Here is what I did to work through it:
1. Enabled verbose output to make sure the config files were read properly:
kubectl get svc --v=10
2. Modified the file as below:
apiVersion: v1
clusters:
- cluster:
    server: XXXXX
    certificate-authority-data: XXXXX
  name: abc-eks
contexts:
- context:
    cluster: abc-eks
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "abc-eks"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "aws"
I have faced a similar issue; this is not a direct solution but a workaround. Use the AWS CLI to create the cluster rather than the console. As per the documentation, the user or role which creates the cluster will have master access.
aws eks create-cluster --name <cluster name> --role-arn <EKS Service Role> --resources-vpc-config subnetIds=<subnet ids>,securityGroupIds=<security group id>
Make sure that the EKS service role has IAM access (I granted full access, though AssumeRole will probably do).
The EC2 machine role should have eks:CreateCluster and IAM access. Worked for me :)
I had this issue and found it was caused by the default key settings in ~/.aws/credentials.
We have a few AWS accounts for different customers plus a sandbox account for our own testing and research. So our credentials file looks something like this:
[default]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[cpproto]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[sandbox]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
I was messing around in our sandbox account, but the [default] section was pointing to another account.
Once I put the keys for sandbox into the default section, the "kubectl get svc" command worked fine.
It seems we need a way to tell aws-iam-authenticator which keys to use, the same as --profile in the aws CLI.
I guess you should uncomment the "env" item and point it at the right profile in ~/.aws/credentials, because aws-iam-authenticator requires the exact AWS credentials.
Refer this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
To have the AWS IAM Authenticator for Kubernetes always use a specific named AWS credential profile (instead of the default AWS credential provider chain), uncomment the env lines and substitute with the profile name to use.
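Applied to the kubeconfig from the question, that would look something like this (the profile name sandbox is borrowed from the credentials example above):
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "kunjeti"
      env:
        - name: AWS_PROFILE
          value: "sandbox"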