Kubernetes Kerberos and Kafka secret setup using Helm

I have a requirement to consume from Kafka, which uses the SASL_PLAINTEXT protocol.
My application is a Spring Boot app and I am trying to deploy it in Kubernetes using a Helm chart.
I have the keytab added as a Kubernetes secret, which I mounted as a file using the code below:
apiVersion: v1
kind: Pod
metadata:
  name: service-name
spec:
  volumes:
  - name: kafka-secret
    secret:
      secretName: kafka-keytab
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: kafka-secret
      mountPath: "/etc/security"
I am specifying that mounted keytab location in the sasl.jaas.config property in application.yaml:
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true \
  storeKey=true \
  keyTab="/etc/security/keytabs/kafka-keytab" \
  principal="kafka-client-1@EXAMPLE.COM";
(keyTab is the mounted path on Kubernetes, and kafka-vol is the key name.)
I have a Kerberos setup. Currently I am adding krb5.conf in the Dockerfile as below:
FROM java-jdk:11
ADD service-name.tar /
ADD krb5.conf /etc/krb5.conf
ENTRYPOINT java -Djava.security.krb5.conf=/etc/krb5.conf -jar /<jar-path>
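For reference, the same krb5.conf could also be mounted from a ConfigMap instead of being baked into the image; a rough sketch, assuming a ConfigMap named krb5-config created from the same krb5.conf file:
spec:
  volumes:
  - name: krb5-config
    configMap:
      name: krb5-config
  containers:
  - name: service-name
    volumeMounts:
    - name: krb5-config
      mountPath: /etc/krb5.conf   # mount only this one file into the container
      subPath: krb5.conf          # key in the ConfigMap (the original file name)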
I am getting the error below after starting the pod in Kubernetes:
2019-08-14T09:49:51.949-05:00 [APP/PROC/WEB/0] [OUT] INFO [d3-5b28248c661c] o.a.k.common.network.SaslChannelBuilder o.a.k.c.n.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:119) - ||||||||||||||Failed to create channel due to :
org.apache.kafka.common.KafkaException: Failed to configure SaslClientAuthenticator at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.configure(SaslClientAuthenticator.java:125) at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: Failed to create SaslClient with mechanism GSSAPI
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslClient(SaslClientAuthenticator.java:140)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslClient(SaslClientAuthenticator.java:131) ... 11 common frames omitted
Caused by: org.ietf.jgss.GSSException: Invalid name provided (Mechanism level: KrbException: Cannot locate default realm)
at sun.security.jgss.krb5.Krb5NameElement.getInstance(Krb5NameElement.java:129)
at sun.security.jgss.krb5.Krb5MechFactory.getNameElement(Krb5MechFactory.java:95)
Please let me know if any more information is needed. I appreciate any pointers or help regarding this issue.

Related

Not able to start apache-nifi in aks

Hi all, I am working on NiFi and trying to install it in AKS (Azure Kubernetes Service), using NiFi version 1.9.2. Installing it in AKS gives me the following error:
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
sed: preserving permissions for ‘/opt/nifi/nifi-current/conf/sedSFiVwC’: Operation not permitted
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
sed: preserving permissions for ‘/opt/nifi/nifi-current/conf/sedK3S1JJ’: Operation not permitted
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
sed: preserving permissions for ‘/opt/nifi/nifi-current/conf/sedbcm91T’: Operation not permitted
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
sed: preserving permissions for ‘/opt/nifi/nifi-current/conf/sedIuYSe1’: Operation not permitted
NiFi running with PID 28.
The specified run.as user nifi
does not exist. Exiting.
Received trapped signal, beginning shutdown...
Below is my nifi.yml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nifi-core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nifi-core
  template:
    metadata:
      labels:
        app: nifi-core
    spec:
      containers:
      - name: nifi-core
        image: my-azurecr.io/nifi-core-prod:1.9.2
        env:
        - name: NIFI_WEB_HTTP_PORT
          value: "8080"
        - name: NIFI_VARIABLE_REGISTRY_PROPERTIES
          value: "./conf/custom.properties"
        resources:
          requests:
            cpu: "6"
            memory: 12Gi
          limits:
            cpu: "6"
            memory: 12Gi
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: my-nifi-core-conf
          mountPath: /opt/nifi/nifi-current/conf
      volumes:
      - name: my-nifi-core-conf
        azureFile:
          shareName: my-file-nifi-core/nifi/conf
          secretName: my-nifi-secret
          readOnly: false
I have some customization in the NiFi Dockerfile, which copies some config files related to my configuration. When I run the my-azurecr.io/nifi-core-prod:1.9.2 Docker image locally it works as expected, but when I try to run it on AKS it gives the above error. Since it is related to permissions, I have tried with both the nifi user and root in the Dockerfile.
All the required configuration files are provided in the volume my-nifi-core-conf, which is in the same resource group.
Since I am starting NiFi with Docker, my expectation is that it will behave the same regardless of environment, whether on my local machine or in AKS.
But the error also says the nifi user does not exist, even though the official NiFi image sets up that user.
Can anyone help? I can't even start the container in interactive mode, since the pod is not in the Running state. Thanks in advance.
I think you're missing the security context definition for your Kubernetes Pod. The user that NiFi runs under within a Docker container has a specific UID and GID, and given the error message you are getting, I would suspect that because that user is not defined in the Pod's security context, it's not launching as expected.
Have a look at the section of the Kubernetes documentation about security contexts; that should be enough to get you started, and there is a rough sketch after the links below.
I would also have a look at using something like Minikube when testing Kubernetes deployments, as Kubernetes adds a large number of controls around a container engine like Docker.
Security Contexts Docs: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Minikube: https://kubernetes.io/docs/setup/learning-environment/minikube/
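A minimal sketch of what that securityContext could look like, assuming the stock NiFi image's nifi user with UID/GID 1000 (check the actual IDs in your image):
spec:
  securityContext:
    runAsUser: 1000   # UID of the nifi user inside the image (assumed)
    runAsGroup: 1000  # GID of the nifi user (assumed)
    fsGroup: 1000     # gives mounted volumes this group so the nifi user can write to them
  containers:
  - name: nifi-core
    image: my-azurecr.io/nifi-core-prod:1.9.2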
If you never figured this out, I was able to do this by running an initContainer before the main container and changing the directory permissions there.
initContainers:
- name: init1
  image: busybox:1.28
  volumeMounts:
  - name: nifi-pvc
    mountPath: "/opt/nifi/nifi-current"
  command: ["sh", "-c", "chown -R 1000:1000 /opt/nifi/nifi-current"] # or whatever you want to do as root
Update: this does not work with NiFi 1.14.0; it works with 1.13.2.

How to pass JAAS configuration kafka env variables kubernetes

I am trying to authenticate my Kafka REST proxy with SASL, but I am having trouble transferring the configuration from my local Docker Compose setup to Kubernetes.
I am using a JAAS configuration to achieve this. My JAAS file looks like this:
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="rest"
  password="rest-secret";
};
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="rest"
  password="restsecret";
};
and then in my Docker Compose file I have:
KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
How do I transfer this same logic to Kubernetes?
I have tried passing the env variable like this:
env:
- name: KAFKA_OPTS
  value: |
    KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="rest"
      password="rest-secret";
    };
    Client {
      org.apache.zookeeper.server.auth.DigestLoginModule required
      username="rest"
      password="rest-secret";
    };
but it still fails. Here is what my logs say:
Error: Could not find or load main class KafkaClient
/bin/sh: 3: org.apache.kafka.common.security.plain.PlainLoginModule: not found
/bin/sh: 6: Syntax error: "}" unexpected
Your help will be highly appreciated.
Save your Kafka JAAS config file as rest_jaas.conf. Then execute:
kubectl create secret generic kafka-secret --from-file=rest_jaas.conf
Then in your deployment you insert:
env:
- name: KAFKA_OPTS
  value: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
volumeMounts:
- name: kafka-secret
  mountPath: /etc/kafka/secrets/rest_jaas.conf
  subPath: rest_jaas.conf
volumes:
- name: kafka-secret
  secret:
    secretName: kafka-secret
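For orientation, a rough sketch of how those pieces fit together inside a Deployment's pod spec; the container name and image below are placeholders, not from the original answer:
spec:
  containers:
  - name: kafka-rest-proxy                # placeholder container name
    image: <your-rest-proxy-image>        # placeholder image
    env:
    - name: KAFKA_OPTS
      value: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
    volumeMounts:
    - name: kafka-secret
      mountPath: /etc/kafka/secrets/rest_jaas.conf
      subPath: rest_jaas.conf
  volumes:
  - name: kafka-secret
    secret:
      secretName: kafka-secret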

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my Spring Boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine; I have a Docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable on my GKE setup; these are the steps I completed, thanks to "How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes":
1. Create the secret (in my case in my deploy step on Gitlab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Setup the volume:
...
volumes:
- name: google-application-credentials-volume
  secret:
    secretName: google-application-credentials
    items:
    - key: application-credentials.json # default name created by the create secret --from-file command
      path: application-credentials.json
3. Setup the volume mount:
spec:
  containers:
  - name: my-service
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
4. Setup the environment variable:
spec:
  containers:
  - name: my-service
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json
That means you are trying to access a service that is not enabled or that you are not authenticated to use. Are you sure you enabled access to the Google Vision API?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard or navigate to APIs & Services from the menu.
Would it help if you added the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables, as described in the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"

Deploying Node.js apps with Kubernetes

I was trying to deploy a very basic Express app, a small server listening on 8080, on an EC2 server (Ubuntu 16.04), following this tutorial. On that server, a Kubernetes cluster was created with kops 1.8.0.
After that, I created a Dockerfile like the following:
FROM node:carbon
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
# At the end, set the user to use when running this image
USER node
After that, I built the image with docker build -t ccastelli/stupid_server:test1, specified my credentials with docker login -u ccastelli, copied the image ID from docker images, tagged it with docker tag c549618dcd86 org/test:first_try, and pushed it with docker push org/test to a private repository on cloud.docker.com.
After that I created a cluster secret with kubectl create secret docker-registry ccastelli-regcred --docker-server=docker.com --docker-username=ccastelli --docker-password='pass' --docker-email=myemail@gmail.com
After that I created a deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stupid-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: stupid-server
    spec:
      containers:
      - name: stupid-server
        image: org/test:first_try
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: ccastelli-regcred
I see from kubectl get pods that the image transitioned from ErrImagePull to ImagePullBackOff and it's not ready. The Docker container was working on the client instance, but not in the cluster. At this point I'm a bit lost. What am I doing wrong?
Thanks
Edit: error message:
Failed to pull image "org/test:first_try": rpc error: code =
Unknown desc = Error response from daemon: repository pycomio/test not
found: does not exist or no pull access
Your --docker-server should be index.docker.io:
DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
DOCKER_USER=<your Docker Hub username, same as when you `docker login`>
DOCKER_EMAIL=<your Docker Hub email, same as when you `docker login`>
DOCKER_PASSWORD=<your Docker Hub password, same as when you `docker login`>
kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
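Then reference that secret from the pod spec via imagePullSecrets, making sure the name matches the secret you actually created (here myregistrykey, from the command above):
imagePullSecrets:
- name: myregistrykey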

Mounting client.crt, client.key, ca.crt with a service-account or otherwise?

Has anyone used service accounts to mount SSL certificates for accessing the AWS cluster from within a running Job before? How do we do this? I created the Job, and this is the output of the failing container, which is causing the Pod to be in an error state.
Error in configuration:
* unable to read client-cert /client.crt for test-user due to open /client.crt: no such file or directory
* unable to read client-key /client.key for test-user due to open /client.key: no such file or directory
* unable to read certificate-authority /ca.crt for test-cluster due to open /ca.crt: no such file or director
The solution is to create a Secret containing the certs and then have the Job reference it.
Step 1. Create secret:
kubectl create secret generic job-certs --from-file=client.crt --from-file=client.key --from-file=ca.crt
Step 2. Reference the Secret in the Job's manifest. You have to insert the volumes and volumeMounts into the Job:
spec:
  volumes:
  - name: ssl
    secret:
      secretName: job-certs
  containers:
  - ...
    volumeMounts:
    - mountPath: "/etc/ssl"
      name: "ssl"