How to distribute a file across a Presto cluster in Kubernetes

I'm new to Kubernetes. We have a Presto (Starburst) cluster deployed in Kubernetes and we are trying to implement an SSL certificate for the cluster.
Based on the URL below, I have created a keystore (on my local machine) and have to populate its path into 'http-server.https.keystore.path':
https://docs.starburstdata.com/latest/security/internal-communication.html
However, this file has to be distributed across the cluster. If I enter the local path, Kubernetes throws a 'file not found' error. Could you please let me know how to distribute this file across the Presto cluster in Kubernetes?
I have tried creating the keystore as a secret and mounting it as a volume:
kubectl create secret generic presto-keystore --from-file=./keystore.jks
kind: Presto
metadata:
  name: stg-presto
spec:
  clusterDomain: cluster.local
  nameOverride: stg-presto
  additionalVolumes:
    - path: /jks
      volume:
        secret:
          secretName: presto-keystore
  additionalJvmConfigProperties: |
  image:
    name: xxxxx/presto
    pullPolicy: IfNotPresent
    tag: 323-e.8-k8s-0.20
  prometheus:
    enabled: true
    additionalRules:
      - pattern: 'presto.execution<name=TaskManager><>FailedTasks.TotalCount'
        name: 'failed_tasks'
        type: COUNTER
  service:
    type: NodePort
    name: stg-presto
  memory:
    nodeMemoryHeadroom: 30Gi
    xmxToTotalMemoryRatio: 0.9
    heapHeadroomPerNodeRatio: 0.3
    queryMaxMemory: 1Pi
    queryMaxTotalMemoryPerNodePoolFraction: 0.333
  coordinator:
    cpuLimit: "5"
    cpuRequest: "5"
    memoryAllocation: "30Gi"
    image:
      pullPolicy: IfNotPresent
    additionalProperties: |
      http-server.http.enabled=false
      node.internal-address-source=FQDN
      http-server.https.enabled=true
      http-server.https.port=8080
      http-server.https.keystore.path=/jks/keystore.jks
      http-server.https.keystore.key=xxxxxxx
      internal-communication.https.required=true
      internal-communication.https.keystore.path=/jks/keystore.jks
      internal-communication.https.keystore.key=xxxxxxx
I also tried creating a ConfigMap and mounting it as a volume, but I am still getting 'Caused by: java.io.FileNotFoundException: /jks/keystore.jks (No such file or directory)'.
Could you please let me know if I am missing anything.
Thanks

You can create a Secret or ConfigMap from your keystore, mount it as a volume, and then reference that path in your configuration files.
How to create and use a ConfigMap in k8s: here
How to configure a Secret in k8s: here
You can use both in a similar fashion in your Custom Resource, as in any other resource. I see an option additionalVolumes and documentation associated with it here. A sketch of the ConfigMap variant is below.
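For completeness, a minimal sketch of the ConfigMap variant, reusing the keystore.jks from the question (a Secret is usually the better fit for key material):
# Sketch only: create a ConfigMap from the keystore file; binary content
# ends up under binaryData on recent kubectl versions.
kubectl create configmap presto-keystore --from-file=./keystore.jks

# Inspect what was stored before mounting it as a volume.
kubectl get configmap presto-keystore -o yaml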

You can create a secret in K8s and mount it within the Presto deployment using the additionalVolumes property.
Check out the documentation on additionalVolumes at https://docs.starburstdata.com/latest/kubernetes/presto_resource.html

Create a secret from a file:
kubectl create secret generic cluster-keystore --from-file=./docker.cluster.jks
Add the secret in the "additionalVolumes" section of the yaml (per Karol's URL above):
additionalVolumes:
  - path: /jks
    volume:
      secret:
        secretName: "cluster-keystore"
Add the jks file to the coordinator "additionalProperties" section in your yaml:
coordinator:
  cpuRequest: 25
  cpuLimit: 25
  memoryAllocation: 110Gi
  additionalProperties: |
    http-server.https.enabled=true
    http-server.https.port=8443
    http-server.https.keystore.path=/jks/docker.cluster.jks
    http-server.https.keystore.key=xxxxxxxxxxx
    http-server.authentication.type=PASSWORD
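After the coordinator and workers restart, it is worth confirming the file really is at the path the properties point to. A quick check, with the pod name left as a placeholder:
# List the Presto pods, then substitute a real pod name below.
kubectl get pods
kubectl exec <coordinator-pod-name> -- ls -l /jks
# The keystore (e.g. docker.cluster.jks) should show up in this listing;
# if it does not, the additionalVolumes mount is not being applied.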

Related

pod has unbound immediate PersistentVolumeClaims

I deployed the Milvus server using the following etcd values:

etcd:
  enabled: true
  name: etcd
  replicaCount: 3
  pdb:
    create: false
  image:
    repository: "milvusdb/etcd"
    tag: "3.5.0-r7"
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 2379
    peerPort: 2380
  auth:
    rbac:
      enabled: false
  persistence:
    enabled: true
    storageClass:
    accessMode: ReadWriteOnce
    size: 10Gi
  # Enable auto compaction
  # compaction by every 1000 revision
  autoCompactionMode: revision
  autoCompactionRetention: "1000"
  # Increase default quota to 4G
  extraEnvVars:
    - name: ETCD_QUOTA_BACKEND_BYTES
      value: "4294967296"
    - name: ETCD_HEARTBEAT_INTERVAL
      value: "500"
    - name: ETCD_ELECTION_TIMEOUT
      value: "2500"

# Configuration values for the pulsar dependency
# ref: https://github.com/apache/pulsar-helm-chart
I am trying to run the Milvus cluster using Kubernetes on an Ubuntu server.
I used the Helm manifest https://milvus-io.github.io/milvus-helm/
values.yaml:
https://raw.githubusercontent.com/milvus-io/milvus-helm/master/charts/milvus/values.yaml
I checked the PersistentVolumeClaim and there was an error:
no persistent volumes available for this claim and no storage class is set
This error occurs because you don't have a PersistentVolume.
A PVC needs a PV with at least the same capacity as the PVC.
This can be done manually or with a volume provisioner.
The easiest way, some would say, is to use the local storageClass, which uses the disk space of the node where the pod is instantiated and adds a pod affinity so that the pod always starts on the same node and can use the volume on that disk (see the sketch below). In your case you are using 3 replicas. Although it is possible to start all 3 instances on the same node, this is most likely not what you want to achieve with Kubernetes: if that node breaks, you won't have any other instance running on another node.
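If the single-node trade-off is acceptable, the local option boils down to a statically provisioned local PersistentVolume plus a no-provisioner StorageClass. The manifest below is only a sketch; the class name, path, node name and capacity are placeholders, and with replicaCount: 3 you would need one such PV per etcd pod:
# Sketch of a static local PV pinned to one node (placeholders throughout).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-local-pv-0
spec:
  capacity:
    storage: 10Gi                    # must cover the 10Gi requested by the PVC
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/etcd-0          # directory that must already exist on the node
  nodeAffinity:                      # this is what ties pods using the PV to that node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <your-node-name>
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # static provisioning only
volumeBindingMode: WaitForFirstConsumer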
You first need to think about the infrastructure of your cluster: where should the data of the volumes be stored?
A Network File System (NFS) might be a good solution.
In this case you have an NFS somewhere in your infrastructure and all the nodes can reach it,
so you can create a PV which is accessible from all your nodes.
To avoid allocating PVs manually every time, you can install a volume provisioner inside your cluster.
In some clusters I use this one:
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
As I said, you must already have an NFS and configure the provisioner YAML with its path.
It looks like this:
# patch_nfs_details.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nfs-client-provisioner
  name: nfs-client-provisioner
spec:
  template:
    spec:
      containers:
        - name: nfs-client-provisioner
          env:
            - name: NFS_SERVER
              value: <YOUR_NFS_SERVER_IP>
            - name: NFS_PATH
              value: <YOUR_NFS_SERVER_SHARE>
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR_NFS_SERVER_IP>
            path: <YOUR_NFS_SERVER_SHARE>
If you use an NFS without a provisioner, you need to define a StorageClass which is linked to your NFS.
There are a lot of solutions for holding persistent volumes.
Here you can find a list of storage classes:
https://kubernetes.io/docs/concepts/storage/storage-classes/
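For the static NFS route (no provisioner), a PV you create by hand could look roughly like this; the server, share path and class name are placeholders, and etcd.persistence.storageClass in the chart values would have to request the same class:
# Sketch of a statically created NFS PV (placeholders throughout).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-nfs-pv-0
spec:
  capacity:
    storage: 10Gi                   # at least the size requested by the PVC
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-static      # the PVC must ask for the same storageClassName
  nfs:
    server: <YOUR_NFS_SERVER_IP>
    path: <YOUR_NFS_SERVER_SHARE>/etcd-0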
In the end it also depends on where your cluster is provisioned, if you are not managing it yourself.

Kubectl rollout restart gets error: Unable to decode

When I want to restart a deployment with the following command: kubectl rollout restart -n ind-iv -f mattermost-installation.yml, it returns an error: unable to decode "mattermost-installation.yml": no kind "Mattermost" is registered for version "installation.mattermost.com/v1beta1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
The yml file looks like this:
apiVersion: installation.mattermost.com/v1beta1
kind: Mattermost
metadata:
  name: mattermost-iv              # Choose the desired name
spec:
  size: 1000users                  # Adjust to your requirements
  useServiceLoadBalancer: true     # Set to true to use AWS or Azure load balancers instead of an NGINX controller.
  ingressName: *************       # Hostname used for Ingress, e.g. example.mattermost-example.com. Required when using an Ingress controller. Ignored if useServiceLoadBalancer is true.
  ingressAnnotations:
    kubernetes.io/ingress.class: nginx
  version: 6.0.0
  licenseSecret: ""                # Name of a Kubernetes secret that contains Mattermost license. Required only for enterprise installation.
  database:
    external:
      secret: db-credentials       # Name of a Kubernetes secret that contains connection string to external database.
  fileStore:
    external:
      url: **********              # External File Storage URL.
      bucket: **********           # File Storage bucket name to use.
      secret: file-store-credentials
  mattermostEnv:
    - name: MM_FILESETTINGS_AMAZONS3SSE
      value: "false"
    - name: MM_FILESETTINGS_AMAZONS3SSL
      value: "false"
Does anybody have an idea?

How can I add root certs into my existing truststore.jks file using kubectl?

I am new to Kubernetes and trying to add root certs to the truststore.jks file in my existing secret. Using get secret mysecret -o yaml, I am able to view the details of the truststore file inside mysecret, but I am not sure how to replace it with a new truststore file or edit the existing one with the latest root certs. Can anyone help me with the correct command to do this using kubectl?
Thanks
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. There is official documentation: Kubernetes.io: Secrets.
Assuming that you created your secret by:
$ kubectl create secret generic NAME_OF_SECRET --from-file=keystore.jks
You can edit your secret by invoking command:
$ kubectl edit secret NAME_OF_SECRET
It will show you a YAML definition similar to this:
apiVersion: v1
data:
  keystore.jks: HERE_IS_YOUR_JKS_FILE
kind: Secret
metadata:
  creationTimestamp: "2020-02-20T13:14:24Z"
  name: NAME_OF_SECRET
  namespace: default
  resourceVersion: "430816"
  selfLink: /api/v1/namespaces/default/secrets/jks-old
  uid: 0ce898af-8678-498e-963d-f1537a2ac0c6
type: Opaque
To change it to the new keystore.jks you would need to base64-encode it and paste it in place of the old one (HERE_IS_YOUR_JKS_FILE).
You can get a base64 encoded string by:
cat keystore.jks | base64
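One small caveat: GNU base64 wraps its output at 76 characters, and a wrapped string will not paste cleanly into the single-line data field. Disabling wrapping avoids that (flag shown for GNU coreutils; other base64 implementations differ):
base64 -w 0 keystore.jks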
After successfully editing your secret it should give you a message:
secret/NAME_OF_SECRET edited
You can also look at this StackOverflow answer.
It shows a way to replace an existing ConfigMap, but with a little modification it can also be used to replace a Secret!
Example below:
Create a secret with keystore-old.jks:
$ kubectl create secret generic my-secret --from-file=keystore-old.jks
Update it with keystore-new.jks:
$ kubectl create secret generic my-secret --from-file=keystore-new.jks -o yaml --dry-run | kubectl replace -f -
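Two notes on that command, as a sketch rather than a drop-in: newer kubectl versions expect --dry-run=client instead of the bare --dry-run, and the key inside the Secret is taken from the file name, so keeping a stable key name requires the key=path form:
# --dry-run=client for recent kubectl; keep the secret key stable ("keystore.jks")
# even though the local file is called keystore-new.jks.
kubectl create secret generic my-secret \
  --from-file=keystore.jks=./keystore-new.jks \
  --dry-run=client -o yaml | kubectl replace -f -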
Treating keystore.jks as a file allows you to use a volume mount to mount it to a specific location inside a pod.
The example YAML below creates a pod with the secret mounted as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command:
        - sleep
        - "360000"
      volumeMounts:
        - name: secret-volume
          mountPath: "/etc/secret"
  volumes:
    - name: secret-volume
      secret:
        secretName: NAME_OF_SECRET
Take a specific look at:
volumeMounts:
  - name: secret-volume
    mountPath: "/etc/secret"
volumes:
  - name: secret-volume
    secret:
      secretName: NAME_OF_SECRET
This part will mount your secret inside the /etc/secret/ directory. It will be available there under the name keystore.jks.
A word about mounted secrets:
Mounted Secrets are updated automatically
When a secret currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted secret is fresh on every periodic sync.
-- Kubernetes.io: Secrets.
Please let me know if you have any questions regarding that.

How to mount same volume on to all pods in a kubernetes namespace

We have a namespace in Kubernetes where I would like some secrets (files like jks, properties, ts, etc.) to be made available to all the containers in all the pods (we have one JVM per container and one container per pod in each Deployment).
I have created the secrets using kustomization and plan to use them as a volume in the spec of each Deployment, and then volumeMount it into the container of that Deployment. I would like this volume to be mounted on each of the containers deployed in our namespace.
I want to know if kustomize (or anything else) can help me mount this volume on all the Deployments in this namespace.
I have tried the following patchesStrategicMerge:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: myNamespace
spec:
  template:
    spec:
      imagePullSecrets:
        - name: pull-secret
      containers:
        - volumeMounts:
            - name: secret-files
              mountPath: "/secrets"
              readOnly: true
      volumes:
        - name: secret-files
          secret:
            secretName: mySecrets
            items:
              - key: key1
                path: ...somePath
              - key: key2
                path: ...somePath
It requires a name in the metadata section, which does not help me as all my Deployments have different names.
Inject Information into Pods Using a PodPreset
You can use a PodPreset object to inject information like secrets, volume mounts, and environment variables into pods at creation time.
Update, Feb 2021: the PodPreset feature only made it to alpha and was removed in v1.20 of Kubernetes. See the release notes at https://kubernetes.io/docs/setup/release/notes/
The v1alpha1 PodPreset API and admission plugin has been removed with no built-in replacement. Admission webhooks can be used to modify pods on creation. (#94090, #deads2k) [SIG API Machinery, Apps, CLI, Cloud Provider, Scalability and Testing]
PodPreset (https://kubernetes.io/docs/tasks/inject-data-application/podpreset/) is one way to do this, but for that all pods in your namespace should match the label you specify in the PodPreset spec.
Another way (which is more popular) is to use Dynamic Admission Control (https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) and write a mutating webhook in your cluster which will edit your pod spec and add all the secrets you want to mount. Using this you can also make other changes to your pod spec, like mounting volumes, adding labels and more.
Standalone kustomize supports applying one patch to many resources; see the example "Patching multiple resources at once" and the sketch below. The kustomize built into kubectl doesn't support this feature.
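A rough sketch of what such a multi-target patch can look like with standalone kustomize, assuming (hypothetically) that every Deployment names its main container app; strategic-merge patches match containers by name, so Deployments with differently named containers would each need their own patch or a JSON patch:
# kustomization.yaml (sketch; resource file names are placeholders)
resources:
  - deployment-a.yaml
  - deployment-b.yaml
patches:
  - target:
      kind: Deployment            # no name given, so this applies to every Deployment
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: ignored             # required by the patch format, overridden by the target selector
      spec:
        template:
          spec:
            containers:
              - name: app         # must match the container name in each Deployment
                volumeMounts:
                  - name: secret-files
                    mountPath: /secrets
                    readOnly: true
            volumes:
              - name: secret-files
                secret:
                  secretName: mySecrets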
To mount a secret as a volume you need to update the YAML constructs in your pod/deployment manifest files and rebuild them.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - name: my-secret-volume
          mountPath: /etc/secretpath
  volumes:
    - name: my-secret-volume
      secret:
        secretName: my-secret
kustomize (or anything else) will not mount it for you.

How do you get Jinja templates into spinnaker/echo for webhook processing?

I have Spinnaker 1.10.5 deployed to Azure Kubernetes Service using Halyard.
I am trying to get Azure Container Registry webhooks to trigger a pipeline. I found that you can set up echo to allow artifact webhooks using an echo-local.yml like this:
webhooks:
  artifacts:
    enabled: true
    sources:
      - source: azurecr
        templatePath: /path/to/azurecr.jinja
However, I'm stuck on the templatePath value. Since I'm deploying with Halyard into Kubernetes, all the configuration files get mounted as volumes from Kubernetes secrets.
How do I get my Jinja template into my Halyard-deployed echo so it can be used in a custom webhook?
As of Halyard 1.13 there will be the ability to custom mount secrets in Kubernetes
Create a Kubernetes secret with your Jinja template.
apiVersion: v1
kind: Secret
metadata:
  name: echo-webhook-templates
  namespace: spinnaker
type: Opaque
data:
  mytemplate: [base64-encoded-contents-of-template]
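Rather than base64-encoding the template by hand, the same secret can also be created straight from the file, assuming it lives at /path/to/azurecr.jinja as in the question:
# Creates the secret with the key "mytemplate" pointing at the Jinja file.
kubectl create secret generic echo-webhook-templates \
  --namespace spinnaker \
  --from-file=mytemplate=/path/to/azurecr.jinja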
Set the templatePath in the ~/.hal/default/profiles/echo-local.yml to the place you're mounting the secret.
webhooks:
  artifacts:
    enabled: true
    sources:
      - source: mysource
        templatePath: /mnt/webhook-templates/mytemplate
Add the mount to ~/.hal/default/service-settings/echo.yml
kubernetes:
  volumes:
    - id: echo-webhook-templates
      type: secret
      mountPath: /mnt/webhook-templates
Since Halyard 1.13 hasn't actually been released yet, I obviously haven't tried this, but it's how it should work. Also... I guess I may be stuck until then.