Kubernetes YAML Generator UI, YAML builder for Kubernetes [closed]

Is there any tool, online or self-hosted, that takes all the values as input in a UI and generates the full declarative YAML for the following Kubernetes objects:
Deployment, with init containers, imagePullSecrets and other options
Service
ConfigMap
Secret
Daemonset
StatefulSet
Namespaces and quotas
RBAC resources
Edit:
I have been using kubectl create and kubectl run, but they don't support all the possible configuration options, and you still need to remember the options they do support; in a UI one would be able to pick from the available options for each resource.

The closest is kubectl create .... and kubectl run ...... Run them with -o yaml --dry-run > output.yaml. This won't create the resource, but will write the resource description to the output.yaml file.
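For example (recent kubectl versions spell the flag --dry-run=client; the resource names and image below are just placeholders):
$ kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deployment.yaml
$ kubectl create service nodeport nginx --tcp=80:8080 --dry-run=client -o yaml > service.yaml
$ kubectl create configmap app-config --from-literal=key=value --dry-run=client -o yaml > configmap.yaml
Each command prints a skeleton manifest that you can then edit by hand to add the options kubectl create does not expose.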

Found yipee.io, which supports all the options and resources:
# Generated 2018-10-18T11:07:27.621Z by Yipee.io
# Application: nginx
# Last Modified: 2018-10-18T11:07:27.621Z
apiVersion: v1
kind: Service
metadata:
  namespace: webprod
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 8080
    name: nginx-hhpt
    protocol: TCP
    nodePort: 30003
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: webprod
  annotations:
    yipee.io.lastModelUpdate: '2018-10-18T11:07:27.595Z'
spec:
  selector:
    matchLabels:
      name: nginx
      component: nginx
      app: nginx
  rollbackTo:
    revision: 0
  template:
    spec:
      imagePullSecrets:
      - name: imagsecret
      containers:
      - volumeMounts:
        - mountPath: /data
          name: nginx-vol
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        imagePullPolicy: IfNotPresent
        image: docker.io/nginx:latest
      volumes:
      - name: nginx-vol
        hostPath:
          path: /data
          type: Directory
      serviceAccountName: test
    metadata:
      labels:
        name: nginx
        component: nginx
        app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
  replicas: 1
  revisionHistoryLimit: 3

I have tried to address the same issue using a Java client based on the most popular Kubernetes Java Client:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>4.1.3</version>
</dependency>
It allows you to set even the most exotic options... but the API is not very fluent (or I have not yet found the way to use it fluently), so the code becomes quite verbose. Building a UI on top of it is a challenge because of the extreme complexity of the model.
yipee.io sounds promising, but I didn't understand how to get a trial version.

Related

Traefik IngressRoute CRD not Registering Any Routes

I'm configuring Traefik Proxy to run on a GKE cluster to handle proxying to various microservices. I'm doing everything through their CRDs and deployed Traefik to the cluster using a custom deployment. The Traefik dashboard is accessible and working fine; however, when I try to set up an IngressRoute for the service itself, the service is not accessible and it does not appear in the dashboard. I've tried setting it up with a regular k8s Ingress object, and when doing that it did appear in the dashboard, but I ran into some issues with middleware, and for ease of use I'd prefer to go the CRD route. Also, the deployment and service for the microservice seem to be deploying fine; they both appear in the GKE dashboard and are running normally. No Ingress is created, however, and I'm unsure whether a custom CRD IngressRoute is supposed to create one or not.
Some information about the configuration:
I'm using Kustomize to handle overlays and general data.
I have a setting through Kustomize that applies the namespace users to everything.
Below are the config files I'm using; the CRDs and RBAC are defined by calling:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-service
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: users-service
    spec:
      containers:
        - name: users-service
          image: ${IMAGE}
          imagePullPolicy: IfNotPresent
          ports:
            - name: web
              containerPort: ${HTTP_PORT}
          readinessProbe:
            httpGet:
              path: /ready
              port: web
            initialDelaySeconds: 10
            periodSeconds: 2
          envFrom:
            - secretRef:
                name: users-service-env-secrets
service.yml
apiVersion: v1
kind: Service
metadata:
  name: users-service
spec:
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: web
  selector:
    app: users-service
ingress.yml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: users-stripprefix
spec:
  stripPrefix:
    prefixes:
      - /userssrv
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: users-service-ingress
spec:
  entryPoints:
    - service-port
  routes:
    - kind: Rule
      match: PathPrefix(`/userssrv`)
      services:
        - name: users-service
          namespace: users
          port: service-port
      middlewares:
        - name: users-stripprefix
If any more information is needed, just lmk. Thanks!
A default Traefik installation on Kubernetes creates two entrypoints:
web for http access, and
websecure for https access
But you have in your IngressRoute configuration:
entryPoints:
  - service-port
Unless you have explicitly configured Traefik with an entrypoint named "service-port", this is probably your problem. You want to remove the entryPoints section, or specify something like:
entryPoints:
  - web
If you omit the entryPoints configuration, the service will be available on all entrypoints. If you include explicit entrypoints, then the service will only be available on those specific entrypoints (e.g. with the above configuration, the service would be available via http:// and not via https://).
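For illustration, here is the IngressRoute from the question adjusted to use the default web entrypoint (a sketch; the port is also changed to the Service's numeric port 80, which is an assumption on my part, since the Service above does not define a port named service-port):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: users-service-ingress
spec:
  entryPoints:
    - web                          # default Traefik HTTP entrypoint
  routes:
    - kind: Rule
      match: PathPrefix(`/userssrv`)
      services:
        - name: users-service
          namespace: users
          port: 80                 # assumption: reference the Service port by number
      middlewares:
        - name: users-stripprefix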
Not directly related to your problem, but if you're using Kustomize, consider:
Drop the app: users-service label from the deployment, the service selector, etc., and instead set it in your kustomization.yaml using the commonLabels directive.
Drop the explicit namespace from the service reference in your IngressRoute and instead use Kustomize's namespace transformer to set it (this lets you control the namespace exclusively from your kustomization.yaml).
I've put together a deployable example with all the changes mentioned in this answer here.
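As a rough sketch of that approach (the resource file names are assumptions based on the file names shown in the question):
# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: users            # namespace transformer: applied to all namespaced resources

commonLabels:
  app: users-service        # added to the labels and selectors of every resource

resources:
  - deployment.yml
  - service.yml
  - ingress.yml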

StatefulSet: Longer rolling update lead Version mismatching

The application is deployed on K8s using a StatefulSet because it is stateful in nature. There are around 250+ pods running, and HPA has been implemented on it too, which can scale up to 400 pods.
When a new deployment occurs, it takes a long time (~10-15 minutes) to update all pods in a rolling-update fashion.
Problem: end users get responses from two versions of the pods until all pods have been replaced with the new revision.
I have been searching for an architecture that reduces the overall deployment time. The best solution I have found so far is a BLUE/GREEN strategy, but it has a bunch of impact on integrated services like monitoring, logging, telemetry etc. because of the two naming conventions.
Ideally I am looking for something like maxSurge for Deployments, where the new pods are created first and traffic is then shifted to them, but a StatefulSet does not support maxSurge with the RollingUpdate strategy; the controller deletes and recreates each pod in the StatefulSet based on ordinal index, from largest to smallest.
The solution is to do a partitioned rolling update along with a canary deployment.
Let's suppose we have the StatefulSet workload defined by the following YAML file:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    version: "1.20"
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
    version: "1.20"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # Label selector that determines which Pods belong to the StatefulSet
                 # Must match spec: template: metadata: labels
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx # Pod template's label selector
        version: "1.20"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
You could patch the StatefulSet to create a partition, and then change the image and the version label for the remaining pods (in this case, since there are only 3 pods, the last one will be the one whose image changes):
$ kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"nginx:1.21"}]'
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/metadata/labels/version", "value":"1.21"}]'
At this point you have a pod with the new image and version label ready to use, but since the version label is different, traffic is still going to the other two pods. If you change the version in the YAML file and apply the new configuration, the rollout will be transparent, since there is already a pod ready to receive the traffic:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    version: "1.21"
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
    version: "1.21"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # Label selector that determines which Pods belong to the StatefulSet
                 # Must match spec: template: metadata: labels
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx # Pod template's label selector
        version: "1.21"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
$ kubectl apply -f file-name.yaml
Once traffic has been migrated to the pod containing the new image and version label, you should patch the StatefulSet again and remove the partition with the command kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
Note: you will need to be very careful with the size of the partition, since the remaining pods will handle all of the traffic for some time.
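A couple of standard kubectl commands (nothing specific to this setup) are handy for watching the partitioned rollout and checking which revision each pod runs:
$ kubectl rollout status statefulset/web            # waits until the updated pods are Ready
$ kubectl get pods -l app=nginx -L version          # shows the version label of each pod
$ kubectl get statefulset web -o jsonpath='{.status.currentRevision} {.status.updateRevision}{"\n"}'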

Kubernetes HA data across several workers [closed]

I have set up a Kubernetes cluster with 1 master and 3 worker nodes, and a load balancer. But at the moment my pipeline is stuck, as I'm struggling to find a solution: how can I set up a WordPress website whose data is replicated on all nodes? Everything is clear to me except one thing: how do I get all 3 workers (VPS servers in different countries) to hold the same data, so that pods can work and scale, and if one worker dies, the second and third can continue providing all services? Is PVE the solution, or something else? Please point me in the direction to start searching.
Thanks.
You can create a PersistentVolumeClaim in ReadWriteMany mode, which provisions a PersistentVolume that holds your WordPress site data, and then create a Deployment with 3 replicas that mounts that volume.
Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  storageClassName: fast # update this to whatever persistent storage class is available on your cluster. See https://kubernetes.io/docs/concepts/storage/storage-classes/
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/var/www/html"
              name: wordpress-data
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress-data # notice this is referencing the PersistentVolumeClaim we declared above
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort # or LoadBalancer
  selector:
    app: wordpress
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
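Assuming the three manifests above are saved to a single file (wordpress.yaml is just an example name), you could apply and verify them like this; note that the chosen storage class has to actually support ReadWriteMany:
$ kubectl apply -f wordpress.yaml
$ kubectl get pvc wordpress-data                    # should become Bound once the volume is provisioned
$ kubectl get pods -l app=wordpress -o wide         # the NODE column should show the replicas spread across workers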

How can "kubectl apply -f <file.yaml> --force=true" take effect inside a deployed container's exec console?

I am trying to redeploy the exact same existing image, but after changing a secret in the Azure Vault. Since it is the same image, kubectl apply doesn't redeploy it. I tried to force the deploy by adding the --force=true option. Now the deploy takes place and the new secret value is visible in the ConfigMap on the dashboard, but not inside the API container when I check the environment from a kubectl exec console prompt.
Below is one of the 3 deployment manifests (YAML files) for the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tube-api-deployment
  namespace: tube
spec:
  selector:
    matchLabels:
      app: tube-api-app
  replicas: 3
  template:
    metadata:
      labels:
        app: tube-api-app
    spec:
      containers:
        - name: tube-api
          image: ReplaceImageName
          ports:
            - name: tube-api
              containerPort: 80
          envFrom:
            - configMapRef:
                name: tube-config-map
      imagePullSecrets:
        - name: ReplaceRegistrySecret
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: tube
spec:
  ports:
    - name: api-k8s-port
      protocol: TCP
      port: 8082
      targetPort: 3000
  selector:
    app: tube-api-app
I think it is not happening because, when we update a ConfigMap, the files in all the volumes referencing it are updated, and it is then up to the process in the pod's container to detect that they have changed and reload them. Currently there is no built-in way to signal an application when a new version of a ConfigMap is deployed; it is up to the application (or some helper script) to watch for the config files to change and reload them.
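Note also that your manifest injects the ConfigMap via envFrom, and environment variables are only read when a container starts, so they will never refresh inside running pods. A common workaround (standard kubectl, shown as a sketch) is to force the pods to be recreated after updating the ConfigMap:
$ kubectl rollout restart deployment/tube-api-deployment -n tube   # recreate the pods so they pick up the new values
$ kubectl rollout status deployment/tube-api-deployment -n tube    # wait for the new pods to become Ready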

Kubernetes multi-container pod [closed]

Hello, I am trying to have a Pod with 2 containers: a C++ app and a MySQL database. I used to have MySQL deployed as its own service, but I got latency issues, so I want to try a multi-container pod. However, I've been struggling to connect my app to MySQL through localhost. It says:
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'
Here is my kubernetes.yaml. Please, I need help :(
# Database setup
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storage-camera
  labels:
    group: camera
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: camera-pv
  labels:
    group: camera
spec:
  storageClassName: db-camera
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: storage-camera
---
# Service setup
apiVersion: v1
kind: Service
metadata:
  name: camera-service
  labels:
    group: camera
spec:
  ports:
    - port: 50052
      targetPort: 50052
  selector:
    group: camera
    tier: service
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: camera-service
  labels:
    group: camera
    tier: service
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 60
  template:
    metadata:
      labels:
        group: camera
        tier: service
    spec:
      containers:
        - image: asia.gcr.io/test/db-camera:latest
          name: db-camera
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: camera-persistent-storage
              mountPath: /var/lib/mysql
        - name: camera-service
          image: asia.gcr.io/test/camera-service:latest
          env:
            - name: DB_HOST
              value: "localhost"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "camera"
            - name: DB_ROOT_PASS
              value: "password"
          ports:
            - name: http-cam
              containerPort: 50052
      volumes:
        - name: camera-persistent-storage
          persistentVolumeClaim:
            claimName: camera-pv
      restartPolicy: Always
Your MySQL client is configured to use a socket rather than the network stack; cf. the MySQL documentation:
On Unix, MySQL programs treat the host name localhost specially, in a way that is likely different from what you expect compared to other network-based programs. For connections to localhost, MySQL programs attempt to connect to the local server by using a Unix socket file. This occurs even if a --port or -P option is given to specify a port number. To ensure that the client makes a TCP/IP connection to the local server, use --host or -h to specify a host name value of 127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by using the --protocol=TCP option.
If you still want camera-service to talk over the file system socket, you need to mount the file system containing the socket into the camera-service container as well; currently you only mount it for db-camera.
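Following the TCP suggestion above, a minimal change to the camera-service container would be to point the client at 127.0.0.1 instead of localhost (a sketch; it assumes your application passes DB_HOST straight to the MySQL client as the host value):
        - name: camera-service
          image: asia.gcr.io/test/camera-service:latest
          env:
            - name: DB_HOST
              value: "127.0.0.1"   # was "localhost"; forces a TCP/IP connection instead of the Unix socket
            - name: DB_PORT
              value: "3306"
Alternatively, if you prefer the socket, mounting a shared emptyDir volume at /var/run/mysqld in both containers would let camera-service see the socket file that db-camera creates.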