How to run a hello world Vapor app in minikube? How to set up HashiCorp Vault in Kubernetes? - kubernetes

I am new to Kubernetes. This is what I have done so far:
vapor new hello -n
open Package.swift
ls
cd hello
open Package.swift
swift run
docker compose build
docker image ls
docker compose up app
minikube kubectl -- apply -f docker-compose.yml
minikube kubectl -- apply -f docker-compose.yml --validate=false
based on this tutorial: https://docs.vapor.codes/deploy/docker/
and video: https://www.youtube.com/watch?v=qFhzu7LolUU
but I got the following error for the last two commands:
kukodajanos#Kukodas-MacBook-Pro hello % minikube kubectl -- apply -f docker-compose.yml
error: error validating "docker-compose.yml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
Someone said I need to set up a deployment file?! https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment
My second goal is to have HashiCorp Vault installed in the cluster so it can return short-lived secrets, e.g. a secret for connecting to a database that is used by the cluster. Would you give a step-by-step tutorial on how I can do that?
# docker-compose.yml
x-shared_environment: &shared_environment
  LOG_LEVEL: ${LOG_LEVEL:-debug}
services:
  app:
    image: hello:latest
    build:
      context: .
    environment:
      <<: *shared_environment
    ports:
      - '8080:8080'
    # user: '0' # uncomment to run as root for testing purposes even though Dockerfile defines 'vapor' user.
    command: ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]

In simple words: when you apply a file to Kubernetes, it needs to follow a basic template so that Kubernetes understands what kind of resource you are trying to create, and apiVersion and kind are two of those required fields. Please try the deployment below. I was not able to find the Docker image for your application, so you will need to fill in your own image and the port number where the application runs.
If you have the Dockerfile, you can build and push the image to a container registry and then use that image tag to pull the same image.
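A minimal sketch of that step, assuming the image is tagged hello:latest as in the compose file ("<your-registry>" is a placeholder; with minikube you can also skip the registry and load the local image directly):
# Build the image from the Vapor project's Dockerfile
docker build -t hello:latest .
# Option A: push to a registry the cluster can pull from (placeholder registry name)
docker tag hello:latest <your-registry>/hello:latest
docker push <your-registry>/hello:latest
# Option B: load the locally built image straight into minikube's image cache
minikube image load hello:latest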
Reference: How to write a Kubernetes manifest file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaporapp
  labels:
    app: vaporapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vaporapp
  template:
    metadata:
      labels:
        app: vaporapp
    spec:
      containers:
      - name: vaporapp
        image: signalsciences/example-helloworld:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: vapor-service
  labels:
    app: vaporapp
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 8000
  selector:
    app: vaporapp
  type: LoadBalancer
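For the second goal (HashiCorp Vault issuing short-lived secrets), the usual starting point is the official Helm chart. A rough sketch, assuming Helm is installed and using dev mode for a first experiment (dev mode auto-unseals and is not for production; the release and namespace names are arbitrary choices):
# Add the HashiCorp Helm repo and install Vault into the cluster
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --namespace vault --create-namespace --set "server.dev.enabled=true"
# Enable the Kubernetes auth method so pods can authenticate with their
# service account tokens and be issued short-lived secrets/leases
kubectl -n vault exec -it vault-0 -- vault auth enable kubernetes
After that you would configure a Vault role and a secrets engine (for example the database secrets engine) by following HashiCorp's Vault-on-Kubernetes guides.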

Related

how to use "kubectl apply -f <file.yaml> --force=true" making an impact over a deployed container EXEC console?

I am trying to redeploy the exact same existing image, but after changing a secret in the Azure Vault. Since it is the same image, kubectl apply doesn't redeploy it. I tried to force the deploy by adding the --force=true option. Now the deploy does take place and the new secret value is visible in the dashboard ConfigMap, but not in the environment of the API container when I open a kubectl exec console prompt.
Below is one of the 3 deployment manifests (YAML files) for the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tube-api-deployment
  namespace: tube
spec:
  selector:
    matchLabels:
      app: tube-api-app
  replicas: 3
  template:
    metadata:
      labels:
        app: tube-api-app
    spec:
      containers:
      - name: tube-api
        image: ReplaceImageName
        ports:
        - name: tube-api
          containerPort: 80
        envFrom:
        - configMapRef:
            name: tube-config-map
      imagePullSecrets:
      - name: ReplaceRegistrySecret
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: tube
spec:
  ports:
  - name: api-k8s-port
    protocol: TCP
    port: 8082
    targetPort: 3000
  selector:
    app: tube-api-app
I think it is not happening because when we update a ConfigMap, the files in all the volumes referencing it are updated. It is then up to the pod's container process to detect that they have changed and reload them. Currently there is no built-in way to signal an application when a new version of a ConfigMap is deployed; it is up to the application (or some helper script) to watch for the config files to change and reload them.
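Note that environment variables injected via envFrom are only read when the container starts, so they never update inside a running pod at all. One common workaround (a sketch, not part of the original answer) is to restart the Deployment's pods after changing the ConfigMap:
# Recreate the pods so they pick up the updated ConfigMap values
kubectl -n tube rollout restart deployment/tube-api-deployment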

Kubernetes pod definition file issue

I can run the Docker container for WSO2 EI with the following command:
docker run -it -p 8280:8280 -p 8243:8243 -p 9443:9443 -v wso2ei:/home/wso2carbon --name integrator wso2/wso2ei-integrator
I'm trying to create the pod definition file for the same. I don't know how to do port mapping and volume mapping in a pod definition file. The following is the file I have created so far. How can I complete the rest?
apiVersion: v1
kind: Pod
metadata:
  name: ei-pod
  labels:
    type: ei
    version: 6.6.0
spec:
  containers:
  - name: integrator
    image: wso2/wso2ei-integrator
Here is YAML content which might work:
apiVersion: v1
kind: Pod
metadata:
  name: ei-pod
  labels:
    type: ei
    version: 6.6.0
spec:
  containers:
  - name: integrator
    image: wso2/wso2ei-integrator
    ports:
    - containerPort: 8280
    volumeMounts:
    - mountPath: /wso2carbon
      name: wso2ei
  volumes:
  - name: wso2ei
    hostPath:
      # directory location on host
      path: /home/wso2carbon
While the above YAML content is just a basic example, it's not recommended for production usage for two reasons:
1. Use a Deployment, StatefulSet or DaemonSet instead of bare Pods.
2. A hostPath volume is not shareable between nodes, so use an external volume such as NFS or block storage and mount it into the pod; see the sketch below. Also look at dynamic volume provisioning using a StorageClass.
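A minimal sketch of the external-volume approach, assuming dynamic provisioning via a default StorageClass (the claim name wso2ei-pvc is a placeholder):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wso2ei-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
In the pod spec you would then replace the hostPath volume with:
volumes:
- name: wso2ei
  persistentVolumeClaim:
    claimName: wso2ei-pvc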

unable to recognize "deployment.yml": yaml: line 3: mapping values are not allowed in this context

I have set up a GitLab CI/CD pipeline which builds and deploys Docker images to Kubernetes. I'm using YAML-based deployment to Kubernetes. When I run the pipeline, the gitlab-runner always throws "unable to recognize "deployment.yml": yaml: line 3: mapping values are not allowed in this context", but when I run it directly using kubectl create -f deployment.yaml, it works correctly.
Here are the first few lines of the yml file. I have already validated the YAML formatting. The error is thrown at line 3.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: configserver
  name: configserver
spec:
  ports:
  - name: http
    port: 8888
  selector:
    app: configserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: configserver
  name: configserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configserver
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: configserver
    spec:
      containers:
      - image: config-server:latest
        name: configserver
        ports:
        - containerPort: 8888
        resources: {}
      restartPolicy: Always
Is this something to do with GitLab?
Thanks.
EDIT:
Here's the relevant part of my .gitlab-ci.yml
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn clean install -DskipTests
    - docker-compose -f docker-compose-istio.yml build
    - docker-compose -f docker-compose-istio.yml push

deploy:
  stage: deploy
  script:
    - kubectl apply -f itp-ms-deploy.yml
    - kubectl apply -f itp-ms-gateway.yml
    - kubectl apply -f itp-ms-autoscale.yml
  when: manual
  only:
    - master
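One way to narrow this down (a debugging sketch, not from the original post; the file name matches the deploy job above) is to print the file exactly as the runner sees it and validate it client-side before applying:
deploy:
  stage: deploy
  script:
    - head -n 5 itp-ms-deploy.yml                              # confirm the runner checked out the expected content
    - kubectl apply --dry-run=client -f itp-ms-deploy.yml      # validate without touching the cluster (kubectl 1.18+)
    - kubectl apply -f itp-ms-deploy.yml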

Error deployment aspnetcore webapi to minikube

When I try to execute the command kubectl apply -f mydeployment.yaml I receive the error error: SchemaError(io.k8s.api.core.v1.ContainerState): invalid object doesn't have additional properties. What can I do to deploy my ASP.NET Core Web API successfully to my local Kubernetes cluster?
I've already tried to upgrade minikube by running choco upgrade minikube. It says I already have the latest version: minikube v1.0.0 is the latest version available based on your source(s).
The deployment.yaml I've created looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        name: myfirstdockerapi
        image: myfirstdockerapi
        ports:
          - containerPort: 80
Clean up everything before you start:
rm -rf ~/.minikube
As per the documentation:
You must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3 masters. Using the latest version of kubectl helps avoid unforeseen issues.
Among the minikube resources on GitHub, see "Update default Kubernetes version to v1.14.0 #3967" to avoid interaction issues.
NOTE: we also recommend updating kubectl to a recent release (v1.13+).
For the latest version of minikube please follow the official documentation; the Kubernetes blog, Stack Overflow, and Chocolatey pages also have relevant resources.
In the attached deployment there was an indentation problem (corrected below), so please try again.
spec:
  containers:
  - name: myfirstdockerapi
    image: myfirstdockerapi
    ports:
    - containerPort: 80
The containers element expects a list, so you need to prefix each entry with a dash.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: myfirstdockerapi
        image: myfirstdockerapi
        ports:
        - containerPort: 80
If you are unsure you can always use kubectl to validate your file without creating it:
kubectl apply -f sample.yaml --validate --dry-run
Just in case, make sure that your kubectl version matches the version of your Kubernetes cluster.
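A quick way to check for such a skew (a sketch; the output format differs between releases):
# Client and server versions should be within one minor version of each other
kubectl version
# When using minikube's bundled kubectl, compare against that binary as well
minikube kubectl -- version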

Copy command crash pod startup on Kubernetes

I'm new to Kubernetes and I'm trying to understand the commands.
Basically what I'm trying to do is to create a Tomcat deployment, add an NFS volume, and after that copy the war file to the Tomcat webapps directory.
But it's failing:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp11-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp11
    spec:
      volumes:
      - name: www-persistent-storage
        persistentVolumeClaim:
          claimName: claim-webapp11
      containers:
      - name: webapp11-pod
        image: tomcat:8.0
        volumeMounts:
        - name: www-persistent-storage
          mountPath: /apps/build
        command: ["sh","-c","cp /apps/build/v1/sample.war /usr/local/tomcat/webapps"]
        ports:
        - containerPort: 8080
As far as I understand, when an image has a command, like catalina.sh run in the Tomcat image, it will conflict with the command from Kubernetes.
Is that correct?
Is there any way to run a command after the pod starts?
Thanks
No, what you want is probably something like this:
command: ["sh","-c","cp /apps/build/v1/sample.war /usr/local/tomcat/webapps && exec /whatever/catalina.sh"]
Or you could move the cp into an initContainer so you don't have to override the default command for your Tomcat container.
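A rough sketch of that initContainer variant, assuming the war can be staged into a shared emptyDir (the volume name webapps and the busybox image are illustrative choices, not from the original answer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp11-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp11
  template:
    metadata:
      labels:
        app: webapp11
    spec:
      volumes:
      - name: www-persistent-storage
        persistentVolumeClaim:
          claimName: claim-webapp11
      - name: webapps            # shared scratch volume (assumption)
        emptyDir: {}
      initContainers:
      - name: copy-war
        image: busybox:1.36      # any small image with cp works
        command: ["sh", "-c", "cp /apps/build/v1/sample.war /webapps/"]
        volumeMounts:
        - name: www-persistent-storage
          mountPath: /apps/build
        - name: webapps
          mountPath: /webapps
      containers:
      - name: webapp11-pod
        image: tomcat:8.0
        # no command override, so the image's default catalina.sh run is kept
        volumeMounts:
        - name: webapps
          mountPath: /usr/local/tomcat/webapps
        ports:
        - containerPort: 8080
This way Tomcat keeps its default entrypoint and simply finds sample.war already in its webapps directory when it starts.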