I am trying to follow the instructions in this tutorial: https://docs.docker.com/docker-for-windows/kubernetes/#use-docker-commands. I have followed these steps:
1) Enable Kubernetes in Docker Desktop.
2) Create a simple asp.net core 3.1 app in Visual Studio 2019 and add container orchestration support (Docker Compose).
3) Run the app in Visual Studio 2019 to confirm it runs successfully in Docker.
4) Run the following command at a command prompt: docker-compose build kubernetesexample
5) Run the following command at a command prompt: docker stack deploy --compose-file docker-compose.yml mystack
6) Run the following command at a command prompt: kubectl get services. Here is the result (screenshot linked below).
How do I browse to my app? I have tried to browse to: http://localhost:5100 and http://localhost:32442.
Here is my docker-compose.yml:
services:
  kubernetesexample:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "45678:80"
Screenshot of the kubectl get services output: https://i.stack.imgur.com/FAkAZ.png
Here is the result of running: kubectl get svc kubernetesexample-published -o yaml:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-03-14T17:51:41Z"
  labels:
    com.docker.service.id: mystack-kubernetesexample
    com.docker.service.name: kubernetesexample
    com.docker.stack.namespace: mystack
  name: kubernetesexample-published
  namespace: default
  ownerReferences:
  - apiVersion: compose.docker.com/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: Stack
    name: mystack
    uid: 75f037b1-661c-11ea-8b7c-025000000001
  resourceVersion: "1234"
  selfLink: /api/v1/namespaces/default/services/kubernetesexample-published
  uid: a8e6b35a-35d1-4432-82f7-108f30d068ca
spec:
  clusterIP: 10.108.180.197
  externalTrafficPolicy: Cluster
  ports:
  - name: 5100-tcp
    nodePort: 30484
    port: 5100
    protocol: TCP
    targetPort: 5100
  selector:
    com.docker.service.id: mystack-kubernetesexample
    com.docker.service.name: kubernetesexample
    com.docker.stack.namespace: mystack
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: localhost
Please note the port has now changed; the nodePort above is now 30484.
Update
Service - An abstract way to expose an application running on a set of Pods as a network service.
Rather, use the Kubernetes documentation; it has interactive, in-browser examples. I see you tried to use LoadBalancer; this must be supported by a cloud provider or a properly set-up environment. All the ways of publishing Services are summarized here. Try using NodePort; a simple configuration would be, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: np-kubernetesexample
  labels:
    app: kubernetesexample
spec:
  type: NodePort
  ports:
  - port: 5100
    protocol: TCP
    targetPort: 5100
  selector:
    app: kubernetesexample
...from what I can gather from the provided screenshots and description, please check the port and labels. If successful, the application should be available on localhost:3xxxx, i.e. the second port shown under PORTS when you run kubectl get services (xxxx:3xxxx/TCP).
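A quick way to read the allocated node port (a small sketch, assuming the service name np-kubernetesexample from the manifest above):
kubectl get svc np-kubernetesexample
kubectl get svc np-kubernetesexample -o jsonpath='{.spec.ports[0].nodePort}'
Then browse to http://localhost:<that node port>, since on Docker Desktop the node runs locally.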
It seems to work if I change my docker-compose to this:
version: '3.4'

services:
  kubernetesexample:
    image: kubernetesexample
    ports:
      - "80:80"
    build:
      context: .
      dockerfile: Dockerfile
Then browse on port 80, i.e. http://localhost.
It does not seem to work on any other port. The video here helped me: https://www.docker.com/blog/docker-windows-desktop-now-kubernetes/
Related
I am new to Kubernetes. This is what I have done so far:
vapor new hello -n
open Package.swift
ls
cd hello
open Package.swift
swift run
docker compose build
docker image ls
docker compose up app
minikube kubectl -- apply -f docker-compose.yml
minikube kubectl -- apply -f docker-compose.yml --validate=false
based on this tutorial: https://docs.vapor.codes/deploy/docker/
and video: https://www.youtube.com/watch?v=qFhzu7LolUU
but I got the following error for the last two lines:
kukodajanos#Kukodas-MacBook-Pro hello % minikube kubectl -- apply -f docker-compose.yml
error: error validating "docker-compose.yml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
Someone said I need to set up a deployment file?! https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment
My second goal is to have HashiCorp Vault installed in the cluster to be able to return short-lived secrets, i.e. a secret for connecting to a database that is used by the cluster. Could you give a step-by-step tutorial on how I can do that?
# docker-compose.yml
x-shared_environment: &shared_environment
  LOG_LEVEL: ${LOG_LEVEL:-debug}

services:
  app:
    image: hello:latest
    build:
      context: .
    environment:
      <<: *shared_environment
    ports:
      - '8080:8080'
    # user: '0' # uncomment to run as root for testing purposes even though Dockerfile defines 'vapor' user.
    command: ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
So, in simple words: when you try to apply a file in Kubernetes, you need to follow a basic template that lets Kubernetes understand what kind of resource you are trying to create. One part of this is apiVersion, so please try to follow the deployment below. I was not able to find the Docker image for the application here, so you will just need to add your Docker image and the port number the application runs on.
If you have the Dockerfile, you can build and push the image to a container registry and then use the image tag to pull the same image.
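For example, a rough sketch of the build-and-push step (the registry name myregistry and the tag are placeholders, not taken from the question):
docker build -t myregistry/vaporapp:latest .
docker push myregistry/vaporapp:latest
Then set image: myregistry/vaporapp:latest in the Deployment below instead of the sample image.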
Reference: How to write a Kubernetes manifest file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaporapp
  labels:
    app: vaporapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vaporapp
  template:
    metadata:
      labels:
        app: vaporapp
    spec:
      containers:
      - name: vaporapp
        image: signalsciences/example-helloworld:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: vapor-service
  labels:
    app: vaporapp
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 8000
  selector:
    app: vaporapp
  type: LoadBalancer
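To try it out (a sketch, assuming the manifest above is saved as vaporapp.yml):
kubectl apply -f vaporapp.yml
kubectl get pods -l app=vaporapp
kubectl get svc vapor-service
On a local cluster without a cloud load balancer the EXTERNAL-IP may stay <pending>; in that case minikube service vapor-service or kubectl port-forward svc/vapor-service 8000:8000 gives you access.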
I'm trying to access my Golang microservice that is running in the Kubernetes cluster and has the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-application-service
  namespace: email-namespace
spec:
  selector:
    matchLabels:
      run: internal-service
  template:
    metadata:
      labels:
        run: internal-service
    spec:
      containers:
      - name: email-service-application
        image: some_image
        ports:
        - containerPort: 8000
          hostPort: 8000
          protocol: TCP
        envFrom:
        - secretRef:
            name: project-secrets
        imagePullPolicy: IfNotPresent
So, to access this Deployment from outside the cluster I'm using a Service as well, and I've set up an external IP for test purposes, which is supposed to forward HTTP requests to port 8000, where my application is actually running.
apiVersion: v1
kind: Service
metadata:
  name: email-internal-service
  namespace: email-namespace
spec:
  type: ClusterIP
  externalIPs:
  - 192.168.0.10
  selector:
    run: internal-service
  ports:
  - name: http
    port: 8000
    targetPort: 8000
    protocol: TCP
So the problem is that when I try to send a GET request from outside the cluster by executing curl -f http://192.168.0.10:8000/, it just hangs until the timeout.
I've checked the state of the pods, the logs of the application, the matching of the selector/label names in the Service and Deployment manifests, and the namespaces, but all of this is fine and working properly.
(There is also a secrets config, but it is deployed and working fine.)
Thanks...
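One thing worth checking (a hedged suggestion, not from the original post) is whether the Service has any endpoints at all, i.e. whether its selector actually matches the running pods:
kubectl -n email-namespace get endpoints email-internal-service
kubectl -n email-namespace get pods -o wide --selector=run=internal-service
If the ENDPOINTS column is empty, the selector/labels are the problem rather than the externalIPs entry.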
Referring to jordanm's solution: you want to put the Service back to a plain ClusterIP and then use port-forward with kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000. You will then be able to access the service via http://localhost:8000. You may also be interested in github.com/txn2/kubefwd.
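Spelled out as commands (a sketch; service.yml is a placeholder for wherever the Service manifest lives):
# 1. remove the externalIPs entry, keep type: ClusterIP, and re-apply
kubectl -n email-namespace apply -f service.yml
# 2. forward a local port to the Service
kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000
# 3. in another terminal
curl -f http://localhost:8000/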
I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs; one of them is a master node and the other is a worker node. Both nodes are running Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed a NodePort service for it on port 30000, but I am not really sure how to access my NodePort service from outside my cluster. Could you please help me with this issue?
The following are the IP addresses of my nodes in the k8s cluster - master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs throw a message which says the connection was refused.
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml which I used for deploying the spring boot app and its corresponding service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
      - name: dealer-engine
        image: moviepopcorn/dealer_engine:0.0.1
        ports:
        - containerPort: 9090
        env:
        - name: MONGO_URL
          value: mongodb://mongo-service:27017/mazda
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
  - port: 9091
    targetPort: 9090
    nodePort: 30000
  externalIPs:
  - 10.0.0.12
I am a beginner with k8s, so please help me understand how I can access my NodePort service from outside my k8s cluster.
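A few commands that may help narrow this down (hedged suggestions, not from the original post):
kubectl get pods -o wide             # is the dealer-engine pod Running, and on which node?
kubectl get svc dealer-engine        # confirm nodePort 30000 is allocated
kubectl describe svc dealer-engine   # the Endpoints field should not be empty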
I created a new simple Spring Boot application which returns "Hello world!!!" back to the user when the endpoint "/helloWorld" is invoked. I deployed this Spring Boot app into my k8s cluster using the below yaml configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: moviepopcorn/hello_world:0.0.1
        ports:
        - containerPort: 9091
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 9091
    targetPort: 9091
    nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint using the following URL: <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
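To find <K8S_MASTER_NODE_IP>, the node's internal IP can be read with (a small sketch):
kubectl get nodes -o wide
curl http://<INTERNAL-IP of the master node>:30001/helloWorld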
Thank you all for your answers and inputs. Very much appreciated.
Have you installed any CNI plugin, like Flannel?
If yes, check your CIDR settings here:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI is a must. Flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugin on every server.
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster with a pod CIDR setting that matches the Flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
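A quick check that the network add-on came up (a sketch):
kubectl get pods --all-namespaces -o wide   # the flannel and coredns pods should reach Running
kubectl get nodes                           # the nodes should report Ready once the CNI is installed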
New to Kubernetes.
To build our testing environment, I'm trying to set up a PostgreSQL instance in Kubernetes, that's accessible to other pods in the testing cluster.
The pod and service are both syntactically valid and running. Both show in the output from kubectl get [svc/pods]. But when another pod tries to access the database, it times out.
Here's the specification of the pod:
# this defines the postgres server
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  hostname: postgres
  restartPolicy: OnFailure
  containers:
  - name: postgres
    image: postgres:9.6.6
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 5432
      protocol: TCP
And here is the definition of the service:
# this defines a "service" that makes the postgres server publicly visible
apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
selector:
app: postgres
type: ClusterIP
ports:
- port: 5432
protocol: TCP
I'm certain that something is wrong with at least one of those, but I'm not sufficiently familiar with Kubernetes to know which.
If it's relevant, we're running on Google Kubernetes Engine.
Help appreciated!
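One way to test the in-cluster DNS name and port directly is to run a throwaway client pod (a hedged sketch; the pod name pg-test and the postgres user are assumptions):
kubectl run pg-test --rm -it --restart=Never --image=postgres:9.6.6 -- psql -h postgres -p 5432 -U postgres
If that also hangs, the Service/selector is the likely culprit; kubectl get endpoints postgres should list the pod's IP.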
I'm trying to run a simple load balancing server connecting to a Deployment pod.
I've installed Docker for Mac edge version.
The problem is that when I try to make a GET request to the exposed load balancer url http://localhost:8081/api/v1/posts/health, the error appearing is:
org.apache.http.NoHttpResponseException: localhost:8081 failed to respond
When doing:
k get services
I get:
Clearly, the service is running, but localhost:8081 fails to respond. I have no idea why; I keep struggling with this.
My service resource:
---
apiVersion: v1
kind: Service
metadata:
  name: posts-api-svc
  # namespace: nginx-ingress
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: posts-api
    rel: beta
    env: dev
  ports:
  - protocol: TCP
    port: 8081
My deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-api-deployment
  # namespace: nginx-ingress
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts-api
      env: dev
      rel: beta
  template:
    metadata:
      labels:
        app: posts-api
        env: dev
        rel: beta
    spec:
      containers:
      - name: posts-api
        image: kimgysen/posts-api:latest
        ports:
        - containerPort: 8083
        livenessProbe:
          httpGet:
            path: /api/v1/posts/health
            port: 8083
          initialDelaySeconds: 120
          timeoutSeconds: 1
Should be a basic setup!
My deployment pod does not show any restarts; everything looks good.
Any advice welcome!
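A couple of things worth checking here (hedged suggestions): whether the Service actually has endpoints, and which targetPort it resolved to (when omitted, targetPort defaults to the Service port):
kubectl get endpoints posts-api-svc
kubectl describe svc posts-api-svc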
Edit
When using port 31082, I get the error:
org.apache.http.conn.HttpHostConnectException: Connect to
localhost:31082 [localhost/127.0.0.1] failed: Connection refused
(Connection refused)
There is no specific reason why I used port 8083; it is just that I tried NodePort first (with multiple services) and am now trying LoadBalancer.
The next step will be Ingress, but it didn't really work out for me the first time, so I am trying to go step by step.
I used port 8081 instead of port 80 because I read somewhere that on macOS port 80 is only to be used by the root user.
The Service port had to correspond to the Deployment's containerPort. I can now access the API on localhost:8083.
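For reference, a minimal verification once the Service's port/targetPort match the containerPort 8083 (a sketch):
kubectl get svc posts-api-svc
curl http://localhost:8083/api/v1/posts/health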