Run .NET container on Windows Server 2022 - Kubernetes

Both the host and the container report 10.0.20348.230 from cmd /c ver, but Kubernetes still complains that "The container operating system does not match the host operating system." Any ideas?
apiVersion: v1
kind: Pod
metadata:
  name: aspnet-test
spec:
  containers:
  - image: mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2022
    name: aspnet-test
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
  nodeSelector:
    kubernetes.io/os: windows
docker run -it -p 5000:80 mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2022 works. The image was downloaded by that command, so Kubernetes decided on the error before even fetching the image.
Retested with MicroK8s and Kubernetes 1.22.3.
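No answer is recorded here, but as a diagnostic sketch (not from the thread): the runtime compares the OS build Kubernetes reports for the node against the os.version declared in the image manifest, so it is worth comparing those two values rather than cmd /c ver inside a running container:
# what Kubernetes believes the node is running (OS-IMAGE / KERNEL-VERSION columns)
kubectl get nodes -o wide
# what the image declares; look for "os.version" under the windows platform entry
docker manifest inspect mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2022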

Related

Access NodePort Service Outside Kubeadm K8S Cluster

I have two Ubuntu VMs created with Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two VMs; one is the master node and the other is a worker node. Both nodes run Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed it with a NodePort service on port 30000, but I am not really sure how to access the NodePort service from outside my cluster. Could you please help me with this issue?
Following are the IP addresses of my nodes in the k8s cluster: master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs throw a message saying the connection was refused.
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml, which I used to deploy the Spring Boot app and its corresponding service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
      - name: dealer-engine
        image: moviepopcorn/dealer_engine:0.0.1
        ports:
        - containerPort: 9090
        env:
        - name: MONGO_URL
          value: mongodb://mongo-service:27017/mazda
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
  - port: 9091
    targetPort: 9090
    nodePort: 30000
  externalIPs:
  - 10.0.0.12
I am a beginner with k8s, so please help me understand how to access my NodePort service from outside the cluster.
Update: I created a new simple Spring Boot application that returns "Hello world!!!" to the user when the /helloWorld endpoint is invoked. I deployed this app into my k8s cluster using the YAML configuration below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: moviepopcorn/hello_world:0.0.1
        ports:
        - containerPort: 9091
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 9091
    targetPort: 9091
    nodePort: 30001
After a successful deployment, I am able to access the helloWorld endpoint at <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
Thank you all for your answers and inputs. Very much appreciated.
Have you installed a CNI plugin such as flannel?
If yes, check your CIDR settings here:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI plugin is a must; flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugins on every server:
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Re-initialize your cluster with a pod CIDR that matches the flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
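Once the manifest is applied, a quick sanity check (a sketch, not part of the original answer) is that the flannel and CoreDNS pods reach Running and the nodes report Ready:
# flannel and coredns pods should all reach Running
kubectl get pods -A
# nodes turn Ready once the CNI is up
kubectl get nodes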

How to configure an echo pod on a different port instead of port 80 in Kubernetes

apiVersion: v1
kind: Pod
metadata:
  name: echo-pod
  namespace: echo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 81
I applied the above YAML and tried to connect on port 81, but the pod still only accepts connections on port 80.
I checked the connection using telnet.
The IP of echo-pod is 192.168.211.1.
I am able to connect to the echo pod from a busybox pod (which already exists) on port 80, but not on port 81, as you can observe below:
root@ip-172-31-16-143:~# kubectl exec busybox -- telnet 192.168.211.1 80
Connected to 192.168.211.1
root@ip-172-31-16-143:~# kubectl exec busybox -- telnet 192.168.211.1 81
telnet: can't connect to remote host (192.168.211.1): Connection refused
command terminated with exit code 1
By defining containerPort: 81 in your pod template, you are only changing the port declared for the container; your Nginx server will still listen on port 80 (as it is configured to by default).
You need to change the Nginx listen configuration to match your new exposed port.
Unlike most Docker images, Nginx does not support such configuration through environment variables (see "Using environment variables in nginx configuration" on its Docker Hub page).
If you wish to adapt the Nginx default configuration, you need to create a new config file with listen 81;, then replace the original using COPY in a Dockerfile that builds a custom image FROM nginx.
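For illustration, a minimal custom image along those lines might look like this (the file name is arbitrary; the server block is a sketch of the stock default.conf with only the port changed):
# nginx-81.conf: stock default server block, listening on 81 instead of 80
server {
    listen       81;
    server_name  localhost;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
# Dockerfile
FROM nginx
COPY nginx-81.conf /etc/nginx/conf.d/default.conf
With that image, containerPort: 81 in the pod spec matches the port the server actually listens on.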
If you prefer a one-line workaround that still uses the original Nginx image, you can change the command/args to rewrite the listen directive on every start:
containers:
- name: nginx
  image: nginx
  command: ["/bin/sh","-c"]
  args: ["sed -i 's/listen .*/listen 81;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
  ports:
  - containerPort: 81
Use a Kubernetes Service (of type ClusterIP) for connecting from one pod to another.
Create a service as below:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: echo
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 81
    targetPort: 80
Add the label app: nginx to the nginx pod so that the service selects it as a backend:
apiVersion: v1
kind: Pod
metadata:
  name: echo-pod
  namespace: echo
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Then you can use the cluster IP of the service, or better yet its DNS name, with port 81 to connect to the nginx pod from another pod.
To check the cluster IP, run kubectl get svc my-service -n echo.
The DNS name will be my-service.echo.svc.cluster.local.
So from another pod you can connect to nginx via <cluster-ip>:81 or my-service.echo.svc.cluster.local:81.
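To verify, a sketch reusing the busybox pod from the question (the FQDN works across namespaces):
# the service maps port 81 to the container's port 80, so this should now connect
kubectl exec busybox -- telnet my-service.echo.svc.cluster.local 81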

Kubernetes Pod can't connect to local postgres server with Hasura image

I'm following this tutorial to connect a Hasura Kubernetes pod to my local Postgres server.
When I create the deployment, the pod's container fails to connect to Postgres (CrashLoopBackOff, and it keeps retrying), but doesn't give any reason why. Here are the logs:
{"type":"pg-client","timestamp":"2020-05-03T06:22:21.648+0000","level":"warn","detail":{"message":"postgres connection failed, retrying(0)."}}
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hasura
    hasuraService: custom
  name: hasura
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hasura
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hasura
    spec:
      containers:
      - image: hasura/graphql-engine:v1.2.0
        imagePullPolicy: IfNotPresent
        name: hasura
        env:
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgres://USER:@localhost:5432/my_db
        - name: HASURA_GRAPHQL_ENABLE_CONSOLE
          value: "true"
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
I'm using postgres://USER:@localhost:5432/my_db as the Postgres URL. Is "localhost" the correct address here?
I verified that the above Postgres URL works when I try it locally (no password):
> psql postgres://USER:@localhost:5432/my_db
psql (12.2)
Type "help" for help.
my_db=#
How else can I troubleshoot it? The logs aren't very helpful...
If I understand you correctly, the issue is that the Pod (from "inside" Minikube) cannot access Postgres installed on the host machine (the one that runs Minikube itself) via localhost.
If that is the case, please check this thread.
... Minikube VM can access your host machine's localhost on 192.168.99.1 (127.0.0.1 from Minikube would still be Minikube's own localhost).
Technically, for the Pod, localhost is the Pod itself. The host machine and Minikube are connected via a bridge. You can find the exact IP addresses and routes with ifconfig and route -n on your Minikube host.
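For example, assuming the standard VirtualBox setup described above (the 192.168.99.1 address and the Postgres settings below are illustrative, not from the thread), the deployment would point at the host's bridge address instead of localhost, and Postgres itself must accept remote connections:
# in the deployment (sketch): use the host's address as seen from Minikube
- name: HASURA_GRAPHQL_DATABASE_URL
  value: postgres://USER:@192.168.99.1:5432/my_db
# on the host (sketch): make Postgres listen beyond 127.0.0.1 in postgresql.conf
#   listen_addresses = '*'
# and allow the Minikube subnet in pg_hba.conf:
#   host  my_db  USER  192.168.99.0/24  trust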

Getting started with Kubernetes - deploy docker compose

I am trying to follow the instructions in this tutorial: https://docs.docker.com/docker-for-windows/kubernetes/#use-docker-commands. I have followed these steps:
1) Enable Kubernetes in Docker Desktop.
2) Create a simple ASP.NET Core 3.1 app in Visual Studio 2019 and add container orchestration support (Docker Compose).
3) Run the app in Visual Studio 2019 to confirm it runs successfully in Docker.
4) Run the following command at a command prompt: docker-compose build kubernetesexample
5) Run the following command at a command prompt: docker stack deploy --compose-file docker-compose.yml mystack
6) Run the following command at a command prompt: kubectl get services. Here is the result:
How do I browse to my app? I have tried to browse to: http://localhost:5100 and http://localhost:32442.
Here is my docker-compose.yml:
services:
  kubernetesexample:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "45678:80"
(Screenshot of the result: https://i.stack.imgur.com/FAkAZ.png)
Here is the result of running kubectl get svc kubernetesexample-published -o yaml:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-03-14T17:51:41Z"
  labels:
    com.docker.service.id: mystack-kubernetesexample
    com.docker.service.name: kubernetesexample
    com.docker.stack.namespace: mystack
  name: kubernetesexample-published
  namespace: default
  ownerReferences:
  - apiVersion: compose.docker.com/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: Stack
    name: mystack
    uid: 75f037b1-661c-11ea-8b7c-025000000001
  resourceVersion: "1234"
  selfLink: /api/v1/namespaces/default/services/kubernetesexample-published
  uid: a8e6b35a-35d1-4432-82f7-108f30d068ca
spec:
  clusterIP: 10.108.180.197
  externalTrafficPolicy: Cluster
  ports:
  - name: 5100-tcp
    nodePort: 30484
    port: 5100
    protocol: TCP
    targetPort: 5100
  selector:
    com.docker.service.id: mystack-kubernetesexample
    com.docker.service.name: kubernetesexample
    com.docker.stack.namespace: mystack
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: localhost
Please note the port has now changed (the published service above shows port 5100 with nodePort 30484).
Update
Service: "An abstract way to expose an application running on a set of Pods as a network service."
I would rather use the Kubernetes documentation; it has interactive, in-browser examples. I see you tried to use LoadBalancer; that must be supported by a cloud provider or a properly set up environment. All the ways of publishing services are summarized here. Try using NodePort; a simple configuration would be, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: np-kubernetesexample
  labels:
    app: kubernetesexample
spec:
  type: NodePort
  ports:
  - port: 5100
    protocol: TCP
    targetPort: 5100
  selector:
    app: kubernetesexample
From what I gather from the provided screenshots and description, please check the ports and labels. If successful, the application should be available at localhost:3xxxx, the second port shown under PORTS when you run kubectl get services (xxxx:3xxxx/TCP).
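Once applied, the assigned high port can be read back from the service (a sketch; the exact 3xxxx value is allocated by the cluster):
# the PORT(S) column shows 5100:<nodePort>/TCP; browse to localhost:<nodePort>
kubectl get svc np-kubernetesexample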
It seems to work if I change my docker-compose.yml to this:
version: '3.4'
services:
  kubernetesexample:
    image: kubernetesexample
    ports:
      - "80:80"
    build:
      context: .
      dockerfile: Dockerfile
Then I browse on port 80, i.e. http://localhost.
It does not seem to work on any other port. The video here helped me: https://www.docker.com/blog/docker-windows-desktop-now-kubernetes/
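For reference, a quick way to read back where the stack actually published the service (a sketch using the service name from earlier in the question):
# prints host:port to browse, e.g. localhost:5100 for the YAML shown above
kubectl get svc kubernetesexample-published -o jsonpath='{.status.loadBalancer.ingress[0].hostname}:{.spec.ports[0].port}'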

Kubernetes pod can't access other pods exposed by a service

New to Kubernetes.
To build our testing environment, I'm trying to set up a PostgreSQL instance in Kubernetes that's accessible to other pods in the testing cluster.
The pod and service are both syntactically valid and running; both show up in the output of kubectl get svc and kubectl get pods. But when another pod tries to access the database, it times out.
Here's the specification of the pod:
# this defines the postgres server
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  hostname: postgres
  restartPolicy: OnFailure
  containers:
  - name: postgres
    image: postgres:9.6.6
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 5432
      protocol: TCP
And here is the definition of the service:
# this defines a "service" that makes the postgres server publicly visible
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  type: ClusterIP
  ports:
  - port: 5432
    protocol: TCP
I'm certain that something is wrong with at least one of those, but I'm not sufficiently familiar with Kubernetes to know which.
If it's relevant, we're running on Google Kubernetes Engine.
Help appreciated!
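The thread ends without an answer, but as a first diagnostic sketch (not from the original post), one would check that the service actually resolves to the pod, then test a connection through it from a throwaway client:
# should list <pod-ip>:5432; an empty list means the selector matches nothing
kubectl get endpoints postgres
# hypothetical one-off client pod; the unconfigured postgres image defaults to trust auth
kubectl run pg-test --rm -it --restart=Never --image=postgres:9.6.6 -- psql -h postgres -U postgres -c 'select 1'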