What is the difference between dockerService and DockerServer in the kubelet source code - kubernetes

While reading the k8s source code, I found that both dockerService (located in pkg/kubelet/dockershim/docker_service.go) and DockerServer (located in pkg/kubelet/dockershim/remote/docker_server.go) seem to implement the interface of the CRI shim server.
But I don't understand the difference between the two. Why do we need to distinguish them?
The k8s version is tag v1.23.1.

DockerServer simply creates the dockershim gRPC server:
// DockerServer is the grpc server of dockershim.
type DockerServer struct {
    // endpoint is the endpoint to serve on.
    endpoint string
    // service is the docker service which implements runtime and image services.
    service DockerService
    // server is the grpc server.
    server *grpc.Server
}
...
// Start starts the dockershim grpc server.
func (s *DockerServer) Start() error {
    glog.V(2).Infof("Start dockershim grpc server")
    l, err := util.CreateListener(s.endpoint)
    if err != nil {
        return fmt.Errorf("failed to listen on %q: %v", s.endpoint, err)
    }
    // Create the grpc server and register runtime and image services.
    s.server = grpc.NewServer()
    runtimeapi.RegisterRuntimeServiceServer(s.server, s.service)
    runtimeapi.RegisterImageServiceServer(s.server, s.service)
    go func() {
        // Use interrupt handler to make sure the server to be stopped properly.
        h := interrupt.New(nil, s.Stop)
        err := h.Run(func() error { return s.server.Serve(l) })
        if err != nil {
            glog.Errorf("Failed to serve connections: %v", err)
        }
    }()
    return nil
}
DockerService is the interface behind the CRI remote service server:
// DockerService is the interface implement CRI remote service server.
type DockerService interface {
    runtimeapi.RuntimeServiceServer
    runtimeapi.ImageServiceServer
}
The unexported dockerService struct in docker_service.go is the concrete type that implements the DockerService interface, while DockerServer merely wraps any DockerService implementation in a gRPC server. In short: dockerService does the work, and DockerServer exposes it over gRPC.
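The split is a common Go pattern: an exported interface, an unexported concrete type implementing it, and a server type that depends only on the interface. Here is a self-contained sketch of the same shape (all names are illustrative stand-ins, not the real dockershim types):

```go
package main

import "fmt"

// Service plays the role of DockerService: the interface the gRPC
// server is registered against.
type Service interface {
	Version() string
}

// service plays the role of dockerService: the unexported concrete
// implementation that would talk to the Docker daemon.
type service struct{}

func (s *service) Version() string { return "v1" }

// Server plays the role of DockerServer: it holds only the interface,
// so it never depends on the concrete implementation.
type Server struct {
	svc Service
}

// Start stands in for DockerServer.Start, which registers svc with a
// real gRPC server.
func (s *Server) Start() string {
	return "serving " + s.svc.Version()
}

func main() {
	srv := &Server{svc: &service{}}
	fmt.Println(srv.Start())
}
```

In the real code this wiring happens once in the kubelet: a dockerService is constructed, then handed to a DockerServer, which registers it as both the RuntimeService and ImageService gRPC servers.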
By the way, are you sure you will still be using dockershim in the future? From the latest news (5 days ago):
Kubernetes is Moving on From Dockershim: Commitments and Next Steps:
Kubernetes is removing dockershim in the upcoming v1.24 release.
If you use Docker Engine as a container runtime for your Kubernetes cluster, get ready to migrate in 1.24
Full removal is targeted in Kubernetes 1.24, in April 2022.
We'll support Kubernetes version 1.23, which includes dockershim, for another year in the Kubernetes project.

Related

What Condition Causes the Pod Log Reader to Return EOF

I am using client-go to continuously pull log streams from Kubernetes pods. Most of the time everything works as expected, until the job has been running for a couple of hours.
The code is like below:
podLogOpts := corev1.PodLogOptions{
    Follow: true,
}
kubeJob, err := l.k8sclient.GetKubeJob(l.job.GetNamespace(), l.job.GetJobId())
...
podName := l.k8sclient.GetKubeJobPodNameByJobId(l.job.GetNamespace(), l.job.GetJobId())
req := l.k8sclient.GetKubeClient().CoreV1().Pods(l.job.GetNamespace()).GetLogs(podName, &podLogOpts)
podLogStream, err := req.Stream(context.TODO())
...
for {
    copied, err := podLogStream.Read(buf)
    if err == io.EOF {
        // Here is where the error happens:
        // usually after many hours, podLogStream returns EOF.
        // I checked the pod status: it is still running and keeps
        // printing data to stdout. Why would this happen???
        break
    }
    ...
}
The podLogStream returns EOF about 3-4 hours later. But I checked the pod status and found the pod is still running, and the service inside keeps printing data to stdout. So why would this happen? How can I fix it?
UPDATE
I found that about every 4 hours the pod stream read returns EOF, so I make the goroutine sleep and retry a second later, by recreating the podLogStream and reading logs from the new stream object. It works. But why does this happen?
When you contact the logs endpoint, the apiserver forwards your request to the kubelet hosting your pod. The kubelet then starts streaming the content of the log file to the apiserver, and from there to your client. Since it streams logs from the file and not from stdout directly, it can happen that the log file is rotated by the container log manager; as a consequence you receive EOF and need to reinitialize the stream.

Controller gets wrong namespace name for multiple operator instances in different namespaces

I developed a k8s Operator. After I deployed the first Operator in the first namespace, it worked well. Then I deployed the 2nd Operator in a second namespace, and I saw the 2nd controller receive requests whose namespace was still the first one, while the expected namespace should be the second.
Please see the following code: when I work with the second Operator in the second namespace, the request's namespace is still the first namespace.
func (r *AnexampleReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("Anexample", request.NamespacedName)
    instance := &v1alpha1.Anexample{}
    err := r.Get(context.TODO(), request.NamespacedName, instance)
    if err != nil {
        if errors.IsNotFound(err) {
            log.Info("Anexample resource not found. Ignoring since object must be deleted.")
            return reconcile.Result{}, nil
        }
        log.Error(err, "Failed to get Anexample.")
        return reconcile.Result{}, err
    }
I suspect it might be related to leader election, but I don't understand it.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
What is happening in the controller? How can I fix it?
We are seeing a similar issue: request.NamespacedName from controller-runtime returns the wrong namespace. It might be a bug in controller-runtime.
request.NamespacedName depends on the namespace of the custom resource you are deploying. Operators are deployed into a namespace but can still be configured to listen for custom resources in all namespaces.
This should not be related to leader election but to the way you set up your manager. You didn't specify a namespace in the ctrl.Options for the manager, so it will listen for CR changes in all namespaces. If you want your operator to listen to only a single namespace, pass that namespace to the manager.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
    Namespace:          "<namespace-of-operator-two>",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
See also here: https://developers.redhat.com/blog/2020/06/26/migrating-a-namespace-scoped-operator-to-a-cluster-scoped-operator#migration_guide__namespace_scoped_to_cluster_scoped

How to invoke the Pod proxy verb using the Kubernetes Go client?

The Kubernetes remote API allows HTTP access to arbitrary pod ports using the proxy verb, that is, using an API path of /api/v1/namespaces/{namespace}/pods/{name}/proxy.
The Python client offers corev1.connect_get_namespaced_pod_proxy_with_path() to invoke the above proxy verb.
Despite reading, browsing, and searching the Kubernetes client-go for some time, I'm still at a loss as to how to do with the Go client what I'm able to do with the Python client. My impression is that I may need to drop down to the REST client layer of client-go, since there seems to be no ready-made CoreV1 call available.
How do I correctly construct the GET call using the rest client and the path mentioned above?
As it turned out after an involved dive into the Kubernetes client sources, accessing the proxy verb is only possible by going down to the level of the RESTClient and building the GET request by hand. The following code shows this in the form of a fully working example:
package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    clcfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
    if err != nil {
        panic(err.Error())
    }
    restcfg, err := clientcmd.NewNonInteractiveClientConfig(
        *clcfg, "", &clientcmd.ConfigOverrides{}, nil).ClientConfig()
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(restcfg)
    if err != nil {
        panic(err.Error())
    }
    res := clientset.CoreV1().RESTClient().Get().
        Namespace("default").
        Resource("pods").
        Name("hello-world:8000").
        SubResource("proxy").
        // The server URL path, without the leading "/", goes here...
        Suffix("index.html").
        Do()
    rawbody, err := res.Raw()
    if err != nil {
        panic(err.Error())
    }
    fmt.Print(string(rawbody))
}
You can test this, for instance, on a local kind cluster (Kubernetes in Docker). The following commands spin up a kind cluster, prime the only node with the required hello-world webserver, and then tell Kubernetes to start the pod with said hello-world webserver.
kind create cluster
docker pull crccheck/hello-world
docker tag crccheck/hello-world crccheck/hello-world:current
kind load docker-image crccheck/hello-world:current
kubectl run hello-world --image=crccheck/hello-world:current --port=8000 --restart=Never --image-pull-policy=Never
Now run the example:
export KUBECONFIG=~/.kube/kind-config-kind; go run .
It then should show this ASCII art:
<xmp>
Hello World
## .
## ## ## ==
## ## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o _,/
\ \ _,'
`'--.._\..--''
</xmp>

Kubernetes CRD object fails to be created using the go-client interface

I created a Kubernetes CRD following the example at https://github.com/kubernetes/sample-controller.
My controller works fine, and I can listen for the create/update/delete events of my CRD — until I tried to create an object using the go-client interface.
This is my CRD.
type MyEndpoint struct {
    metav1.TypeMeta `json:",inline"`
    // Standard object's metadata.
    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
}
I can create the CRD definition and create objects using kubectl without any problems. But I get a failure when I use the following code to create the object.
myepDeploy := &crdv1.MyEndpoint{
    TypeMeta: metav1.TypeMeta{
        Kind:       "MyEndpoint",
        APIVersion: "mydom.k8s.io/v1",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: podName,
        Labels: map[string]string{
            "serviceName": serviceName,
            "nodeIP":      nodeName,
            "port":        "5000",
        },
    },
}

epClient := myclientset.MycontrollerV1().MyEndpoints("default")
epClient.Create(myepDeploy)
But I got following error:
object *v1.MyEndpoint does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message
I took a look at other standard types, and I don't see that they implement such an interface either. I searched on Google but had no luck.
Any ideas? Please help. BTW, I am running on minikube.
For most common types and for simple types, marshalling works out of the box. In the case of a more complex structure, you may need to implement the marshalling interface manually.
You can try commenting out parts of the MyEndpoint structure to find out what exactly causes the problem.
This error occurs when your client epClient tries to marshal the MyEndpoint object to protobuf, which happens because of your REST client config. Try setting the content type to "application/json".
If you are using the code below to generate the config, change the content type like this:
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
    glog.Fatalf("Error building kubeconfig: %s", err.Error())
}
cfg.ContentType = "application/json"
kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
    glog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}

AWS-ECS - Communication between containers - Unknown host error

I have two Docker containers.
TestWeb (Expose: 80)
TestAPI (Expose: 80)
The TestWeb container calls the TestAPI container. The host can communicate with the TestWeb container on port 8080 and with TestAPI on port 8081.
I can get TestWeb to call TestAPI on my dev box (Windows 10), but when I deploy the code to AWS (ECS) I get an "unknown host" exception. Both containers work just fine and I can call each of them individually. But when I call a method that internally makes a REST call using HttpClient to a method in Container2, it gives the error:
An error occurred while sending the request. ---> System.Net.Http.CurlException: Couldn't resolve host name.
Code:
using (var client = new HttpClient())
{
    try
    {
        string url = "http://testapi/api/Tenant/?i=" + id;
        var response = client.GetAsync(url).Result;
        if (response.IsSuccessStatusCode)
        {
            var responseContent = response.Content;
            string responseString = responseContent.ReadAsStringAsync().Result;
            return responseString;
        }
        return response.StatusCode.ToString();
    }
    catch (HttpRequestException httpRequestException)
    {
        return httpRequestException.Message;
    }
}
The following are the things I have tried:
The two containers (TestWeb, TestAPI) are in the same Task Definition in AWS ECS. When I inspect the containers, I get the IP address of each container. I can ping Container2 from Container1 by its IP address, but I can't ping it by container name; that gives me the "unknown host" error.
It appears ECS doesn't use genuine docker-compose under the hood; however, their implementation does support the Compose V2 "links" feature.
Here is a portion of a compose file I just ran on ECS that needed this same functionality and hit the same "could not resolve host" error you were getting. The "links" I added fixed my hostname resolution issue on Elastic Container Service!
version: '3'
services:
  appserver:
    links:
      - database:database
      - socks-proxy:socks-proxy
This allowed my appserver to communicate with the database and socks-proxy hostnames. The format is "SERVICE:ALIAS", and it is fine to keep both the same as a default practice.
In your example it would be:
version: '3'
services:
  testapi:
    links:
      - testweb:testweb
  testweb:
    links:
      - testapi:testapi
AWS does not use Docker Compose but provides an interface for adding Task Definitions.
Containers that need to communicate with each other can be put in the same Task Definition. We can then specify, in the Links section, the containers that will be called from the current container. Each container can be given its container name in the "Host" section of the Task Definition. Once I added the container name to the "Host" field, Container1 (TestWeb) was able to communicate with Container2 (TestAPI).
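If you are configuring this directly in an ECS Task Definition rather than a compose file, the same links can be declared in the containerDefinitions JSON. A trimmed, illustrative fragment (image names and ports are examples; note that links require the bridge network mode, while awsvpc mode instead lets containers in one task reach each other via localhost):

```json
{
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "TestAPI",
      "image": "myrepo/testapi:latest",
      "portMappings": [{ "containerPort": 80, "hostPort": 8081 }]
    },
    {
      "name": "TestWeb",
      "image": "myrepo/testweb:latest",
      "portMappings": [{ "containerPort": 80, "hostPort": 8080 }],
      "links": ["TestAPI:testapi"]
    }
  ]
}
```

The "TestAPI:testapi" alias is what makes the hostname in http://testapi/... resolvable from inside TestWeb.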