How to get flink streaming jar to kubernetes - kubernetes

With Maven I am building a fat JAR for my streaming app, and I have to deploy that JAR to a Kubernetes cluster. The enterprise doesn't have an internal Docker hub, so my option is to build the image as part of Jenkins and use the same image in the Kubernetes job manager config. I would appreciate any example demonstrating the project layout and the steps to deploy.
I used the build.sh script from https://github.com/apache/flink/blob/release-1.7/flink-container/docker/README.md and was able to build a Docker image. Using Docker Compose I am able to get the app running, but when trying Kubernetes as specified in https://github.com/apache/flink/blob/release-1.7/flink-container/kubernetes/README.md#deploy-flink-job-cluster I am seeing "image not found".

Kubernetes does not manage images; it relies on Docker for that. You can check the Docker documentation About images, containers, and storage drivers.
In Kubernetes you can use the following registries: Google Container Registry, AWS EC2 Container Registry, Azure Container Registry, IBM Cloud Container Registry, and your own private registry.
You can read the Kubernetes documentation on how to Pull an Image from a Private Registry.
You can find many projects helping with the setup of your own private registry.
One of the easiest ones is the project k8s-local-docker-registry by SeldonIO.
Start/Stop private registry in cluster
start private registry
./start-docker-private-registry
stop private registry
./stop-docker-private-registry
Check that the registry catalog can be accessed and that you are able to push an image:
(set -x && curl -X GET http://127.0.0.1:5000/v2/_catalog && docker pull busybox && docker tag busybox 127.0.0.1:5000/busybox && docker push 127.0.0.1:5000/busybox)
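For example (a minimal sketch, assuming the local registry above is running on 127.0.0.1:5000 and that build.sh produced an image named flink-job), you could tag and push the Flink job image and then reference it from the job cluster manifests:
docker tag flink-job 127.0.0.1:5000/flink-job
docker push 127.0.0.1:5000/flink-job
Then set the image field in the job cluster YAML to 127.0.0.1:5000/flink-job before creating the resources with kubectl.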

Related

Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment yaml files, where should I store these files? (currently I store them locally in the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't be on your master node (ever); they should be stored in a version control system (such as GitHub, GitLab, Bitbucket, etc.).
To automate the deployment of your Docker image based on a new artifact version in ECR, you can use a great tool named FluxCD. It is actually very simple to install (https://fluxcd.io/docs/get-started/), and you can easily configure it to automatically deploy your images to your cluster each time there is a new image in your ECR registry.
This way your CodePipeline will build the code, run the tests, build the image, tag it and push it to ECR, and FluxCD will deploy it to Kubernetes. (It can also natively be configured to sync with your cluster every X minutes, based on your configuration, so even if you make a small change to your manifests it will be deployed automatically!)
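As a rough illustration only, with Flux's image automation controllers installed you would declare the ECR repository to watch and a policy for selecting new tags; the API version, registry URL, tag range and names below are placeholders:
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-service
  namespace: flux-system
spec:
  image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service   # placeholder ECR repo
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: '>=1.0.0'   # deploy any new semver tag at or above 1.0.0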
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was specifically designed for Kubernetes, so it offers a much better way to deploy to K8s.
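For instance, a minimal Argo CD Application pointing at the Git repo that holds your manifests might look roughly like this (the repo URL, path and namespaces are placeholders):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-manifests.git   # placeholder repo with your YAML
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift in the cluster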

Google cloud kubernetes start docker container image with parameters such as --runtime=nvidia

I want Google Cloud Kubernetes to run my Docker image with parameters such as:
docker run -it --runtime=nvidia --volume path_to_data:/root/data my_image
How to configure these parameters?
It depends on the registry's visibility.
For public images such as nginx it's simple:
Just run:
kubectl run mynginx --image=nginx
There is a good explanation here: Running a Docker Container on Google Container Engine - YouTube
As for a private registry, there are many cases.
According to the manual Images - Kubernetes:
Using a Private Registry
Private registries may require keys to read images from them. Credentials can be provided in several ways:
Using Google Container Registry: per-cluster; automatically configured on Google Compute Engine or Google Kubernetes Engine; all pods can read the project's private registry.
Using Google Container Registry
Kubernetes has native support for the Google Container Registry (GCR), when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag).
All pods in a cluster will have read access to images in this registry.
The kubelet will authenticate to GCR using the instance's Google service account. The service account on the instance will have the https://www.googleapis.com/auth/devstorage.read_only scope, so it can pull from the project's GCR, but not push.
So, to run an image from gcr.io, you can:
kubectl run myapp --image=gcr.io/my_project/image:tag
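As for the other docker run flags from the question: Kubernetes has no --runtime=nvidia or --volume options; instead you request GPUs through resource limits and mount volumes in the Pod spec. A minimal sketch, assuming the NVIDIA device plugin / GPU drivers are installed on your GKE nodes (names and paths are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-image
    image: gcr.io/my_project/image:tag
    resources:
      limits:
        nvidia.com/gpu: 1        # rough equivalent of --runtime=nvidia
    volumeMounts:
    - name: data
      mountPath: /root/data      # rough equivalent of --volume path_to_data:/root/data
  volumes:
  - name: data
    hostPath:
      path: /path_to_data        # illustrative only; a PersistentVolumeClaim is more typical on GKE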

How do I update a service in the cluster to use a new docker image

I have created a new docker image that I want to use to replace the current docker image. The application is on the kubernetes engine on google cloud platform.
I believe I am supposed to use the gcloud container clusters update command, although I struggle to see how it works and how I'm supposed to replace the old Docker image with the new one.
You may want to use kubectl to interact with your GKE cluster. The method of updating the image depends on how the Pod/container was created.
For some example commands, see https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources
For example, kubectl set image deployment/frontend www=image:v2 will do a rolling update of the "www" containers of the "frontend" deployment, updating the image.
Getting up and running on GKE: https://cloud.google.com/kubernetes-engine/docs/quickstart
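Putting that together, a typical flow with the cheat-sheet example names would be:
kubectl set image deployment/frontend www=image:v2
kubectl rollout status deployment/frontend
And if the new image misbehaves, you can roll back with:
kubectl rollout undo deployment/frontend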
You can use Container Registry[1] as a single place to manage Docker images.
Google Container Registry provides secure, private Docker repository storage on Google Cloud Platform. You can use gcloud to push[2] images to your registry, then you can pull images using an HTTP endpoint from any machine.
You can also use Docker Hub repositories[3], which allow you to share container images with your team, customers, or the Docker community at large.
[1] https://cloud.google.com/container-registry/
[2] https://cloud.google.com/container-registry/docs/pushing-and-pulling
[3] https://docs.docker.com/docker-hub/repos/
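For example (project and image names are placeholders), pushing an image to Container Registry from your machine looks roughly like this:
gcloud auth configure-docker
docker tag my-app gcr.io/my-project/my-app:v2
docker push gcr.io/my-project/my-app:v2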

Can a Service Fabric Container project pull from Docker Hub?

I have created a new Service Fabric Container project in Visual Studio that I am trying to test by publishing to the local cluster. I have created a Windows Container image that I have run locally in Docker. I pushed the image to a private registry in Docker Hub.
When I publish the project to the local cluster, it deploys, but then I get an error:
Error event: SourceId='System.Hosting', Property='Download:1.0:1.0'.
There was an error during download.Failed to download container image docker.io/(username)/(repository)
All the examples show pulling an image from Azure Container Registry. Does Service Fabric only work with ACR, or do I have to add additional configuration to my service manifest to use a private Docker Hub registry?
Edit: also, it seems unable to find the container locally. I tried using the tagged local name of the image from the local repository (I checked using "docker images" and it is there). Same result. Service Fabric should be able to find it:
Service Fabric will pull down the image (if it's not already in the local registry) and launch a container based on the arguments you provide.
from MSDN blog on Service Fabric
It looks like the problem is that Service Fabric does not support container deployment on Windows 10 (and my dev machine is Win10, so local development/testing is out). There are notes to this effect in the Azure documentation, but I guess I didn't notice them or glossed over them...

Jenkins plugin to deploy a docker image into a k8s cluster?

I have been looking for a Jenkins plugin to deploy a Docker image to a Kubernetes cluster using the K8s API. It would access the REST API with a YAML file and credentials that are already configured. If there is no such plugin, please point me to other simple examples. Thanks for reading.
I think you are looking for plugins similar to these:
https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+CI+Plugin
https://wiki.jenkins.io/display/JENKINS/Kubernetes+Pipeline+Plugin
I'm using a simple Execute Shell step in Jenkins with the following command:
kubectl --server="https://kubeapi.example.com" --token=$ACCESS_TOKEN set image deployment/deployment_name container_name=repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
You can save your $ACCESS_TOKEN as Secret text and use it as a variable in Jenkins.
The job builds, tags, and publishes a Docker image to the Docker repo, then sets the image in the Kubernetes cluster.
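Expanded a bit, the whole Execute Shell step might look roughly like this (the repo, deployment and container names, and the SHORT_GIT_COMMIT variable, are assumptions carried over from the command above):
SHORT_GIT_COMMIT=$(git rev-parse --short HEAD)
docker build -t repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT" .
docker push repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
kubectl --server="https://kubeapi.example.com" --token=$ACCESS_TOKEN set image deployment/deployment_name container_name=repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"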