From this example: DockerOperator has a docker_url parameter, documented as "URL of the host running the docker daemon."
But when I run on Kubernetes Engine on Google Cloud Platform, how can I find this docker_url on Kubernetes?
You can run the following command to find out the Docker URL:
$ docker-machine url [docker_machine_name]
Docker Machine is not installed on the container images by default, so you will have to install docker-machine manually by following these steps.
You will also have to use the Ubuntu image if you want this functionality. I tried to install docker-machine on a COS (Container-Optimized OS) image, and it does not work because that image lacks the necessary dependencies.
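For example, assuming docker-machine is installed and a machine named default exists (the name and address here are hypothetical), the command prints the daemon endpoint:

$ docker-machine url default
tcp://192.168.99.100:2376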
On the Visual Studio Code website, at the following link:
https://code.visualstudio.com/docs/remote/remote-overview
the site says that VS Code Remote Development can connect in three ways:
Remote SSH
Remote - Containers
Remote - WSL
The linked page about Containers says:
Linux: Docker CE/EE 18.06+ and Docker Compose 1.21+. (The Ubuntu snap package is not supported.)
But it also says:
Other glibc based Linux containers may work if they have needed Linux prerequisites.
So it is unclear if the extension works with non-Docker containers.
Is it possible to use this extension to develop software inside LXC containers (locally or remotely)?
LXC and LXD are system containers, therefore you can definitely use the Remote SSH method.
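As a minimal sketch of the Remote SSH route (the container name and image are placeholders, and this assumes the LXD container is reachable over the network):

# Launch a system container and install an SSH server inside it
lxc launch ubuntu:20.04 dev-box
lxc exec dev-box -- apt-get update
lxc exec dev-box -- apt-get install -y openssh-server
# Find the container's IP, add a matching Host entry to ~/.ssh/config,
# then connect from VS Code via "Remote-SSH: Connect to Host..."
lxc list dev-box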
The Containers method has been designed for Docker. It might be possible to get it to work with LXD with an appropriate devcontainer.json, but you would need to figure this one out. I could not find an existing guide for this.
One could do this with Ansible and LXD; in fact, it would be nicer.
I want to start practicing with k8s for the CKAD exam. I run Ubuntu 18.04.
Everywhere I look, I see that I need to download VirtualBox for minikube. I believe VirtualBox is only needed if I don't start my cluster with a driver, but if I use the Docker driver when starting my cluster, shouldn't that be enough? Is microk8s a better option?
It seems that the preferred way is to use the --driver=docker driver instead of --driver=none for minikube: although it is technically not bare metal, it is significantly easier to configure and does not require root access. The 'none' driver is recommended for advanced users only. (The info below is from https://minikube.sigs.k8s.io/docs/drivers/docker/)
docker
Overview
The Docker driver allows you to install Kubernetes into an existing Docker install. On Linux, this does not require virtualization to be enabled.
Requirements
Install Docker 18.09 or higher
amd64 or arm64 system.
Usage
Start a cluster using the docker driver:
minikube start --driver=docker
To make docker the default driver:
minikube config set driver docker
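A quick way to check that the cluster actually came up inside Docker (standard minikube, kubectl, and docker commands):

# The node should be ready and backed by a Docker container
minikube status
kubectl get nodes
docker ps --filter name=minikube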
Yes, you can. Check here.
Minikube also supports a --driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment but not a hypervisor.
Just run
$ minikube start --vm-driver=none
Caution: If you use the none driver, some Kubernetes components run as privileged containers that have side effects outside of the Minikube environment. Those side effects mean that the none driver is not recommended for personal workstations.
I am exploring and learning about containers and Kubernetes using podman and minikube on a Linux workstation. I use podman to build images on the workstation and would like to deploy these images in minikube, also running on the workstation, using the kvm2 virtual machine driver. I also start minikube using the CRI-O container runtime.
What are efficient workflows to deploy these images from the workstation to minikube in this scenario? Docker is not running on the minikube VM, so reusing the Docker daemon as described in the minikube documentation is not an option. Sharing the host file system with minikube also appears not to be viable at this time when using kvm2.
Is running a local registry that is visible to both the workstation and the minikube vm the best option? Answers to How to use local docker images with Minikube? and (Kubernetes + Minikube) can't get docker image from local registry appear to offer good solutions for configuring a local registry.
Would skopeo be a solution?
Edit: this is a nice post describing how to set up a registry using podman: https://computingforgeeks.com/create-docker-container-registry-with-podman-letsencrypt/
Minikube documentation provides the foundation for a potential workflow at https://minikube.sigs.k8s.io/docs/tasks/docker_registry/. In order to use podman in lieu of docker, I did the following:
Start minikube, as instructed, with the --insecure-registry flag. I specifically use
minikube start --network-plugin=cni --enable-default-cni --bootstrapper=kubeadm --container-runtime=cri-o --cpus 4 --memory 4g --insecure-registry "192.168.39.0/24"
Enable the minikube registry addon.
minikube addons enable registry
Configure podman to use the insecure minikube registry by adding the registry to the insecure registries section of /etc/containers/registries.conf. This section now looks like
[registries.insecure]
registries = ['192.168.39.175:5000']
where 192.168.39.175 is the minikube IP. This IP may change following minikube restarts.
Follow the build, push and run commands in https://minikube.sigs.k8s.io/docs/tasks/docker_registry/ substituting podman for docker. This assumes the test-img container file exists.
Build: podman build --tag $(minikube ip):5000/test-img .
Push: podman push $(minikube ip):5000/test-img
Run: kubectl run test-img --image=$(minikube ip):5000/test-img
This worked but suffers from a serious complication: there is no apparent way at this time to set the IP address for the minikube VM when using kvm2. The IP will always be in the 192.168.39.0/24 subnet but that is the only certainty. Each time minikube is started the IP address of the registry will change which has significant implications for podman and the workflow in general.
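One possible mitigation is a small helper script, run after each minikube start, that rewrites the insecure-registries entry with the current minikube IP. This is only a sketch and assumes the registries.conf layout shown above:

#!/bin/sh
# Sketch: point podman's insecure-registry entry at the current minikube IP,
# since the IP can change every time minikube starts.
REGISTRY="$(minikube ip):5000"
sudo sed -i "/\[registries.insecure\]/,/^\[/ s|^registries = .*|registries = ['${REGISTRY}']|" /etc/containers/registries.conf
echo "insecure registry set to ${REGISTRY}"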
More to come on another solution.
I'm wondering how I can install a package inside the minikube VM; I need some tools.
I have tried the /bin/toolbox container, but it does not have an internet connection.
[root@docker-fedora-24 ~]# dnf update --verbose
cachedir: /var/cache/dnf
DNF version: 1.1.9
Cannot download 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64 [Could not resolve host: mirrors.fedoraproject.org].
Error: Failed to synchronize cache for repo 'updates'
I have tried the same toolbox script on my computer and it works properly.
What configuration parameters am I missing in minikube or systemd-nspawn?
Or how can I build a customized minikube VM?
You can run minikube without a VM on your local Docker (if you use Linux):
minikube start --vm-driver=none
An alternative: run toolbox with docker run --net=host ... to make the container's networking more transparent. Troubleshoot your internet connection with nslookup, traceroute/tracepath, curl -v, and ifconfig (sketched below).
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:Ch04:_Simple_Network_Troubleshooting#.WfY1xGi0OUk
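For example, the tools mentioned above can be run in sequence to narrow down where the connection fails (the hostname is just the one from the error message):

# Can the container resolve DNS names at all?
nslookup mirrors.fedoraproject.org
# Is there a route out of the container?
tracepath mirrors.fedoraproject.org
# Does an HTTPS request succeed, and where does it stall?
curl -v https://mirrors.fedoraproject.org
# Which interfaces and addresses does the container actually have?
ifconfig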
Minikube is not meant to be tweaked. The advised method is to prepare a Helm chart for your application. As part of the Helm chart you can add whatever tools you need in your Dockerfile, including make. Then you can install or upgrade your package in Kubernetes/minikube using Helm.
I had a similar problem when I wanted to use tcpdump in the minikube VM.
I ended up using minikube mount SRC-dir:DST-dir to mount the host folder inside the VM and copying the tcpdump binary along with dependent libs (libcrypto and libpcap) to the mount point.
Then I executed tcpdump from the minikube VM and it worked.
Note: my host arch and the minikube VM arch (x86_64) were the same.
Note also: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:DST-dir has to be done inside the VM.
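Roughly, the workflow above looks like this (all paths are examples; the exact library file names depend on the host distribution):

# On the host: stage tcpdump and its dependent libraries in a shared folder
mkdir -p ~/minikube-share
cp /usr/sbin/tcpdump /usr/lib64/libpcap.so.1 /usr/lib64/libcrypto.so.1.1 ~/minikube-share/
minikube mount ~/minikube-share:/mnt/share &
# Inside the VM (minikube ssh): point the loader at the copied libs and run
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/mnt/share
/mnt/share/tcpdump -i eth0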
I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me about how to start a local cluster without a working Docker installation?
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
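With those variables set, start the cluster the same way as before:
$ ./hack/local-up-cluster.sh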
This is described in the docs for getting started with a local rkt cluster.
Try running export CONTAINER_RUNTIME="rocket" and then re-running the script.