Private docker.io registry in microk8s - kubernetes

I have an issue with microk8s hitting the rate limit for the docker.io registry:
ctr: failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/kube-controllers/manifests/sha256:bf58609ff39089533b80ff2a10fffd1302346f153c66e24d0572fb8b198daea1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
I wanted to configure private repository authorization for docker.io and followed the instructions for doing so, but it does not seem to work with the docker.io registry.
I modified the configuration file
/var/snap/microk8s/current/args/containerd-template.toml
with the following content:
[plugins."io.containerd.grpc.v1.cri".registry]
# 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io", ]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."docker.io".auth]
username = ""
password = ""
auth = ""
email = ""
However, it still does not seem to work for the docker.io registry.
I'm aware of this solution; however, if I recall correctly, it needs to be applied to every namespace separately. I'm looking for a one-shot solution for the whole Kubernetes cluster.
Is there such a solution, or are Kubernetes secrets the only way to go?
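For reference, a minimal sketch of the auth section with Docker Hub credentials filled in; the username and password are placeholders for a real Docker Hub account:

[plugins."io.containerd.grpc.v1.cri".registry.configs]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."docker.io".auth]
    # placeholder credentials - substitute a real Docker Hub user and password/access token
    username = "<docker-hub-username>"
    password = "<docker-hub-password-or-token>"

After editing the template, microk8s should be restarted (microk8s stop followed by microk8s start) so that the template is re-rendered into the active containerd.toml.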

Related

Retrieve auto scaling group instance ip's and provide it to ansible

I'm currently developing a Terraform script and Ansible roles in order to install MongoDB with replication. I'm using an auto scaling group, and I need to pass the EC2 instance private IPs to Ansible as extra vars. Is there any way to do that?
When it comes to rs.initiate(), is there any way to add the EC2 private IPs to the Mongo cluster while Terraform is creating the instances?
I'm not really sure how it's done with ASGs; a combination of user-data and EC2 metadata would probably be helpful.
But I do it as shown below when there is a fixed number of nodes. Posting this answer as it can be helpful to someone in some way.
Use the EC2 dynamic inventory script.
Ref - https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html
This is basically a Python script, ec2.py, which gets the instance private IP using tags etc. It comes with a config file named ec2.ini.
Tag your instance in TF script (you add a role tag) -
resource "aws_instance" "ec2" {
....
tags = "${merge(var.tags, map(
"description","mongodb-node",
"role", "mongodb-node",
"Environment", "${local.env}",))}"
}
output "ip" {
value = ["${aws_instance.ec2.private_ip}"]
}
Get the instance private IP in playbook -
- hosts: localhost
  connection: local
  tasks:
    - debug: msg="MongoDB Node IP is - {{ hostvars[groups['tag_role_mongodb-node'][0]].inventory_hostname }}"
Now run the playbook using TF null_resource -
resource null_resource "ansible_run" {
triggers {
ansible_file = "${sha1(file("${path.module}/${var.ansible_play}"))}"
}
provisioner "local-exec" {
command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ./ec2.py --private-key ${var.private_key} ${var.ansible_play}"
}
}
You've got to make sure the AWS-related environment variables are present/exported so Ansible can fetch the EC2 metadata. Also make sure ec2.py is executable.
If you want to get the private IP, change the following config in ec2.ini -
destination_variable = private_ip_address
vpc_destination_variable = private_ip_address
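As a quick sanity check for the points above (AWS credentials exported for the dynamic inventory, and ec2.py executable), something like this might help; the access keys and region are placeholders:

# credentials the dynamic inventory script uses to query EC2
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
export AWS_DEFAULT_REGION=<your-region>

# the inventory script must be executable for ansible-playbook -i ./ec2.py
chmod +x ec2.py

# dump the generated inventory; the tag_role_mongodb-node group should show up here
./ec2.py --list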

Run kubernetes build from terraform

I'm trying to run a simple test that builds a simple nginx on Kubernetes from Terraform.
This is my first time working with Terraform.
This is the basic Terraform file:
provider "kubernetes" {
host = "https://xxx.xxx.xxx.xxx:8443"
client_certificate = "${file("~/.kube/master.server.crt")}"
client_key = "${file("~/.kube/master.server.key")}"
cluster_ca_certificate = "${file("~/.kube/ca.crt")}"
username = "xxxxxx"
password = "xxxxxx"
}
resource "kubernetes_service" "nginx" {
metadata {
name = "nginx-example"
}
spec {
selector {
App = "${kubernetes_pod.nginx.metadata.0.labels.App}"
}
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
resource "kubernetes_pod" "nginx" {
metadata {
name = "nginx-example"
labels {
App = "nginx"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "example"
port {
container_port = 80
}
}
}
}
I'm getting the following error after running terraform apply.
Error: Error applying plan:
1 error(s) occurred:
kubernetes_pod.nginx: 1 error(s) occurred:
kubernetes_pod.nginx: the server has asked for the client to provide credentials (post pods)
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
I have admin permissions on Kubernetes and everything is working correctly, but for some reason I'm getting that error.
What am I doing wrong?
Thanks
Regarding #matthew-l-daniel's question:
When I'm only using the username/password I get this error:
Error: Error applying plan:
1 error(s) occurred:
kubernetes_pod.nginx: 1 error(s) occurred:
kubernetes_pod.nginx: Post https://xxx.xxx.xxx.xxx:8443/api/v1/namespaces/default/pods:
x509: certificate signed by unknown authority
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
I tried using the server name and the server IP and got the same error every time.
When using the certs I got the error from the original post, regarding the "credentials".
I forgot to mention that this is an OpenShift installation. I don't believe it has any impact in the end, but I thought I should mention it.
The solution was rather simple: I was using the master crt and key from OpenShift in Terraform.
I then tested it using the admin crt and key from OpenShift, and it worked.
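For reference, a minimal sketch of the provider block that ended up working, assuming the admin certificate and key have been exported from OpenShift to ~/.kube/ (the file names here are illustrative, not the exact OpenShift paths):

provider "kubernetes" {
  host = "https://xxx.xxx.xxx.xxx:8443"

  # admin client credentials instead of the master ones (paths are assumptions)
  client_certificate     = "${file("~/.kube/admin.crt")}"
  client_key             = "${file("~/.kube/admin.key")}"
  cluster_ca_certificate = "${file("~/.kube/ca.crt")}"
}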
Aside from the official kubernetes provider documentation suggesting only certificate or basic (user/pass) should be required, this sounds like an OpenShift issue. Have you been able to obtain any logs from the OpenShift cluster?
Some searching links the message you are seeing to some instability bugs within Kubernetes wherein the kubelet does not properly register after a reboot. I would manually confirm the node shows as Ready in OpenShift before you attempt a deployment, as until this occurs Terraform will not be able to interact with it.
If in fact the node is not Ready, Terraform is just surfacing the underlying error passed back from OpenShift.
Separately, the error you are seeing when trying to authenticate using purely certificate parameters is indicative of a misconfiguration. A similar question was raised on the Kubernetes GitHub, and the suggestion there was to investigate the Certificate Authority loaded on to the cluster.
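If it helps, a couple of quick checks along those lines (assuming the same ~/.kube/ paths as in the question):

# inspect who issued the client certificate and the cluster CA
openssl x509 -in ~/.kube/master.server.crt -noout -subject -issuer
openssl x509 -in ~/.kube/ca.crt -noout -subject -issuer

# confirm the CA actually validates the API server endpoint
# (even a 401/403 response means the TLS/CA check itself passed)
curl --cacert ~/.kube/ca.crt https://xxx.xxx.xxx.xxx:8443/version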

Securing a REST call to Vault Secrets management

I've been trying to figure out how to do this for a while. Essentially, Vault does not have a secure option for its REST calls. I want these REST calls to be encrypted as close to end-to-end, between point A and point B, as possible. My thoughts have been the following:
Use an SSH tunnel
Use a TLS tunnel like Stunnel
I currently have Vault in a Docker container, so that’s something else to mention. Has anyone encountered this situation, and how did you deal with it?
UPDATE: So, using the Python API (hvac), I am getting the following error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='0.0.0.0',
port=8200): Max retries exceeded with url: /v1/secret (Caused by
SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_record',
'wrong version number')],)",),))
Using the following commands:
import os
import hvac
client = hvac.Client(url='https://0.0.0.0:8200', token='my-token-here')
Vault has TLS enabled by default, so all your REST calls are already encrypted. If you are having trouble using HTTPS, have a look at the documentation for the VAULT_CACERT and VAULT_CAPATH environment variables.
From Vault's documentation:
VAULT_CACERT
Path to a PEM-encoded CA certificate file on the local disk. This file
is used to verify the Vault server's SSL certificate. This environment
variable takes precedence over VAULT_CAPATH.
VAULT_CAPATH Path to a directory of PEM-encoded CA certificate files
on the local disk. These certificates are used to verify the Vault
server's SSL certificate.
You can use tools like tcpdump or Wireshark to make sure that your requests are indeed encrypted.
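For example, a quick check along those lines might look like this (assuming Vault is listening on port 8200 on the local machine):

# watch the traffic; with TLS enabled the payloads should not be readable plain text
sudo tcpdump -i lo -A port 8200

# or inspect the TLS handshake and the served certificate directly
openssl s_client -connect localhost:8200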
To elaborate for Vault running in a container, you need to create a configuration file for Vault that contains something similar to this (Chef/Ruby code):
config_content = %(
  "storage": {
    ...
  },
  "default_lease_ttl": "768h",
  "max_lease_ttl": "8766h",
  "listener": [
    {"tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 0,
      "tls_cert_file": "/vault/certs/my-cert-combined.pem",
      "tls_key_file": "/vault/certs/my-cert.key"
    }}],
  "log_level": "info"
)
Especially the listener portion. Make your backend storage whatever you want to use (not the Dev default of in-memory!).
Note that you will also need a valid certificate and its private key in the volume bound into the container.
Store this configuration file in a directory that gets bound inside the container to the path /vault/config. I use /var/vault/config on my host. For example (more Ruby/Chef):
docker_container 'vault' do
  container_name 'vault'
  tag 'latest'
  port '8200:8200'
  cap_add ['IPC_LOCK']
  restart_policy 'always'
  volumes ['/var/vault:/vault']
  command 'vault server -config /vault/config'
  action :run_if_missing
end
That command tells Vault to look in /vault/config, and it should find your config file there, with a .json extension. Note that it is important for the config file's listener->tcp->address to be 0.0.0.0 rather than 127.0.0.1, because otherwise Vault will not handle external accesses properly.
Then Vault will start up with TLS encryption on all transactions. Set VAULT_ADDR to https://your-host.com:8200 and away you go.
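Tying this back to the hvac snippet in the question, a minimal sketch might look like the following; the host name, token, and CA path are placeholders for whatever your setup uses:

import hvac

# verify the server certificate against the CA that signed it
# (the CA path is an assumption - adjust it to your own volume layout)
client = hvac.Client(
    url='https://your-host.com:8200',
    token='my-token-here',
    verify='/vault/certs/ca.pem',
)
print(client.is_authenticated())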
In my case, I was testing it in my local environment, so instead of calling the secured https endpoint (https://localhost:8200), I called the regular http one (http://localhost:8200).
This solved the error.

Creating Kubernetes Endpoint in VSTS generates error

When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message
"The Kubconfig does not contain user field. Please check the kubeconfig." - is always displayed.
I have tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and they all end up with the same issue. Has anyone been successful in creating a new endpoint?
This is followed up in TsuyoshiUshio/KubernetesTask issue 35.
I tried to reproduce this; however, I can't.
I'm not sure, but my guess is that it might be a version mismatch between the cluster and the kubectl/kubeconfig you download with the download task.
A workaround might be like this:
Run kubectl version on your local machine and check the current server/client versions.
Specify the same version as the server on the download task (by default it is 1.5.2).
Look at the log of your failed release pipeline; you can see which kubectl command was executed. Do the same thing on your local machine, adjusted to your local PC's environment.
The point is: before going to VSTS, download kubectl yourself.
Then put the kubeconfig in the default location, like ~/.kube/config, or set the KUBECONFIG environment variable to point to the config file.
Then execute kubectl get nodes and make sure it works.
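A minimal version of that local check might look like this (the kubeconfig path is just the default location):

# compare the client and server versions
kubectl version

# point kubectl at the kubeconfig you intend to paste into VSTS
export KUBECONFIG=~/.kube/config

# make sure it can actually reach the cluster
kubectl get nodes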
My kubeconfig is in a different format from yours. If you use AKS, use the az aks install-cli and az aks get-credentials commands.
Please refer to https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough .
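For example (the resource group and cluster names below are placeholders):

# install kubectl through the Azure CLI
az aks install-cli

# merge the AKS credentials into ~/.kube/config
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

kubectl get nodes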
If it works locally, the config file should work in the VSTS task environment (or this task or VSTS has a bug).
I had the same problem on VSTS.
Here is my workaround to get a Service Connection working (in my case to GCloud):
Switched Authentication to "Service Account"
Run the two commands mentioned by the info icon next to the Token and Certificate fields: "Token to authenticate against Kubernetes.
Use the ‘kubectl get serviceaccounts -o yaml’ and ‘kubectl get secret
-o yaml’ commands to get the token."
kubectl get secret -o yaml > kubectl-secret.yaml
Search inside the file kubectl-secret.yaml for the values ca.crt and token
Enter those values into the required fields in VSTS
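If it helps, pulling the two values out of the service-account secret can also be done directly; the secret name is a placeholder, and the token is stored base64-encoded inside the secret:

# find the secret that backs your service account
kubectl get secrets

# extract the CA certificate and the decoded token
kubectl get secret <secret-name> -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
kubectl get secret <secret-name> -o jsonpath='{.data.token}' | base64 --decode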
The generated config I was using had a duplicate line; removing it corrected the issue for me.
users:
- name: cluster_stuff_here
- name: cluster_stuff_here

Terraform with Google Container Engine (Kubernetes): Error executing access token command "...\gcloud.cmd"

I'm trying to deploy a module (Docker image) to Google Container Engine. Here is what I have in my Terraform config file:
terraform.tf
# Google Cloud provider
provider "google" {
  credentials = "${file("google_credentials.json")}"
  project     = "${var.google_project_id}"
  region      = "${var.google_region}"
}

# Google Container Engine (Kubernetes) cluster resource
resource "google_container_cluster" "secureskye" {
  name               = "secureskye"
  zone               = "${var.google_kubernetes_zone}"
  additional_zones   = "${var.google_kubernetes_additional_zones}"
  initial_node_count = 2
}

# Kubernetes provider
provider "kubernetes" {
  host     = "${google_container_cluster.secureskye.endpoint}"
  username = "${var.google_kubernetes_username}"
  password = "${var.google_kubernetes_password}"

  client_certificate     = "${base64decode(google_container_cluster.secureskye.master_auth.0.client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.secureskye.master_auth.0.client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.secureskye.master_auth.0.cluster_ca_certificate)}"
}

# Module UI
module "ui" {
  source = "./modules/ui"
}
My problem is this: google_container_cluster was created successfully, but creation of the ui module (which contains two resources, kubernetes_service and kubernetes_pod) fails with the error
* kubernetes_pod.ui: Post https://<ip>/api/v1/namespaces/default/pods: error executing access token command "<user_path>\\AppData\\Local\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd config config-helper --format=json": err=exec: "<user_path>\\AppData\\Local\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd": file does not exist output=
So, questions:
1. Do I need gcloud + kubectl installed, even though google_container_cluster was created successfully before I installed either of them?
2. I want to use credentials, project, and region that are independent and separate from the ones used by the gcloud and kubectl CLIs. Am I doing this right?
I have been able to reproduce your scenario by running the Terraform config file you provided (except the Module UI part) on a Linux machine, so your issue should be related to that last part of the code.
Regarding your questions:
I am not sure, because I tried from Google Cloud Shell, where both gcloud and kubectl are already preinstalled, although I would recommend you install them just to make sure that is not the issue here (a quick check is sketched at the end of this answer).
For the credentials part, I added two new variables to the variables.tf Terraform configuration file, as in this example (those credentials do not need to be the same as the ones in gcloud or kubectl; use your preferred credentials in this case):
variable "google_kubernetes_username" {
default = "<YOUR_USERNAME>"
}
variable "google_kubernetes_password" {
default = "<YOUR_PASSWORD>"
}
Maybe you could share more information about what is in your UI module, in order to understand which file does not exist. I guess you are running the deployment from a Windows machine, judging by the notation of the paths to your files, but that should not be an important issue.
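As for installing gcloud and kubectl (point 1 above), a quick check might look like this, assuming the Cloud SDK is already on your PATH:

# install kubectl through the Cloud SDK component manager
gcloud components install kubectl

# this is the command the Kubernetes provider shells out to in the error above;
# once gcloud is installed and authenticated it should print JSON containing an access token
gcloud config config-helper --format=json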