Invalid version slug in Terraform - Kubernetes

I am trying to create a Kubernetes cluster with Terraform, but it shows me an error. I have changed the value of version several times, but it did not work.
resource "digitalocean_kubernetes_cluster" "lox" {
name = "lox"
region = "nyc1"
version = "1.13.4-do.0"
node_pool {
name = "worker-pool"
size = "s-1vcpu-2gb"
node_count = 2
}
This is the error:
Error: Error creating Kubernetes cluster: POST https://api.digitalocean.com/v2/kubernetes/clusters: 422 validation error: invalid version slug
on 01-cluster.tf line 1, in resource "digitalocean_kubernetes_cluster" "lox":
1: resource "digitalocean_kubernetes_cluster" "lox" {
How can I solve it?

Use the command below to list the current valid version slugs, and use one of them as version:
doctl kubernetes options versions

The version you're setting does not exist.
Check https://www.digitalocean.com/docs/kubernetes/changelog/ for all the available versions, or use the doctl command line.
If you're targeting 1.13, you may use 1.13.12-do.8 as the version, released on 22/06/2020.
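Applied to the resource from the question, only the version value needs to change (1.13.12-do.8 is the slug mentioned above; substitute whichever slug is current for you):

resource "digitalocean_kubernetes_cluster" "lox" {
  name    = "lox"
  region  = "nyc1"
  version = "1.13.12-do.8"

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    node_count = 2
  }
}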

I wasn't able to find the version in the changelog; I found it at https://slugs.do-api.dev/ (tab "Kubernetes versions").

doctl kubernetes options versions
Slug           Kubernetes Version    Supported Features
1.24.4-do.0    1.24.4                cluster-autoscaler, docr-integration, ha-control-plane, token-authentication
1.23.10-do.0   1.23.10               cluster-autoscaler, docr-integration, ha-control-plane, token-authentication
1.22.13-do.0   1.22.13               cluster-autoscaler, docr-integration, ha-control-plane, token-authentication
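To avoid hardcoding a slug that will eventually be retired, the DigitalOcean provider also offers a digitalocean_kubernetes_versions data source. A minimal sketch, assuming Terraform 0.12+ syntax and a recent provider version (the 1.24. prefix mirrors the doctl output above):

data "digitalocean_kubernetes_versions" "current" {
  version_prefix = "1.24."
}

resource "digitalocean_kubernetes_cluster" "lox" {
  name   = "lox"
  region = "nyc1"
  # Always resolves to a valid slug for the chosen minor release
  version = data.digitalocean_kubernetes_versions.current.latest_version

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    node_count = 2
  }
}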

Related

AWS Personalize - HPO - solutionConfig

I have incorporated the solutionConfig as part of HPO in the AWS Personalize service.
solutionConfig = {
    "optimizationObjective": {
        "itemAttribute": "ITEM_WEIGHT",
        "objectiveSensitivity": "HIGH"
    },
I am getting the following error:
Unknown parameter in solutionConfig: "optimizationObjective", must be one of: eventValueThreshold, hpoConfig, algorithmHyperParameters, featureTransformationParameters, autoMLConfig]
It looks like you may be using a version of the AWS SDK that does not include support for the optimizationObjective parameter of the solution config. Check to make sure that you're using the latest version of the AWS SDK.
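If you're calling Personalize from Python via boto3 (an assumption; the snippet doesn't say which SDK it uses), the upgrade and a version check look like:

# Upgrade the AWS SDK for Python and confirm the installed version
pip install --upgrade boto3 botocore
python -c "import boto3; print(boto3.__version__)"

optimizationObjective support was added in a relatively recent release, so compare the printed version against the boto3 changelog.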

Ansible dynamic inventory for Kubernetes

I am trying to use the Kubernetes plugin in Ansible to build a dynamic inventory from my k8s cluster. I have followed this doc: https://docs.ansible.com/ansible/latest/scenario_guides/kubernetes_scenarios/k8s_inventory.html. However, I keep getting a "failed to parse" error.
# ansible-inventory --list -i k8s.yaml
[WARNING]: * Failed to parse /etc/ansible/k8s.yaml with ansible_collections.kubernetes.core.plugins.inventory.k8s plugin: Invalid value "kubernetes.core.k8s" for configuration option "plugin_type: inventory
plugin: ansible_collections.kubernetes.core.plugins.inventory.k8s setting: plugin ", valid values are: ['k8s']
[WARNING]: Unable to parse /etc/ansible/k8s.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
  "_meta": {
    "hostvars": {}
  },
  "all": {
    "children": [
      "ungrouped"
    ]
  }
}
Extract from ansible.cfg:
# egrep -i "\[inventory\]|kubernetes" ansible.cfg
[inventory]
enable_plugins = kubernetes.core.k8s
k8s.yaml
# cat k8s.yaml
plugin: kubernetes.core.k8s
The error suggests that kubernetes.core.k8s is an invalid value and that valid values are ['k8s'], yet this is exactly what's in the documentation. I have tried all manner of alterations to the plugin name with no success.
Can anyone steer me on what I am missing here?
So I managed to get it working by editing /usr/lib/python3/dist-packages/ansible_collections/kubernetes/core/plugins/inventory/k8s.py. It seems my version only listed k8s as a plugin name; I replaced it with kubernetes.core.k8s and it worked:
options:
  plugin:
    description: token that ensures this is a source file for the 'k8s' plugin.
    required: True
    choices: ['kubernetes.core.k8s']
I planned to raise this as a PR on the project, but it seems this was already updated several months back, so I must have just had outdated files:
https://github.com/ansible-collections/kubernetes.core/blob/60933457e81fcfa1000f556b2bc3425bbf080602/plugins/inventory/k8s.py#L27
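Rather than patching the installed file by hand, upgrading the collection should pick up the same fix. A sketch, assuming the collection was installed via ansible-galaxy (the --upgrade flag needs a recent ansible-core; older releases can use --force instead):

ansible-galaxy collection install kubernetes.core --upgrade
ansible-inventory --list -i k8s.yaml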

Terraform Unable to find Helm Release charts

I'm running Kubernetes on GCP and making changes via Terraform v0.11.14.
When running terraform plan, I'm getting the error messages below:
Error: Error refreshing state: 2 errors occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.cert-manager: 1 error occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.cert-manager: helm_release.cert-manager: error installing: the server could not find the requested resource
* module.cls-xxx-us-central1-a-dev.helm_release.nginx: 1 error occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.nginx: helm_release.nginx: error installing: the server could not find the requested resource
Here's a copy of my helm.tf:

resource "helm_release" "nginx" {
  depends_on = ["google_container_node_pool.tally-np"]
  name       = "ingress-nginx"
  chart      = "ingress-nginx/ingress-nginx"
  namespace  = "kube-system"
}

resource "helm_release" "cert-manager" {
  depends_on = ["google_container_node_pool.tally-np"]
  name       = "cert-manager"
  chart      = "stable/cert-manager"
  namespace  = "kube-system"

  set {
    name  = "ingressShim.defaultIssuerName"
    value = "letsencrypt-production"
  }

  set {
    name  = "ingressShim.defaultIssuerKind"
    value = "ClusterIssuer"
  }

  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${var.cluster_name} --zone ${google_container_cluster.cluster.zone} && kubectl create -f ${path.module}/letsencrypt-prod.yaml"
  }
}
I've read that Helm deprecated most of the old chart repos, so I tried adding the repositories and installing the charts locally under the kube-system namespace, but the issue still persists.
Here's the list of versions for Terraform and its providers:
Terraform v0.11.14
provider.google v2.17.0
provider.helm v0.10.2
provider.kubernetes v1.9.0
provider.random v2.2.1
As the community moves towards Helm v3, the maintainers have deprecated the old Helm model, in which a single mono-repo called stable held everything; in the new model each product has its own repository. On November 13, 2020 the stable and incubator chart repositories reached the end of development and became archives.
The archived charts are now hosted at a new URL. To continue using the archived charts, you will have to make some tweaks in your Helm workflow.
Sample workaround:
helm repo add new-stable https://charts.helm.sh/stable
helm fetch new-stable/prometheus-operator
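In Terraform terms, the same workaround means pointing helm_release at the archive URL through its repository argument. A sketch for the cert-manager release from the question, assuming a helm provider version that accepts a repository URL directly:

resource "helm_release" "cert-manager" {
  name       = "cert-manager"
  # Archived stable charts are now hosted here
  repository = "https://charts.helm.sh/stable"
  chart      = "cert-manager"
  namespace  = "kube-system"
}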

Bazel Kubernetes Object Error: no objects passed to apply (Google Container Registry)

I have a k8s_object rule to apply a deployment to my Google Kubernetes Cluster. Here is my setup:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "image",
data = [":lib", "//:package.json"],
entry_point = ":index.ts",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "k8s_deployment",
template = ":gateway.deployment.yaml",
kind = "deployment",
cluster = "gke_cents-ideas_europe-west3-b_cents-ideas",
images = {
"gcr.io/cents-ideas/gateway:latest": ":image"
},
)
But when I run bazel run //services/gateway:k8s_deployment.apply, I get the following error:
INFO: Analyzed target //services/gateway:k8s_deployment.apply (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //services/gateway:k8s_deployment.apply up-to-date:
bazel-bin/services/gateway/k8s_deployment.apply
INFO: Elapsed time: 0.113s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /snap/bin/kubectl --kubeconfig= --cluster=gke_cents-ideas_europe-west3-b_cents-ideas --context= --user= apply -f -
2020/02/12 14:52:44 Unable to publish images: unable to publish image gcr.io/cents-ideas/gateway:latest
error: no objects passed to apply
error: no objects passed to apply
It doesn't push the new image to the Google Container Registry.
Strangely, this worked a few days ago. But I didn't change anything.
Here is the full code if you need to take a closer look: https://github.com/flolude/cents-ideas/blob/069c773ade88dfa8aff492f024a1ade1f8ed282e/services/gateway/BUILD
Update
I don't know if this has something to do with the issue, but when I run
gcloud auth configure-docker
I get some warnings:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: Your config file at [/home/flolu/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud"
  }
}
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
gcloud credential helpers already registered correctly.
I had google-cloud-sdk installed via snap. What I did to make it work was remove google-cloud-sdk via
snap remove google-cloud-sdk
and then follow those instructions to reinstall it via
sudo apt install google-cloud-sdk
Now it works fine.
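As a quick sanity check after reinstalling, verify that the credential helper is now on the PATH and re-register it (a hypothetical session; output will vary):

which docker-credential-gcloud
gcloud auth configure-docker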

Install MongoDB with Chef

I've been trying to figure out how to install a MongoDB 3.4 instance using this Chef cookbook. Nevertheless, I've not been able to install it.
This is my mongodb.rb file content:
node.default['mongodb']['package_version'] = '3.4'
include_recipe 'mongodb::default'
And my metadata.rb: depends 'mongodb', '~> 0.16.2'.
I've tried to verify it on the centos-72 platform using kitchen verify centos-72, and I'm getting this message:
ERROR: yum_package[mongodb-org] (mongodb::install line 77) had an error: Chef::Exceptions::Package: Version ["3.4"] of ["mongodb-org"] not found. Did you specify both version and release? (version-release, e.g. 1.84-10.fc6)
I've realized this cookbook tries to add this yum_repository:
yum_repository 'mongodb' do
  description 'mongodb RPM Repository'
  baseurl "http://downloads-distro.mongodb.org/repo/redhat/os/#{node['kernel']['machine'] =~ /x86_64/ ? 'x86_64' : 'i686'}"
  action :create
  gpgcheck false
  enabled true
end
And according to the MongoDB documentation, the repository URL should be:
https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
instead of
"http://downloads-distro.mongodb.org/repo/redhat/os/..."
The repo you are using does not have version 3.4 available. You can verify this manually by just looking at the RPMs in http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/
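Until the cookbook's repo URL is fixed upstream, one workaround sketch is a wrapper recipe that registers the official MongoDB 3.4 repo (URL taken from the MongoDB docs above) before including the install recipe; whether the cookbook's yum_package then resolves 3.4 from this repo is an assumption worth testing in kitchen:

# Hypothetical wrapper recipe: register the official MongoDB 3.4 repo first
yum_repository 'mongodb-org-3.4' do
  description 'MongoDB 3.4 RPM Repository'
  baseurl 'https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/'
  gpgcheck false
  enabled true
  action :create
end

include_recipe 'mongodb::default'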