How can I provision Kubernetes with kops using a multi-document YAML file?

I've got a multi-document YAML file; how can I provision a Kubernetes cluster from it?
Using kops create -f only seems to import the cluster definition; it does not import the instance groups etc. that are defined in the additional 'documents' within the YAML:
...
masters: private
nodes: private
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
...

According to the kops source code, multiple sections in one file are supported.
Perhaps you need to check your file for mistakes.
Here is the part responsible for parsing the file and creating the resources. To keep the code short, I've deleted all the error checks.
For the full details, please see the original file:
func RunCreate(f *util.Factory, out io.Writer, c *CreateOptions) error {
    clientset, err := f.Clientset()

    // Codecs provides access to encoding and decoding for the scheme
    codecs := kopscodecs.Codecs //serializer.NewCodecFactory(scheme)
    codec := codecs.UniversalDecoder(kopsapi.SchemeGroupVersion)

    var clusterName = ""
    //var cSpec = false
    var sb bytes.Buffer
    fmt.Fprintf(&sb, "\n")

    for _, f := range c.Filenames {
        contents, err := vfs.Context.ReadFile(f)

        // TODO: this does not support a JSON array
        sections := bytes.Split(contents, []byte("\n---\n"))
        for _, section := range sections {
            defaults := &schema.GroupVersionKind{
                Group:   v1alpha1.SchemeGroupVersion.Group,
                Version: v1alpha1.SchemeGroupVersion.Version,
            }
            o, gvk, err := codec.Decode(section, defaults, nil)

            switch v := o.(type) {
            case *kopsapi.Cluster:
                // Adding a PerformAssignments() call here as the user might be trying to use
                // the new `-f` feature, with an old cluster definition.
                err = cloudup.PerformAssignments(v)
                _, err = clientset.CreateCluster(v)

            case *kopsapi.InstanceGroup:
                clusterName = v.ObjectMeta.Labels[kopsapi.LabelClusterName]
                cluster, err := clientset.GetCluster(clusterName)
                _, err = clientset.InstanceGroupsFor(cluster).Create(v)

            case *kopsapi.SSHCredential:
                clusterName = v.ObjectMeta.Labels[kopsapi.LabelClusterName]
                cluster, err := clientset.GetCluster(clusterName)
                sshCredentialStore, err := clientset.SSHCredentialStore(cluster)
                sshKeyArr := []byte(v.Spec.PublicKey)
                err = sshCredentialStore.AddSSHPublicKey("admin", sshKeyArr)

            default:
                glog.V(2).Infof("Type of object was %T", v)
                return fmt.Errorf("Unhandled kind %q in %s", gvk, f)
            }
        }
    }

    {
        // If there is a value in this sb, this should mean that we have something to deploy
        // so let's advise the user how to engage the cloud provider and deploy
        if sb.String() != "" {
            fmt.Fprintf(&sb, "\n")
            fmt.Fprintf(&sb, "To deploy these resources, run: kops update cluster %s --yes\n", clusterName)
            fmt.Fprintf(&sb, "\n")
        }
        _, err := out.Write(sb.Bytes())
    }

    return nil
}
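One practical gotcha visible above: the file is split with bytes.Split(contents, []byte("\n---\n")), so the --- separator has to be exactly three dashes on a line of its own, with no trailing spaces or comments, or the following documents are silently folded into the previous section and never decoded. Here is a minimal standalone sketch (not part of kops; the file name is an example) you can use to check how your manifest will be split:
package main

import (
    "bytes"
    "fmt"
    "os"
)

// Splits a multi-document YAML file the same way kops does and prints the
// "kind:" line of each document, so you can verify every section is seen.
// The file name below is an example; point it at your own manifest.
func main() {
    contents, err := os.ReadFile("cluster-and-instancegroups.yaml")
    if err != nil {
        panic(err)
    }
    sections := bytes.Split(contents, []byte("\n---\n"))
    fmt.Printf("found %d document(s)\n", len(sections))
    for i, section := range sections {
        for _, line := range bytes.Split(section, []byte("\n")) {
            if bytes.HasPrefix(line, []byte("kind:")) {
                fmt.Printf("document %d: %s\n", i+1, line)
            }
        }
    }
}
If this prints fewer documents than you expect, the separator lines are the first thing to check.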

This may not have been possible in the past, but it's possible now.
Here's how I'm currently managing my kops cluster as code (IaC).
I have 2 files:
devkube.mycompany.com.cluster.kops.yaml
instancegroups.kops.yaml
The 2nd file is a multi-document YAML file like the one you're asking about
(with multiple InstanceGroup documents inside it, separated by ---).
Note: I've successfully combined both of these .yaml files into a single multi-document .yaml file and it works fine; I just split them this way so I can reuse instancegroups.kops.yaml on multiple clusters and keep the git repo DRY.
I generated the files by using:
Bash# kops get cluster --name devkube.mycompany.com -o yaml > devkube.mycompany.com.cluster.kops.yaml
Bash# kops get ig --name devkube.mycompany.com -o yaml > instancegroups.kops.yaml
Here's how I use these files to provision a kops cluster from git:
export AWS_PROFILE=devkube-kops
export KOPS_STATE_STORE=s3://kops-state-devkube.mycompany.com
clustername="devkube.mycompany.com"
kops replace --name $clustername -f ../kops/devkube.mycompany.com.cluster.kops.yaml --force
kops replace --name $clustername -f ../kops/instancegroups.kops.yaml --force
kops create secret --name $clustername sshpublickey admin -i ~/.ssh/id_rsa.pub # so the person who provisions the cluster can ssh into the nodes
kops update cluster --name $clustername --yes

Related

Using K8S API to access pod [closed]

With kubectl we can run the following command
kubectl exec -ti POD_NAME -- pwd
Can I do that at the API level? I checked the Pod API and it seems to be missing there: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/
What I am looking for is a UI tool to view the files in a pod without extra dependencies.
UPDATE:
I found the following code to exec a command in a pod:
package main

import (
    "bytes"
    "context"
    "flag"
    "fmt"
    "path/filepath"

    corev1 "k8s.io/api/core/v1"
    _ "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
    "k8s.io/client-go/util/homedir"
    //
    // Uncomment to load all auth plugins
    // _ "k8s.io/client-go/plugin/pkg/client/auth"
    //
    // Or uncomment to load specific auth plugins
    // _ "k8s.io/client-go/plugin/pkg/client/auth/azure"
    // _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
    // _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
    // _ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
)

func main() {
    var kubeconfig *string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // use the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        panic(err.Error())
    }

    // create the clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    namespace := "stage"
    pods, err := clientset.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))

    podName := "ubs-job-qa-0"
    containerName := "ubs-job"

    // https://github.com/kubernetes/kubernetes/blob/release-1.22/test/e2e/framework/exec_util.go
    // https://zhimin-wen.medium.com/programing-exec-into-a-pod-5f2a70bd93bb
    req := clientset.CoreV1().
        RESTClient().
        Post().
        Resource("pods").
        Name(podName).
        Namespace(namespace).
        SubResource("exec").
        Param("container", containerName)

    scheme := runtime.NewScheme()
    if err := corev1.AddToScheme(scheme); err != nil {
        panic("Cannot add scheme")
    }
    parameterCodec := runtime.NewParameterCodec(scheme)
    req.VersionedParams(&corev1.PodExecOptions{
        Stdin:     false,
        Stdout:    true,
        Stderr:    true,
        TTY:       true,
        Container: containerName, // note: the container name, not the pod name
        Command:   []string{"ls", "-la", "--time-style=iso", "."},
    }, parameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    var stdout, stderr bytes.Buffer
    err = exec.Stream(remotecommand.StreamOptions{
        Stdin:  nil,
        Stdout: &stdout,
        Stderr: &stderr,
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(stdout.String())
}
In your case, using kubectl is the same as calling the api-server, which in turn calls the kubelet on the node and execs your command inside the pod's namespace.
You can experiment like this:
kubectl proxy --port=8080 &
curl "localhost:8080/api/v1/namespaces/<namespace>/pods/<pod>/exec?command=pwd&stdin=false"
To copy files you can use kubectl cp (see kubectl cp --help).
As a UI tool you can leverage the Kubernetes Dashboard, or if you want to go enterprise-level, try Rancher.
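Since viewing files inside the pod is the end goal, the same remotecommand exec machinery from the program above can be pointed at cat (or tar, which is essentially what kubectl cp drives). Below is a hedged sketch of just that call as a helper function; the package and function names are my own, and it assumes a *rest.Config and *kubernetes.Clientset built the same way as above:
package podfiles

import (
    "bytes"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/remotecommand"
)

// readFileInPod runs `cat <path>` inside the given container through the
// exec subresource (the same mechanism kubectl exec uses) and returns the
// file contents as a string.
func readFileInPod(config *rest.Config, clientset *kubernetes.Clientset,
    namespace, pod, container, path string) (string, error) {

    req := clientset.CoreV1().RESTClient().Post().
        Resource("pods").Name(pod).Namespace(namespace).
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: container,
            Command:   []string{"cat", path},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    if err != nil {
        return "", err
    }
    var stdout, stderr bytes.Buffer
    if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
        return "", fmt.Errorf("exec failed: %v: %s", err, stderr.String())
    }
    return stdout.String(), nil
}
Calling readFileInPod(config, clientset, "stage", "ubs-job-qa-0", "ubs-job", "/etc/hostname") would print that file, assuming cat exists in the container image.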

terraform plan recreates resources on every run with terraform cloud backend

I am running into an issue where terraform plan recreates resources that don't need to be recreated every run. This is an issue because some of the steps depend on those resources being available, and since they are recreated with each run, the script fails to complete.
My setup is Github Actions, Linode LKE, Terraform Cloud.
My main.tf file looks like this:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}

provider "linode" {
}

provider "helm" {
  debug = true
  kubernetes {
    config_path = "${local_file.kubeconfig.filename}"
  }
}

resource "linode_lke_cluster" "lke_cluster" {
  label       = "MY-LABEL-HERE"
  k8s_version = "1.21"
  region      = "us-central"
  pool {
    type  = "g6-standard-2"
    count = 3
  }
}
and my outputs.tf file:
resource "local_file" "kubeconfig" {
  depends_on = [linode_lke_cluster.lke_cluster]
  filename   = "kube-config"
  # filename = "${path.cwd}/kubeconfig"
  content    = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}

resource "helm_release" "ingress-nginx" {
  # depends_on = [local_file.kubeconfig]
  depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
  name       = "ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
}

resource "null_resource" "custom" {
  depends_on = [helm_release.ingress-nginx]

  # change trigger to run every time
  triggers = {
    build_number = "${timestamp()}"
  }

  # download kubectl
  provisioner "local-exec" {
    command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
  }

  # apply changes
  provisioner "local-exec" {
    command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
  }
}
In Github Actions, I'm running these steps:
jobs:
  init-terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: 'privatebeta-kubes'
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
      - name: Terraform Init
        run: terraform init
      - name: Terraform Format Check
        run: terraform fmt -check -v
      - name: List terraform state
        run: terraform state list
      - name: Terraform Plan
        run: terraform plan
        id: plan
        env:
          LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
When I look at the results of terraform state list I can see my resources:
Run terraform state list
terraform state list
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom
But my terraform plan fails and the issue seems to stem from the fact that those resources try to get recreated.
Run terraform plan
terraform plan
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
Waiting for the plan to start...
Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│
│ with helm_release.ingress-nginx,
│ on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│ 8: resource "helm_release" "ingress-nginx" {
Is there a way to tell terraform it doesn't need to recreate those resources?
Regarding the actual error shown, Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory, which references your outputs file: I found this issue, which could help with your specific error: https://github.com/hashicorp/terraform-provider-helm/issues/418
One other thing looks strange to me: why does your outputs.tf define resources rather than outputs? Shouldn't your outputs.tf look something like this?
output "local_file_kubeconfig" {
  value = "reference.to.resource"
}
Also, your state file / backend config looks like it's properly configured.
I recommend logging into your terraform cloud account to verify that the workspace is indeed there, as expected. It's the state file that tells terraform not to re-create the resources it manages.
If the resources are already there and terraform is trying to re-create them, that could indicate that those resources were created prior to using terraform or possibly within another terraform cloud workspace or plan.
Did you end up renaming your backend workspace at any point while using this plan? I'm referring to this part of your main.tf file, where it says MY-WORKSPACE-HERE:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}
Unfortunately I am not a Kubernetes expert, so possibly more help is needed there.

How to restart a deployment in kubernetes using go-client

Is there a way to restart a Kubernetes deployment using go-client? I have no idea how to achieve this; please help!
If you run kubectl rollout restart deployment/my-deploy -v=10 you see that kubectl actually sends a PATCH request to the APIServer and sets the .spec.template.metadata.annotations with something like this:
kubectl.kubernetes.io/restartedAt: '2022-11-29T16:33:08+03:30'
So, you can do this with the client-go:
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
    // Do something with err
}

deploymentsClient := clientset.AppsV1().Deployments(namespace)

data := fmt.Sprintf(`{"spec": {"template": {"metadata": {"annotations": {"kubectl.kubernetes.io/restartedAt": "%s"}}}}}`, time.Now().Format("20060102150405"))
deployment, err := deploymentsClient.Patch(ctx, deployment_name, k8stypes.StrategicMergePatchType, []byte(data), v1.PatchOptions{})
if err != nil {
    // Do something with err
}
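For completeness, here is a self-contained sketch of the same patch wrapped in a function, with the imports spelled out (the aliases k8stypes and v1 in the snippet above correspond to k8s.io/apimachinery/pkg/types and k8s.io/apimachinery/pkg/apis/meta/v1). The package and function names are my own, and I use time.RFC3339 to match the annotation format kubectl itself writes:
package k8sutil

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// restartDeployment mimics `kubectl rollout restart` by patching the pod
// template annotation, which forces a new rollout of the deployment.
func restartDeployment(ctx context.Context, clientset *kubernetes.Clientset, namespace, name string) error {
    patch := fmt.Sprintf(
        `{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
        time.Now().Format(time.RFC3339))
    _, err := clientset.AppsV1().Deployments(namespace).Patch(
        ctx, name, types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
    return err
}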

Cannot connect to AWS RDS PostgreSQL with Golang using Helm Charts

I am unable to connect to AWS RDS PostgreSQL when running a Helm chart for a Go app (GORM). All credentials are stored in a Kubernetes secret, and the secret is used in the Helm chart.
A few points:
Able to connect locally just fine.
The PostgreSQL database is already created in RDS, and I made sure that the Kubernetes secret matches the same creds.
Docker image is pushed and pulled from Gitlab without any errors.
Command "helm ls" displays the deployment status as "DEPLOYED"
When taking "kubectl get pod", I get STATUS as "CrashLoopBackoff"
When taking "kubectl describe pod", I get back MESSAGE "Back-off restarting failed container"
I then take "kubectl logs pod_name" to track the error, and get back the following:
failed to connect to the database
dial tcp 127.0.0.1:5432: connect: connection refused
(Not sure why it is still specifying "127.0.0.1" when I have the secret mounted.)
I am unable to exec into the pod because it is not running.
I have tried:
Securing a connection from a different pod in the same cluster using psql, to ensure that the creds in the secret are in sync with what was set up in RDS PostgreSQL.
Changing the API from DB_HOST to host=%s.
Connecting using fmt.Sprintf as well as os.Getenv.
"Versions"
GO version:
go1.11.1 darwin/amd64
DOCKER version:
Client:
Version: 18.06.1-ce
API version: 1.38
API.GO (file)
package controllers

import (
    "fmt"
    "log"
    "net/http"
    "os"
    "time"

    "github.com/gorilla/mux"
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/postgres"
    _ "gitlab.torq.trans.apps.ge.com/503081542/torq-auth-api/models"
)

var err error

type API struct {
    Database *gorm.DB
    Router   *mux.Router
}

func (api *API) Initialize(opts string) {
    // Initialize DB
    dbinfo := os.Getenv("DB_HOST, DB_USER, DB_PASSWORD, DB_NAME, DB_PORT sslmode=disable")
    // dbinfo := os.Getenv("host=%s user=%s password=%s dbname=%s port=%s sslmode=disable")
    api.Database, err = gorm.Open("postgres", dbinfo)
    if err != nil {
        log.Print("failed to connect to the database")
        log.Fatal(err)
    }

    // Bind to a port and pass our router in
    // log.Fatal(http.ListenAndServe(":8000", handlers.CORS()(api.Router)))
    fmt.Println("Connection established")
    log.Printf("Postgres started at %s PORT", config.DB_PORT)

    // MODELS
    type application struct {
        ID        string    `json:"id" gorm:"primary_key"`
        CreatedAt time.Time `json:"-"`
        UpdatedAt time.Time `json:"-"`
        Name      string    `json:"name"`
        Ci        string    `json:"ci"`
    }
    type Cloud struct {
        ID   string `json:"id" gorm:"primary_key"`
        Name string `json:"name"`
    }
    fmt.Println("Tables are created")

    // Enable this line for DB logging
    api.Database.LogMode(true)

    // Initialize Router
    api.Router = mux.NewRouter()
    api.Router.HandleFunc("/api/v1/applications", api.HandleApplications)
    api.Router.HandleFunc("/api/v1/application/{id}", api.HandleApplication)
    api.Router.HandleFunc("/api/v1/clusters", api.handleClusters)
}
I am not exactly sure where the issue is here; this is a learning experience for me. Any ideas would be appreciated.
Thanks in advance!
I think your database initialization code is not doing what you want. Try something like this:
var driver = "postgres"
var name = os.Getenv("DB_NAME")
var host = os.Getenv("DB_HOST")
var user = os.Getenv("DB_USER")
var pass = os.Getenv("DB_PASSWORD")

var conn = fmt.Sprintf("host=%v user=%v password=%v dbname=%v sslmode=disable", host, user, pass, name)

// note: plain `=` rather than `:=`, because api.Database is a struct field
api.Database, err = gorm.Open(driver, conn)
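Building on that, here is a hedged, self-contained sketch of how the Initialize method from the question could look with this approach. The env variable names mirror the question, and DB_PORT is included as well; an empty or malformed DSN falling back to local defaults is consistent with the dial tcp 127.0.0.1:5432 error:
package controllers

import (
    "fmt"
    "log"
    "os"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/postgres"
)

// Initialize builds the DSN from the environment variables injected by the
// Kubernetes secret and fails loudly if the connection cannot be established.
func (api *API) Initialize(opts string) {
    dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=disable",
        os.Getenv("DB_HOST"),
        os.Getenv("DB_PORT"),
        os.Getenv("DB_USER"),
        os.Getenv("DB_PASSWORD"),
        os.Getenv("DB_NAME"),
    )

    var err error
    api.Database, err = gorm.Open("postgres", dsn)
    if err != nil {
        log.Fatalf("failed to connect to the database: %v", err)
    }
    log.Printf("connected to Postgres at %s:%s", os.Getenv("DB_HOST"), os.Getenv("DB_PORT"))
}
This reuses the API type already defined in the question's controllers package.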

Error when creating container with golang docker engine

I am creating a project in Go and I am using both "github.com/docker/docker/client" and "github.com/docker/docker/api/types", but when I try and create a container I get the following error:
ERROR: 2016/10/03 22:39:26 containers.go:84: error during connect: Post https://%2Fvar%2Frun%2Fdocker.sock/v1.23/containers/create: http: server gave HTTP response to HTTPS client
I can't understand why this is happening; it only started after switching to the new golang docker engine (the old "github.com/docker/engine-api" is now deprecated).
The code isn't anything complicated, so I wonder if I am missing something:
resp, err := cli.Pcli.ContainerCreate(context.Background(), initConfig(), nil, nil, "")
if err != nil {
return err
}
And the initConfig that is called does the following:
func initConfig() (config *container.Config) {
    mount := map[string]struct{}{"/root/host": {}}
    return &container.Config{Image: "leadis_image", Volumes: mount, Cmd: strslice.StrSlice{"/root/server.py"}, AttachStdout: true}
}
Also here is my dockerfile
FROM debian
MAINTAINER Leadis Journey
LABEL Description="This docker image is used to compile and execute user's program."
LABEL Version="0.1"
VOLUME /root/host/
RUN apt-get update && yes | apt-get upgrade
RUN yes | apt-get install gcc g++ python3 make
COPY container.py /root/server.py
EDIT
I just tried to test it with a simpler program:
package main

import (
    "fmt"
    "io/ioutil"
    "os"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/strslice"
    "github.com/docker/docker/client"
    "golang.org/x/net/context"
)

func initConfig() (config *container.Config) {
    mount := map[string]struct{}{"/root/host": {}}
    return &container.Config{Image: "leadis_image", Volumes: mount, Cmd: strslice.StrSlice{"/root/server.py"}, AttachStdout: true}
}

func main() {
    client, _ := client.NewEnvClient()
    cwd, _ := os.Getwd()

    ctx, err := os.Open(cwd + "/Dockerfile.tar.gz")
    if err != nil {
        fmt.Println(err)
        return
    }

    build, err := client.ImageBuild(context.Background(), ctx, types.ImageBuildOptions{Tags: []string{"leadis_image"}, Context: ctx, SuppressOutput: false})
    if err != nil {
        fmt.Println(err)
        return
    }
    b, _ := ioutil.ReadAll(build.Body)
    fmt.Println(string(b))

    _, err = client.ContainerCreate(context.Background(), initConfig(), nil, nil, "")
    if err != nil {
        fmt.Println(err)
    }
}
Same dockerfile, but I still get the same error:
error during connect: Post
https://%2Fvar%2Frun%2Fdocker.sock/v1.23/containers/create: http:
server gave HTTP response to HTTPS client
client.NewEnvClient()
Last time I tried, this API expected environment variables like DOCKER_HOST in a different syntax than the normal docker client.
From the client.go:
// NewEnvClient initializes a new API client based on environment variables.
// Use DOCKER_HOST to set the url to the docker server.
// Use DOCKER_API_VERSION to set the version of the API to reach, leave empty for latest.
// Use DOCKER_CERT_PATH to load the TLS certificates from.
// Use DOCKER_TLS_VERIFY to enable or disable TLS verification, off by default.
To use this, you need DOCKER_HOST to be set/exported in one of these formats:
unix:///var/run/docker.sock
http://localhost:2375
https://localhost:2376
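If a malformed DOCKER_HOST is the culprit, a quick way to rule it out is to set the host explicitly before building the client and make a cheap read-only call before ContainerCreate. A minimal sketch, assuming the default Linux socket path:
package main

import (
    "fmt"
    "os"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
    "golang.org/x/net/context"
)

func main() {
    // NewEnvClient reads DOCKER_HOST; setting it explicitly rules out a
    // malformed value inherited from the shell. unix:///var/run/docker.sock
    // is the default Linux socket path.
    os.Setenv("DOCKER_HOST", "unix:///var/run/docker.sock")

    cli, err := client.NewEnvClient()
    if err != nil {
        fmt.Println(err)
        return
    }

    // A cheap read-only call to confirm the scheme/transport is right
    // before attempting ContainerCreate.
    containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{})
    if err != nil {
        fmt.Println("cannot reach the daemon:", err)
        return
    }
    fmt.Printf("daemon reachable, %d running container(s)\n", len(containers))
}
If this works but your real program still fails, the difference is likely in the environment, e.g. DOCKER_HOST exported with an https:// scheme (or DOCKER_TLS_VERIFY set) against a daemon that is only serving plain HTTP, which matches the "server gave HTTP response to HTTPS client" message.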