skaffold init does not support helm config files - docker-compose

I am trying to generate a skaffold config for a docker-compose.yml file using the following command:
skaffold init --compose-file docker-compose.yml --verbosity='info'
This fails with an "Invalid k8s yaml" error. Please find the error log below.
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/Chart.yaml: decoding kubernetes yaml: Object 'Kind' is missing in 'apiVersion: v1\nappVersion: 0.0.84\ndescription: A Helm chart for Kubernetes\nname: user-interface\nversion: 0.0.84\n'"
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/templates/config-map.yaml: decoding kubernetes yaml: couldn't get version/kind; json parse error: invalid character '{' looking for beginning of object key string"
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/templates/deployment.yaml: decoding kubernetes yaml: couldn't get version/kind; json parse error: invalid character '{' looking for beginning of object key string"
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/templates/ingress.yaml: decoding kubernetes yaml: couldn't get version/kind; json parse error: invalid character '{' looking for beginning of object key string"
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/templates/ksvc.yaml: decoding kubernetes yaml: couldn't get version/kind; json parse error: invalid character '{' looking for beginning of object key string"
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/templates/service.yaml: decoding kubernetes yaml: yaml: line 11: could not find expected ':'"
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/templates/volume.yaml: decoding kubernetes yaml: couldn't get version/kind; json parse error: invalid character '{' looking for beginning of object key string"
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/templates/volumeclaim.yaml: decoding kubernetes yaml: couldn't get version/kind; json parse error: invalid character '{' looking for beginning of object key string"
time="2019-11-26T08:57:19Z" level=info msg="invalid k8s yaml charts/user-interface/values.yaml: decoding kubernetes yaml: Object 'Kind' is missing in '# Default values for helm.
I suspect that skaffold does not support Helm config files and instead interprets them as plain Kubernetes config files.
Is there an option to specify the deploy type (such as helm) in the skaffold init command?
I came across a similar issue that has already been resolved:
https://github.com/GoogleContainerTools/skaffold/issues/1726
Please help me resolve this issue, and let me know if I am doing anything wrong.
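As far as I can tell, skaffold init has no flag for choosing a helm deploy type, so the usual workaround is to write skaffold.yaml by hand with a helm deploy stanza. Below is a minimal sketch; the apiVersion, the image name user-interface, and the release name are assumptions (only the chart path is taken from the error log above), so adjust them to your project and skaffold version:

apiVersion: skaffold/v1  # check `skaffold version`; other releases use other schema versions
kind: Config
build:
  artifacts:
  # image built by skaffold; the name here is a placeholder
  - image: user-interface
deploy:
  helm:
    releases:
    - name: user-interface             # hypothetical release name
      chartPath: charts/user-interface # path from the error log above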

Related

Kong set Service timeouts via Helm Charts

I am trying to set the timeouts for my Kong Service, which is deployed with Helm charts.
In my Service.yaml file I added these annotations, as referenced in the Kong docs:
annotations:
  konghq.com/connect-timeout: 120000
  konghq.com/write-timeout: 120000
  konghq.com/read-timeout: 120000
However, during the deployment process I get the following error:
> Error: unable to build kubernetes objects from release manifest: unable to decode "": json: cannot unmarshal number into Go struct field ObjectMeta.metadata.annotations of type string
The answer is simply to pass the values as strings, since Kubernetes requires annotation values to be strings:
annotations:
  konghq.com/connect-timeout: "120000"
  konghq.com/write-timeout: "120000"
  konghq.com/read-timeout: "120000"
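If these annotations come from a Helm template rather than a literal Service.yaml, the same rule applies there. A minimal sketch, assuming a hypothetical values key kong.connectTimeout; Helm's built-in quote function renders the number as a YAML string:

annotations:
  # quote forces the rendered value to be a string, not a number
  konghq.com/connect-timeout: {{ .Values.kong.connectTimeout | quote }}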

How to create a Crunchydata Postgres cluster using a CRD in Terraform?

I created a K8s cluster using Terraform, and I also created the CRD for the Crunchydata Postgres Operator.
I obtained the CRD for Postgres cluster creation from this link.
The Terraform script looks like the following (trimmed output):
resource "kubectl_manifest" "pgocluster" {
yaml_body = <<YAML
apiVersion: crunchydata.com/v1
kind: Pgcluster
metadata:
annotations:
current-primary: ${var.pgo_cluster_name}
labels:
crunchy-pgha-scope: ${var.pgo_cluster_name}
deployment-name: ${var.pgo_cluster_name}
name: ${var.pgo_cluster_name}
pg-cluster: ${var.pgo_cluster_name}
pgo-version: 4.6.2
pgouser: admin
name: ${var.pgo_cluster_name}
namespace: ${var.cluster_namespace}
YAML
}
But when I execute terraform apply, it errors with:
Error: pgo/UserGrp failed to create kubernetes rest client for update of resource: resource [crunchydata.com/v1/Pgcluster] isn't valid for cluster, check the APIVersion and Kind fields are valid
However, according to the official link mentioned above, the following should work:
apiVersion: crunchydata.com/v1
kind: Pgcluster
I am not sure whether it is an issue with Terraform or whether the link was not updated correctly.
Kindly let me know what should be changed to fix this, as I am stuck on this issue.
Finally, I figured out the issue: pgo_cluster_name was not given in lowercase.
I only got the following, more specific error when I applied the target individually, i.e. terraform apply --target=<target_name>:
Error: pgo/UserGrp failed to run apply: error when creating "/tmp/773985147kubectl_manifest.yaml": Pgcluster.crunchydata.com "UserGrp" is invalid: metadata.name: Invalid value: "UserGrp": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I had set pgo_cluster_name=UsrGrp instead of pgo_cluster_name=usrgrp.
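To catch this earlier, one option (my own sketch, not from the original thread) is to guard the variable with a validation block, assuming Terraform 0.13 or later; alternatively, normalize the value with lower(var.pgo_cluster_name) where it is used:

variable "pgo_cluster_name" {
  type = string

  validation {
    # Kubernetes object names must be lowercase RFC 1123 subdomains
    condition     = can(regex("^[a-z0-9]([-a-z0-9]*[a-z0-9])?$", var.pgo_cluster_name))
    error_message = "pgo_cluster_name must contain only lowercase alphanumerics and '-', and must start and end with an alphanumeric."
  }
}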

kubectl: --dry-run is deprecated and can be replaced with --dry-run=client

I have an aws-eks cluster, and below is the command I use to replace the existing configuration:
kubectl create configmap flink-config --from-file=./config -o yaml --dry-run | kubectl replace -f -
But when I run this command, it gives a warning like:
W1009 17:00:14.998329 323115 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
Will it do the same thing if I replace --dry-run with --dry-run=client?
About --dry-run=client we learn:
--dry-run=client flag to preview the object that would be sent to your cluster, without really submitting it.
And in the Kubernetes API reference we read:
Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
Performing local tests, I realized that when I try to create an already-existing config object using --dry-run=server, the following error occurs: the apiserver reports that a configmap named flink-config already exists.
kubectl create configmap flink-config --from-file=./config -o yaml --dry-run=server
Error from server (AlreadyExists): configmaps "flink-config" already exists
However, if I try to use --dry-run=client, the object is validated only by the client, not by the apiserver, so the YAML is simply printed for us:
kubectl create configmap flink-config --from-file=./config -o yaml --dry-run=client
apiVersion: v1
data:
  config: |
    FOO: foo
    MYVAR: hello
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: flink-config
So basically, yes, --dry-run=client has the same effect as the deprecated --dry-run. The equivalent of --dry-run=server was the --server-dry-run flag, which became deprecated in v1.18.
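So the original pipeline, updated to the non-deprecated flag, becomes:

kubectl create configmap flink-config --from-file=./config -o yaml --dry-run=client | kubectl replace -f -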

Inject/pass a value to a parameter in a kube manifest file

This is a very simple thing I am trying to do. I have the below kube manifest file that creates a namespace:
test-ns.yml
apiVersion: v1
kind: Namespace
metadata:
  name: ${NAMESPACE}
All I am trying to do is run:
kubectl apply -f test-ns.yml --namespace=test
This is the error I am getting:
The Namespace "${NAMESPACE}" is invalid: metadata.name: Invalid value: "${NAMESPACE}": a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')
I am not sure how to pass the NAMESPACE value to the kube manifest file. What am I doing wrong here, and how should I be doing this?
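kubectl does not expand shell-style variables inside manifest files, and --namespace does not rewrite metadata.name, so the file is applied with the literal string ${NAMESPACE}. One common approach (a sketch, not from the original thread) is to expand the variable with envsubst (from the gettext package) before piping the result to kubectl:

# NAMESPACE is expanded by envsubst, then the rendered manifest is applied
export NAMESPACE=test
envsubst < test-ns.yml | kubectl apply -f -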

Error while validating a YAML file in Kubernetes

I created the below ns.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  Name: testns
I am getting the below error:
error: error validating "ns.yaml": error validating data: ValidationError(Namespace.metadata): unknown field "Name" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
The root cause is clear in the error logs: unknown field "Name" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
This means you need to use name instead of Name, since field names in Kubernetes objects are case-sensitive.
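The corrected ns.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: testns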
For more info about the YAML format of the Kubernetes Namespace object's metadata, run the following command:
kubectl explain namespace.metadata
And you will get amazing documentation.