I am trying to set up kubemq in my cluster, and this is the command shown in their README:
kubectl apply -f https://deploy.kubemq.io/community
There are a lot of empty fields in that yaml, and I want to customize it. How can I override an external yaml file applied by kubectl?
In the Kubernetes world, it's generally unsafe to run a command like the following unless you trust the URL:
kubectl apply -f <some/web/url>
This is essentially the same as the following in the non-Kubernetes world:
curl <some/web/url> | bash
In both cases, we aren't inspecting the content downloaded from the URL; we feed it straight into kubectl or bash and execute it. What if the URL is compromised with some harmful code?
A better approach is to break the single step into parts:
Download the manifest file.
Inspect the manifest file.
Run kubectl apply/create on it, but only once you are satisfied.
Example:
# download the file
curl -fsSL -o downloaded.yml https://deploy.kubemq.io/community
# inspect and/or edit the downloaded file
# then apply it once you are satisfied
kubectl apply -f downloaded.yml
Why don't you just copy the content into a text file, change whatever you want, and apply it?
KUBECONFIG="$(find ~/.kube/configs/ -type f -exec printf '%s:' '{}' +)"
This constructs a colon-separated config file path for the environment variable. I can see the contexts of my clusters and I can switch between them. However, when I try to get my nodes I get:
error: You must be logged in to the server (Unauthorized)
How to solve, any ideas?
I suspect you either don't have a current-context set or your current-context points to a non-functioning cluster.
If set (or exported), KUBECONFIG can reference a colon-separated set of config files.
The files' content will be merged. I think this is what you're attempting.
But then, that variable must be exported for kubectl to use.
Either:
export KUBECONFIG=...
kubectl ...
Or:
KUBECONFIG=... kubectl ...
Then, you can:
# List contexts by NAME
kubectl config get-contexts
# Use one of them by NAME
kubectl config use-context ${NAME}
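As a sketch of how that colon-separated value gets built (the directory and file names below are hypothetical stand-ins for the question's `~/.kube/configs/`), note that the `find ... -exec printf '%s:'` construction leaves a trailing colon; in my experience kubectl skips empty path entries when merging, so that is typically harmless:

```shell
# Build a colon-separated KUBECONFIG from every file in a directory.
# (Paths are hypothetical; this mirrors the find command in the question.)
mkdir -p /tmp/kube-demo
touch /tmp/kube-demo/a-kubeconfig /tmp/kube-demo/b-kubeconfig
KUBECONFIG="$(find /tmp/kube-demo -type f -exec printf '%s:' '{}' +)"
# Each file path appears once, ':'-separated, with a trailing ':'.
echo "$KUBECONFIG"
```

If the Unauthorized error persists with a value like this, inspect each individual file: one of the merged configs likely carries stale or missing credentials.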
Ended up with this function:
function change-cluster () {
export KUBECONFIG=~/.kube/configs/"$1"-kubeconfig
kubectl config use-context kubernetes-admin@"$1"
}
The command below will list the Kubernetes contexts that exist in your kubeconfig:
kubectl config get-contexts
To switch to a particular context, the command below can be used:
kubectl config use-context <context_name>
To use multiple config files, you can create a temporary merged config file and then use the above commands.
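For reference, `use-context` selects by the `name` field of a context entry, and a temporary merged file can be produced with `KUBECONFIG=file1:file2 kubectl config view --flatten > merged`. A minimal, entirely hypothetical kubeconfig tying one cluster and one user into a context looks like this:

```yaml
apiVersion: v1
kind: Config
current-context: dev
contexts:
- name: dev              # the NAME that use-context matches on
  context:
    cluster: dev-cluster
    user: dev-admin
clusters:
- name: dev-cluster
  cluster:
    server: https://dev.example.com:6443
users:
- name: dev-admin
  user: {}
```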
I'm trying to make argocd cli output yaml/json to prep it for script ingestion.
According to this PR: https://github.com/argoproj/argo-cd/pull/2551
It should be available, but I can't find the option in the CLI help or in the documentation.
#argocd version:
argocd: v2.1.2+7af9dfb
...
argocd-server: v2.0.3+8d2b13d
Some commands accept the -o json flag to request JSON output.
Look in the commands documentation to find commands which support that flag.
argocd cluster list -o json, for example, will return a JSON list of configured clusters. The documentation looks like this:
Options
  -h, --help            help for get
  -o, --output string   Output format. One of: json|yaml|wide|server (default "yaml")
I'm trying to extract a "sub-map" from a K8s object using kubectl. The result has to be legal JSON syntax so it can be parsed by the Groovy JsonSlurper class.
If I run a command like this:
kubectl get configmap ... -o jsonpath="{.data}"
I get output like this:
map[...
This cannot be parsed by JsonSlurper.
If I instead do this:
kubectl get configmap ... -o json | jq .data
I get something like this:
{
"...": "....",
This should be parseable by JsonSlurper.
You might first say, "well, why don't you just DO that then?". I could, but I'm in a situation where I have to limit what applications I assume are installed in the environment. I'm not certain that "jq" is available on all of our build nodes (they aren't running in a container yet).
Is there some way I can make kubectl's jsonpath output emit a value in legal JSON syntax?
I'm currently using kubectl v1.14.0, with v1.13.5 on the server.
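One workaround, assuming python3 is available on the build nodes (an assumption): since `-o json` always emits legal JSON, fetch the whole object and reduce it to `.data` with the Python standard library instead of jq. The sample object below is inlined so the sketch runs without a cluster; in practice you would pipe `kubectl get configmap ... -o json` into the same python command.

```shell
# Reduce a full ConfigMap object to its .data sub-map using only
# python3's stdlib (no jq). The echoed JSON stands in for kubectl output.
echo '{"apiVersion":"v1","kind":"ConfigMap","data":{"key":"value"}}' \
  | python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin)["data"]))'
# prints {"key": "value"}
```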
I am trying to create/apply this kubectl .yaml file https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/deployment.yaml via Terraform's null_resource to the AKS cluster, to install Azure AD Pod Identity. It is needed to deploy the Azure Application Gateway Ingress Controller.
Using Windows 10 with VS Code
main.tf:
data "template_file" "aad_pod" {
template = "${file("${path.module}/templates/aad_pod.yaml")}"
}
resource "null_resource" "aad_pod_deploy" {
triggers = {
manifest_sha1 = "${sha1("${data.template_file.aad_pod.rendered}")}"
}
provisioner "local-exec" {
command = "kubectl apply -f -<<EOF\n${data.template_file.aad_pod.rendered}\nEOF"
}
}
After terraform apply I have this error:
Error: Error running command 'kubectl apply -f -<<EOF
(yaml file listing omitted)
EOF': exit status 1. Output: << was unexpected at this time.
Any help will be appreciated
Because of differences between Unix-like operating systems and Windows, it's rarely possible to use local-exec in a portable way unless your use-case is very simple. This is one of the reasons why provisioners are a last resort.
I think the most portable answer would be to use the official Kubernetes provider to interact with Kubernetes here. Alternatively, if using kubectl's input format in particular is important for what you are doing, you could use a community-maintained kubectl provider.
resource "kubectl_manifest" "example" {
yaml_body = data.template_file.aad_pod.rendered
}
If you have a strong reason to use the local-exec provisioner rather than a native Terraform provider, you'll need to find a way to write a command that can be interpreted in a compatible way by both a Unix-style shell and by Windows's command line conventions. I expect it would be easier to achieve that by writing the file out to disk first and passing the filename to kubectl, because that avoids the need to use any special features of the shell and lets everything be handled by kubectl itself:
resource "local_file" "aad_pod_deploy" {
filename = "${path.module}/aad_pod.yaml"
content = data.template_file.aad_pod.rendered
provisioner "local-exec" {
command = "kubectl apply -f ${self.filename}"
}
}
There are still some caveats to watch out for with this approach. For example, if you run Terraform under a directory path containing spaces then self.filename will contain spaces and therefore probably won't be parsed as you want by the Unix shell or by the kubectl Windows executable.
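One mitigation for the spaces caveat (a sketch only; quoting rules differ between Unix shells and cmd.exe, so test on both platforms) is to quote the interpolated path in the provisioner command:

```hcl
provisioner "local-exec" {
  command = "kubectl apply -f \"${self.filename}\""
}
```

Double quotes are honored as argument delimiters by both POSIX shells and the Windows command line, which is why they travel better here than single quotes would.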
Thank you for the comments. I found a solution: I am using the helm_release resource in Terraform. Just create a Helm chart with the necessary templates and deploy it with helm_release.
I made a Helm chart for AAD Pod Identity and it does the job.
Currently, I have a number of Kubernetes manifest files which define Services or Deployments. When I do a kubectl apply I need to include all the files which have changes and need to be applied.
Is there a way to have a main manifest file which references all the other files, so when I do kubectl apply I just have to include the main manifest file and don't have to worry about manually adding each file that has changed?
Is this possible?
I did think of making an alias, batch file, or bash script that has the apply command and all the files listed, but I'm curious if there's a 'kubernetes' way...
You may have a directory with manifests and do the following:
kubectl apply -R -f manifests/
In this case kubectl will recursively traverse the directory and apply all manifests that it finds.
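Another 'kubernetes way' is kustomize, which has been built into kubectl (the -k flag) since v1.14: a kustomization.yaml acts as the main manifest that explicitly lists the member files. The filenames below are hypothetical:

```yaml
# manifests/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
```

Then `kubectl apply -k manifests/` applies exactly the listed resources, so adding a new manifest means adding one line here rather than remembering it on the command line.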