ansible giving error whereas in k8s command runs fine on server - kubernetes

Has anyone faced this issue? There is no other way, once a secret is created, to ignore the "already exists" error, so below is the command, which runs fine both when secret/license is not yet configured and when secret/license has already been created (which happens after the first run):
kubectl create secret generic license --save-config --dry-run=true --from-file=/tmp/ansibleworkspace/license -n {{ appNameSpace }} -o yaml | kubectl apply -f -
It runs fine if I run it directly on the k8s cluster.
Below is the error when executing it through Ansible:
Error: unknown shorthand flag: 'f' in -f
Examples:
# Create a new secret named my-secret with keys for each file in folder bar
kubectl create secret generic my-secret --from-file=path/to/bar
# Create a new secret named my-secret with specified keys instead of names on disk
kubectl create secret generic my-secret --from-file=ssh-privatekey=~/.ssh/id_rsa --from-file=ssh-publickey=~/.ssh/id_rsa.pub
# Create a new secret named my-secret with key1=supersecret and key2=topsecret
kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret
# Create a new secret named my-secret using a combination of a file and a literal
kubectl create secret generic my-secret --from-file=ssh-privatekey=~/.ssh/id_rsa --from-literal=passphrase=topsecret
# Create a new secret named my-secret from an env file
kubectl create secret generic my-secret --from-env-file=path/to/bar.env
Options:
--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--append-hash=false: Append a hash of the secret to its name.
--dry-run=false: If true, only print the object that would be sent, without sending it.
--from-env-file='': Specify the path to a file to read lines of key=val pairs to create a secret (i.e. a Docker .env file).
--from-file=[]: Key files can be specified using their file path, in which case a default name will be given to them, or optionally with a name and file path, in which case the given name will be used. Specifying a directory will iterate each named file in the directory that is a valid secret key.
--from-literal=[]: Specify a key and literal value to insert in secret (i.e. mykey=somevalue)
--generator='secret/v1': The name of the API generator to use.
-o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
--save-config=false: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.
--template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
--type='': The type of secret to create
--validate=true: If true, use a schema to validate the input before sending it
Usage:
kubectl create secret generic NAME [--type=string] [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run] [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
unknown shorthand flag: 'f' in -f

Since you haven't given any details about the context in which you are running the command, I can only provide an answer based on my guess.
Explanation:
I suppose that you use the command module in your Ansible playbook and this is the cause of your issue. As you can read in the module description:
The given command will be executed on all selected nodes. It will not be processed through the shell, so variables like $HOME and operations like "<", ">", "|", ";" and "&" will not work (use the shell module if you need these features).
and in your command you use the "|" character, which cannot be interpreted properly as it is not processed through the shell. Note that the error you get:
Error: unknown shorthand flag: 'f' in -f
is related to the incorrect use of kubectl create secret generic, which simply doesn't have such an option. Since the "|" character is not interpreted by the command module, the subsequent command:
kubectl apply -f -
is treated as a part of:
kubectl create secret generic
(which is confirmed by the error you get, followed by the correct usage examples).
Solution:
As recommended in the above quoted docs, use the shell module instead:
If you want to run a command through the shell (say you are using <, >, |, etc), you actually want the shell module instead.
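For example, a minimal sketch of such a task using the shell module (the task name is illustrative; appNameSpace comes from your playbook, and the command is the one from your question):
# Sketch only: the task name is illustrative
- name: Create or update the license secret
  shell: >
    kubectl create secret generic license --save-config --dry-run=true
    --from-file=/tmp/ansibleworkspace/license -n {{ appNameSpace }} -o yaml
    | kubectl apply -f -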

Related

How to override external yaml applied to K8S?

I am trying to set up kubemq in my cluster, and this is the command shown in their README:
kubectl apply -f https://deploy.kubemq.io/community
There are a lot of empty fields in that yaml and I want to customize them. How can I override an external yaml file applied by kubectl?
In the Kubernetes world, it's generally unsafe to run a command like the one below unless you trust the URL:
kubectl apply -f <some/web/url>
This is essentially the same as the following in the non-Kubernetes world:
curl <some/web/url> | bash
In both cases, we aren't inspecting the content downloaded from the URL; we are feeding it directly to kubectl or bash. What if the URL is compromised with some harmful code?
A better approach is to break the single step into parts:
Download the manifest file.
Inspect the manifest file.
Run kubectl apply/create on it, only once you are satisfied.
Example:
# download the file
curl -fsSL -o downloaded.yml https://deploy.kubemq.io/community
# inspect the file or edit the file
# now apply the downloaded file after inspecting
kubectl apply -f downloaded.yml
Why don't you just copy the content into a text file?
Change whatever you want and apply it.

How to switch Kubernetes contexts with multiple config files?

KUBECONFIG="$(find ~/.kube/configs/ -type f -exec printf '%s:' '{}' +)"
This will construct a config file path for the environment variable. I can see the contexts of my clusters and I can switch them. However, when I want to get my nodes, I get:
error: You must be logged in to the server (Unauthorized)
How can I solve this, any ideas?
I suspect you either don't have a current-context set or your current-context points to a non-functioning cluster.
If set (or exported), KUBECONFIG can reference a set of config files.
The files' contents will be merged. I think this is what you're attempting.
But then, that variable must be exported for kubectl to use it.
Either:
export KUBECONFIG=...
kubectl ...
Or:
KUBECONFIG=... kubectl ...
Then, you can:
# List contexts by NAME
kubectl config get-contexts
# Use one of them by NAME
kubectl config use-context ${NAME}
I ended up with this function:
function change-cluster () {
  export KUBECONFIG=~/.kube/configs/"$1"-kubeconfig
  kubectl config use-context kubernetes-admin#"$1"
}
The below command will give you the list of Kubernetes contexts that exist on your server:
kubectl config get-contexts
To switch to a particular context, the below command can be used:
kubectl config use-context <context_name>
To use multiple config files, you can create a temporary merged config file and then use the above commands.
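A sketch of that merge step (the config file names and the temporary path are examples):
# Merge the listed config files into one flattened temporary file
KUBECONFIG=$HOME/.kube/configs/dev:$HOME/.kube/configs/prod kubectl config view --flatten > /tmp/merged-kubeconfig
export KUBECONFIG=/tmp/merged-kubeconfig
kubectl config get-contexts
kubectl config use-context <context_name>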

docker swarm - secrets from file not resolving tilde

Using secrets from docker-compose works on my dev machine, but the remote server (deployed to via ssh) just says open /home/johndoe/~/my-secrets/jenkinsuser.txt: no such file or directory.
secret definition in stack.yml:
secrets:
  jenkinsuser:
    file: ~/my-secrets/jenkinsuser.txt
Run with:
docker stack deploy -c stack.yml mystack
The documentation does not mention any gotchas about ~ path. I am not going to put the secret files inside . as all examples do, because that directory is version controlled.
Am I missing some basics about variable expansion, or differences between docker-compose and docker swarm?
The ~ character in your path is treated as a literal. Use $HOME, which is treated as a variable in your path string.
The tilde character works only if it is unquoted. In your remote environment, the Swarm YAML parser treats your path as a string, where the prefix tilde is read as a normal character (see prefix-tilde).
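A sketch of the secret definition using $HOME as suggested above (assuming the deploying shell's environment variables are substituted into the stack file):
secrets:
  jenkinsuser:
    # $HOME is resolved from the environment at deploy time (sketch)
    file: ${HOME}/my-secrets/jenkinsuser.txt
Then deploy as before with docker stack deploy -c stack.yml mystack.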

Terraform Kubernetes provisioner "local-exec" kubectl apply -f -<<EOF on Windows not working

I am trying to create/apply this kubectl .yaml file https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/deployment.yaml via Terraform's null_resource to AKS, to install Azure AD Pod Identity. It is needed to deploy the Azure Gateway Ingress Controller.
Using Windows 10 with VS Code
main.tf:
data "template_file" "aad_pod" {
template = "${file("${path.module}/templates/aad_pod.yaml")}"
}
resource "null_resource" "aad_pod_deploy" {
triggers = {
manifest_sha1 = "${sha1("${data.template_file.aad_pod.rendered}")}"
}
provisioner "local-exec" {
command = "kubectl apply -f -<<EOF\n${data.template_file.aad_pod.rendered}\nEOF"
}
}
After terraform apply I get this error:
Error: Error running command 'kubectl apply -f -<<EOF
'cutted listing of yaml file'
EOF': exit status 1. Output: << was unexpected at this time.
Any help will be appreciated
Because of differences between Unix-like operating systems and Windows, it's rarely possible to use local-exec in a portable way unless your use-case is very simple. This is one of the reasons why provisioners are a last resort.
I think the most portable answer would be to use the official Kubernetes provider to interact with Kubernetes here. Alternatively, if using kubectl's input format in particular is important for what you are doing, you could use a community-maintained kubectl provider.
resource "kubectl_manifest" "example" {
yaml_body = data.template_file.aad_pod.rendered
}
If you have a strong reason to use the local-exec provisioner rather than a native Terraform provider, you'll need to find a way to write a command that can be interpreted in a compatible way by both a Unix-style shell and by Windows's command line conventions. I expect it would be easier to achieve that by writing the file out to disk first and passing the filename to kubectl, because that avoids the need to use any special features of the shell and lets everything be handled by kubectl itself:
resource "local_file" "aad_pod_deploy" {
filename = "${path.module}/aad_pod.yaml"
content = data.template_file.aad_pod.rendered
provisioner "local-exec" {
command = "kubectl apply -f ${self.filename}"
}
}
There are still some caveats to watch out for with this approach. For example, if you run Terraform under a directory path containing spaces then self.filename will contain spaces and therefore probably won't be parsed as you want by the Unix shell or by the kubectl Windows executable.
Thank you for the comments. I found the solution. I am using the helm_release resource in Terraform. Just create your helm chart with the necessary template and use it with helm_release.
I made a helm chart for AAD Identity and it does the job.
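A rough sketch of what that can look like (the release name, chart path, and namespace are placeholders; it assumes the Helm provider is already configured):
resource "helm_release" "aad_pod_identity" {
  # Sketch only: the chart path and names are placeholders
  name      = "aad-pod-identity"
  chart     = "${path.module}/charts/aad-pod-identity"
  namespace = "kube-system"
}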

How to refer to variables in a Kubernetes deployment file?

Sometimes there are variables in the deployment yaml file which are not pre-specified and will be known only at deployment time (for example, the name and tag of a container image).
Normally we put marker text (e.g. {{IMAGE_NAME}}) in the yaml file and use bash text-manipulation tools to replace it with the actual value in the deployment file.
Is there a way to use environment variables or other methods (like using arguments when running kubectl create) instead of text-replace tools?
What I've done is use envvars in the deployment configuration, then run apply/create with the output from an envsubst command:
deployment.yaml file:
[...]
spec:
  replicas: $REPLICA_COUNT
  revisionHistoryLimit: $HISTORY_LIM
[...]
during deploy:
$ export REPLICA_COUNT=10 HISTORY_LIM=10
$ envsubst < deployment.yaml | kubectl apply -f -
Unfortunately, there is no way to use environment variables directly with kubectl. The common solution is to use some kind of templating language plus processing, as you suggested.