Rancher Cluster `cloud_provider` YAML: How to use Active Directory Domain Credentials (i.e. username with backslash) in YAML? - kubernetes

I'm attempting to put my AD domain credentials into the YAML config file created by Rancher so that I can use vSphere storage within Rancher / Kubernetes; however, I'm running into an issue with the formatting of the virtual_center config portion:
(...)
virtual_center:
  <IP>:
    (...)
    user: "DOMAIN/username"
    password: <PASSWORD>
The cluster doesn't seem to like a backslash (or two backslashes including the escape character), and it also doesn't seem to like a forward slash.
How should I enter my domain credentials in here?
EDIT
Nvm, I figured it out.
JK, answer below.

Apparently the solution is to use a user@domain.site.local format rather than a DOMAIN\user format.
See:
https://github.com/rancher/rancher/issues/16371
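For reference, a minimal sketch of what the working block looks like with the UPN-style (user@domain) username; the IP and password stay as placeholders, exactly as in the question:
virtual_center:
  <IP>:
    user: "username@domain.site.local"
    password: <PASSWORD>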

Related

In docker-compose, what is the effect/purpose of `dns_search: .`

I am looking at the stackstorm docker-compose file, and within it almost all containers have a line dns_search: . According to the docker-compose documentation, dns_search is for the purpose of configuring search domains.
I am used to seeing this in the context of transparently appending a domain to unqualified short hostnames. For example, if I add dns_search: mydomain.com, I would expect "host1" to transparently resolve as "host1.mydomain.com".
I have never seen this set as a single dot . before. What is the effect/purpose of doing this configuration?
I'm posting the answer from the StackStorm GitHub project issue (see the comment on "dns_search: ."). Paraphrasing: it was useful in old versions of Docker (before 2017), before the ndots configuration was available. Nowadays that configuration has no impact, and it has in fact been removed from the stackstorm docker-compose file.
I believe this is because all domain names end in . under the hood, but browsers and other software abstract this away.
For example, under the hood www.google.com is actually www.google.com. (note the trailing dot).
So, in the docker-compose file, this would essentially be saying "Find me any domain"
A bit more detail on why there's an extra dot, if you're interested:
Domain name resolution is hierarchical, reading right to left, with each block, separated by a ., being a step in the process. A DNS resolver will first find a source for ., which will be able to return the address of a resolver for the next block, and so on until it reaches the final block, where it returns the full DNS record.
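A quick way to see the difference, assuming a Linux host with glibc's getent available: both commands below typically resolve to the same addresses, but the trailing dot in the first makes the name absolute, so the resolver never consults the search list for it:
getent hosts www.google.com.
getent hosts www.google.com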
Extending EdwardTeach's answer:
@ytjohn effectively said it was done in the past because putting dns_search: . configures the DNS search domains to be only . instead of inheriting the host's ones. I can't confirm that because I didn't test it.
Now, I tested what docker-compose does today, and in a container, cat /etc/resolv.conf returns:
nameserver 127.0.0.11
options ndots:0
Where options ndots:0 is (from resolv.conf docs):
ndots:n
Sets a threshold for the number of dots which must appear in a name given to res_query(3) (see resolver(3)) before an initial absolute query will be made. The default for n is 1, meaning that if there are any dots in a name, the name will be tried first as an absolute name before any search list elements are appended to it. The value for this option is silently capped to 15.
With ndots:0, every name is tried as an absolute name first, and only then are the search list elements appended.
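As a concrete reading of that, assume the container's /etc/resolv.conf also had a search line (example.com is a placeholder here):
nameserver 127.0.0.11
search example.com
options ndots:0
With ndots:0, a lookup of app is first sent out as the absolute name app., and app.example.com is only tried if that fails; with the default ndots:1, the search list would be tried first for a dotless name like app.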
How I came to this conclusion
The Github comment:
If you don't set this dns_search: ., then whatever the host has in search in their /etc/resolv.conf will get put into your container's /etc/resolv.conf.
This doesn't happen. My host has search domain[0]: broadband (macOS command: scutil --dns), but in Docker containers it doesn't show broadband (Linux command: cat /etc/resolv.conf). Instead, it says options ndots:0.
dns_search docs:
dns_search defines custom DNS search domains to set on the container network interface configuration. It can be a single value or a list.
What is a DNS search domain?
It is the list of domain suffixes used to resolve hostnames that are not fully qualified, e.g. hostname will be tried as hostname.example.com and then hostname.website.com if your search domain list is example.com, website.com. More information at https://superuser.com/a/184366
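In /etc/resolv.conf terms, that search domain list is just (using the same placeholder domains):
search example.com website.com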
In another repo (crossdock), their docker-compose file had the comment:
`dns_search: . # Ensures unified DNS config.`

How to setup a Kubernetes secret using GoDaddy certs for Nginx Ingress controller

I'm struggling to setup a kubernetes secret using GoDaddy certs in order to use it with the Ingress Nginx controller in a Kubernetes cluster.
I know that GoDaddy isn't the go-to place for that, but that's not in my hands...
Here what I tried (mainly based on this github post):
I have a mail from GoDaddy with two files: generated-csr.txt and generated-private-key.txt.
Then I downloaded the cert package from GoDaddy's website (I took the "Apache" server type, as it's the recommended one for Nginx). The archive contains three files (with generated names): xxxx.crt and xxxx.pem (same content in both files, they represent the domain cert) and gd_bundle-g2-g1.crt (which is the intermediate cert).
Then I concatenated the domain cert and the intermediate cert (let's name the result chain.crt) and tried to create a secret from these files with the following command:
kubectl create secret tls organization-tls --key generated-private-key.txt --cert chain.crt
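(For reference, the concatenation step above is roughly the following, with the domain cert first and the intermediate second, using the file names from the downloaded bundle:)
cat xxxx.crt gd_bundle-g2-g1.crt > chain.crt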
And my struggle starts with that command, as it throws this error:
error: tls: failed to find any PEM data in key input
How can I fix this, or what am I missing?
Sorry to bother you with something as trivial as this, but it's been two days and I'm really struggling to find proper documentation or an example that works for the Ingress Nginx use case...
Any help or hint is really welcome, thanks a lot to you!
This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.
As OP mentioned in comments, the issue was solved by adding a new line in the beginning of the file.
"The key wasn't format correctly as it was lacking a newline in the
beginning of the file. So this particular problem is now resolved."
Similar issue was also addressed in this answer.
The issue is tricky but easy to fix.
The private key file provided by GoDaddy is not properly encoded: it is encoded as UTF-8 with a BOM, so it starts with bytes that shouldn't be there. The nginx ingress does not understand them when ingesting the private key, which leads to the error.
The simple fix is to run the following command to properly encode the private key file:
iconv -c -f UTF8 -t ASCII generated-private-key.txt > generated-private-key-anssi.txt
And then you get the base64 private key as usual:
cat generated-private-key-anssi.txt | base64 -w 0
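(If you assemble the Secret manifest by hand instead of using kubectl create secret tls, that base64 output is what goes into the tls.key field; the names and values below are placeholders:)
apiVersion: v1
kind: Secret
metadata:
  name: organization-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64 of chain.crt>
  tls.key: <base64 of generated-private-key-anssi.txt>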
Now the ingress properly gets the SSL certificate. In case you need to see how the cert is processed, list the pods in the ingress-nginx namespace and check their logs.

Creating Kubernetes Endpoint in VSTS generates error

When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message "The Kubconfig does not contain user field. Please check the kubeconfig." is always displayed.
I have tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and all end up with the same issue. Has anyone been successful in creating a new endpoint?
This is followed up in TsuyoshiUshio/KubernetesTask issue 35.
I tried to reproduce this, however I can't.
I'm not sure, but my guess is a version mismatch between the cluster and the kubectl downloaded by the download task, or the kubeconfig.
Workaround might be like this:
Run kubectl version on your local machine and check the current server/client versions
Specify the same version as the server on the download task (by default it is 1.5.2)
Look at the log of your failing release pipeline; you can see which kubectl command has been executed. Run the same command on your local machine, adapted to your local PC's environment.
The point is: before going to VSTS, download kubectl yourself.
Then, put the kubeconfig in the default location like ~/.kube/config, or set the KUBECONFIG environment variable to point to the config file.
Then execute kubectl get nodes and make sure it works (a minimal sketch of this local check follows below).
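A sketch of that local check, assuming the kubeconfig has been saved to ~/.kube/config (adjust the path or KUBECONFIG as needed):
export KUBECONFIG=$HOME/.kube/config   # not required if the file is already at the default path
kubectl version                        # compare client and server versions
kubectl get nodes                      # confirm the kubeconfig actually authenticates against the cluster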
My kubeconfig has a different format from yours. If you use AKS, use the az aks install-cli command and the az aks get-credentials command.
Please refer to https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough .
If it works locally, the config file should work in the VSTS task environment (or this task or VSTS has a bug).
I had the same problem on VSTS.
Here is my workaround to get a Service Connection working (in my case to GCloud):
Switched Authentication to "Service Account"
Run the two commands mentioned by the info icon next to the Token and Certificate fields: "Token to authenticate against Kubernetes. Use the ‘kubectl get serviceaccounts -o yaml’ and ‘kubectl get secret -o yaml’ commands to get the token."
kubectl get secret -o yaml > kubectl-secret.yaml
Search inside the file kubectl-secret.yaml for the values ca.crt and token
Enter these values into the required fields inside VSTS
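Equivalently, the two values can be pulled out directly with jsonpath; the secret name below is a placeholder for your service account's token secret:
kubectl get secret <sa-token-secret> -o jsonpath='{.data.ca\.crt}' | base64 -d
kubectl get secret <sa-token-secret> -o jsonpath='{.data.token}' | base64 -d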
The generated config I was using had a duplicate line; removing it corrected the issue for me.
users:
- name: cluster_stuff_here
- name: cluster_stuff_here

Ansible deploy security certificates to remote servers

I'm a bit stumped on an Ansible issue. I've gotten a portion of my setup script working for my database servers, and I would like Ansible to be able to manage each server's postgresql.conf file. I currently have it pushing out an up-to-date copy of the config file, but this has presented a problem.
Our security certificates are unique to each server, and the postgresql.conf has parameters for setting these up for each server. I've currently got Ansible calculating the proper initial values for things like shared_buffers and effective_cache_size, but do not know how to get it to push a unique certificate out to each remote server, or to uniquely set the name in the config file to match the certificate name.
Are these even possible with Ansible?
I had a similar requirement recently to deploy a specific file (Java keystore) per host, along with the relevant keystore password to decrypt and use it.
For the per-host keystore password, I used host_vars inside the group inventory file inventory/demoGroupName/hosts:
hostname1.com keystore_password="{{ vault_keystore_password_hostname1.com }}"
hostname2.com keystore_password="{{ vault_keystore_password_hostname2.com }}"
and then in inventory/demoGroupName/vault:
vault_keystore_password_hostname1.com=superSecurePassword
vault_keystore_password_hostname2.com=nonRepeatedPassword
(Note: the use of vault_ as a prefix is recommended in Ansible's best practices, but feel free to modify this to suit your scenario)
and then in the job, simply place {{ keystore_password }} in the relevant spot, as an example:
- name: arbitrary tasks
  command: configure-keystore --password="{{ keystore_password }}"
Now, in my scenario, I placed the individual keystores into the roles/role_name/files directory and copied them across by substituting {{ inventory_hostname }} into the filename, but this answer gives a better solution in my opinion. Either will work for your situation, but the latter is probably better long-term. If the certificate can be given the same name on all hosts, that will also simplify your situation somewhat.
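A minimal sketch of that per-host copy for the certificate case, assuming the files under roles/role_name/files/ are named after each inventory host (the destination path and ownership are placeholders for your postgresql setup):
- name: Deploy this host's certificate
  copy:
    src: "{{ inventory_hostname }}.crt"
    dest: /etc/postgresql/server.crt
    owner: postgres
    group: postgres
    mode: "0600"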

Configuring FQDN for GCE instance on startup

I am trying to start a google compute engine (GCE) instance with a pre-configured FQDN. We are intending to run an application that is licensed based on the contents of /etc/hosts.
I am starting the instances using the Google Cloud SDK utility - gcloud.
I have tried setting the "hostname" key using the metadata option like so:
gcloud compute instances create mynode (standard opts) --metadata hostname=mynode.example.com
Whenever I log into the developer console, under Compute, Instances, I can see hostname under "Custom metadata". This appears to be a new, custom key - it has no impact on what
http://metadata.google.internal/computeMetadata/v1/instance/hostname
returns.
I have also tried setting "instance/hostname" like the below, which causes a parsing error when using gcloud.
--metadata instance/hostname=mynode.example.com
I have successfully used the startup-script functionality of the metadata server to run a startup script that parses the new internal IP address of the newly created instance and updates /etc/hosts. This appears to work but doesn't feel "like the Google way".
Can I configure the FQDN (specifically, a domain name, as the instance name is always the hostname) of an instance, during instance creation, using the metaserver functionality?
Try this:
Go to your GCE >> VM instances panel.
Stop your GCE instance.
Click on the instance name.
Edit your instance, adding these values in the Custom metadata fields:
Key field: hostname / Value field: your.server.hostname
Key field: startup-script / Value field: sudo -s hostnamectl set-hostname your.server.hostname
Finally, start your instance and test with a hostnamectl command.
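The same metadata can also be set from the command line instead of the console, roughly like this (the instance name and hostname are placeholders):
gcloud compute instances add-metadata mynode \
  --metadata hostname=your.server.hostname,startup-script="sudo -s hostnamectl set-hostname your.server.hostname"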
regards!
According to this article, 'hostname' is part of the default metadata entries that provide information about your instance, and it is NOT possible to manually edit any of the default metadata pairs. You can also take a look at this video from the Google team; within the first few minutes it is mentioned that you cannot modify default metadata pairs. As such, it does not seem like you can specify the hostname upon instance creation other than through the use of a start-up script, as you've done already. It is also worth mentioning that the hostname you've specified will get deleted and auto-synced by the metadata server upon reboot unless you're using a start-up script or something else that modifies it every time.
If what you're currently doing works for what you're trying to accomplish, it might be the only workaround to your scenario.
Here is a patch for /usr/share/google/set-hostname to set FQDN to GCE instance.
https://gist.github.com/yuki-takeichi/3080521322f0f1d159ea6a343e2323e6
Before you use this patch, you must set your desired FQDN in your instance's metadata by specifying the hostname key.
The hostname is set each time the instance's IP address is renewed by dhclient. set-hostname is just a hook script that dhclient executes, passing it the new IP address and internal hostname, and it modifies /etc/hosts. This patch changes the source of the hostname by querying the instance's metadata from the metadata server.
The original set-hostname script is here:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_config/bin/set_hostname.
Use this patch at your own risk.
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and eliminate the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
I've looked throughout this site for answered questions and found a few things that work, but only with a couple of solutions combined. This thread seems like the place to answer.
1) echo example.com > /etc/hostname
2) add 127.0.1.1 example.com to /etc/hosts
3) add the command hostnamectl set-hostname example.com to the /etc/rc.local script
4) uncomment the /etc/dhcp/dhclient.conf line: supersede domain-name "example.com";
5) profit.... seems to stick after each reboot
(Note: example.com is your domain name, e.g. yourfqdndomain.com or yourfqdndomain.org)
Also note this is for Ubuntu or Debian; other Unix-like systems may vary slightly. I've tested this on Ubuntu 16.04.
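Putting steps 1-4 above together as a rough shell sketch (example.com is your FQDN placeholder; Ubuntu/Debian assumed):
echo "example.com" > /etc/hostname
echo "127.0.1.1 example.com" >> /etc/hosts
echo "hostnamectl set-hostname example.com" >> /etc/rc.local
# and in /etc/dhcp/dhclient.conf, uncomment:
#   supersede domain-name "example.com";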
Regarding the wording "NOT possible to manually edit any of the default metadata pairs": what about the instance-level default metadata "/scheduling"? We could set it manually, as mentioned in this article.