I am looking at the StackStorm docker-compose file, and within it almost all containers have the line `dns_search: .`. According to the docker-compose documentation, `dns_search` configures search domains.
I am used to seeing this in the context of transparently appending a domain to unqualified short hostnames. For example, if I add `dns_search: mydomain.com`, I would expect "host1" to transparently resolve as "host1.mydomain.com".
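For illustration, a compose sketch of that familiar usage (the service and image names here are placeholders of mine):

services:
  app:
    image: alpine
    dns_search:
      - mydomain.com   # "host1" would be tried as "host1.mydomain.com"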
I have never seen this set to a single dot . before. What is the effect/purpose of this configuration?
I'm posting the answer from the StackStorm GitHub project issue (see the comment on "dns_search: ."). Paraphrasing: it was useful in old versions of Docker, before 2017, when the ndots configuration was not yet available. Nowadays that configuration has no impact, and it has in fact been removed from the StackStorm docker-compose file.
I believe this is because all domain names end in . under the hood, but browsers and other software abstract this away.
For example, under the hood www.google.com is actually `www.google.com.` (note the trailing dot).
So, in the docker-compose file, this is essentially saying "use the root . as the search domain", i.e. treat every name as already fully qualified.
A bit more detail on why there's an extra dot, if you're interested:
Domain name resolution is hierarchical, read right to left, with each block, separated by a ., being a step in the process. A DNS resolver will first query a server for ., which returns the address of a resolver for the next block, and so on until it reaches the final block, where the full DNS record is returned.
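You can watch this walk happen yourself with dig, if it is installed (the +trace option queries from the root down):

# follow the delegation chain starting at the root (.)
dig +trace www.google.com.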
Extending EdwardTeach's answer:
@ytjohn effectively said they did this in the past because setting dns_search: . configures the DNS search domain to be only . instead of inheriting the host's. I can't confirm that, because I didn't test it.
Now, I tested what docker-compose does today, and in a container cat /etc/resolv.conf returns:
nameserver 127.0.0.11
options ndots:0
Where options ndots:0 means (from the resolv.conf docs):
ndots:n
Sets a threshold for the number of dots which must
appear in a name given to res_query(3) (see
resolver(3)) before an initial absolute query will
be made. The default for n is 1, meaning that if
there are any dots in a name, the name will be
tried first as an absolute name before any search
list elements are appended to it. The value for
this option is silently capped to 15.
With ndots:0, every name is first tried as an absolute name, and only then with the search list elements appended.
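If you want to set this explicitly rather than rely on the default, Compose exposes dns_opt (a sketch; support depends on your Compose file version, and the values are illustrative):

services:
  app:
    image: alpine
    dns_search:
      - example.com
    dns_opt:
      - ndots:1   # restore "absolute first only if the name contains a dot"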
How I came to this conclusion
The Github comment:
If you don't set this dns_search: ., then whatever the host has in search in their /etc/resolv.conf will get put into your container's /etc/resolv.conf.
This doesn't happen: my host has search domain[0] : broadband (macOS command: scutil --dns), and in Docker containers broadband does not show up (Linux command: cat /etc/resolv.conf). Instead, it says options ndots:0.
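To reproduce the check on any machine with Docker, a throwaway container is enough:

# launch a short-lived container and inspect its resolver config
docker run --rm alpine cat /etc/resolv.conf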
dns_search docs:
dns_search defines custom DNS search domains to set on container network interface configuration. Can be a single value or a list.
What is a DNS search domain?
It is a domain appended to hostnames that are not fully qualified: e.g. hostname will be tried as hostname.example.com and then hostname.website.com if your search domain list is example.com, website.com. More information on https://superuser.com/a/184366
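As a concrete illustration, a hypothetical host /etc/resolv.conf with that search list would read:

# /etc/resolv.conf with two search domains
search example.com website.com
nameserver 8.8.8.8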
In another repo (crossdock), their docker-compose file had the comment:
`dns_search: . # Ensures unified DNS config.`
Related
I am new to CoreDNS configuration in Kubernetes, and I'm trying to explore the plugins CoreDNS provides there. I see a plugin named local which will respond with a reply to local requests, but I could not understand a use case where this plugin would actually be useful. Can someone explain with an example how it can be made use of? Also, in the unbound configuration man page, I see an option called local-zone:
local-zone:
Configure a local zone. The type determines the answer to give if there is no match from local-data. The types are deny, refuse, static, transparent, redirect, nodefault, typetransparent, and are explained below. After that the default settings are listed. Use local-data: to enter data into the local zone. Answers for local zones are authoritative DNS answers. By default the zones are class IN.
nodefault:
Used to turn off default contents for AS112 zones. The other types also turn off default contents for the zone. The 'nodefault' option has no other effect than turning off default contents for the given zone.
Does this local plugin behave similarly to this unbound local-zone option? If not, is there any plugin that acts like local-zone in unbound? I am expecting a CoreDNS plugin that behaves like local-data in unbound, particularly for the nodefault type in local-data, e.g. local-zone: nodefault. It would be really helpful if someone could help me clear this up. Thanks in advance!
As the author of the local plugin for CoreDNS explains, localhost.<searchpath> queries were hitting CoreDNS, which is wrong. So he wrote this plugin to intercept localhost.<domain> queries and return the correct response.
From the official CoreDNS GitHub page:
local will respond with a basic reply to a "local request". Local requests are defined to be names in the following zones: localhost, 0.in-addr.arpa, 127.in-addr.arpa and 255.in-addr.arpa, plus any query asking for localhost.<domain>.
With local enabled any query falling under these zones will get a reply. This prevents the query from "escaping" to the internet and putting strain on external infrastructure.
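For context, a minimal Corefile sketch of how local might sit in front of a forwarder (the upstream 8.8.8.8 is just an illustration of mine, not anything the question prescribes):

. {
    local               # answer localhost, 127.in-addr.arpa etc. locally
    forward . 8.8.8.8   # everything else goes upstream
}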
You can check the code of the local plugin here. I don't see anything similar to unbound's local-zone nodefault functionality there.
See all in-tree plugins for CoreDNS here.
This is kind of a weird complicated question.
Context:
I have a bunch of docker containers that need to be routed to from haproxy dynamically. They are each running on different ports on the machine, and are stored in environment variables like this:
a=9873
b=9874
c=9875
These are available to the haproxy server. The request path that comes in will be in the form like this example:
/api/a/action
From that, the task is as follows:
The /api needs to be removed from the path.
The /a refers to the service, so the environment variable a needs to be retrieved to get the port of the server.
The request needs to be routed to localhost:9873/a/action, where the port, 9873, is the value of the environment variable named by the first path element (after removing /api), and the remaining path (the /a/action left after removing /api) is simply appended to the request.
My current config looks like this:
backend api
reqrep ^([^\ ]*\ /)api[/]?(.*) \1\2
server api_server localhost:9871
All this config does is remove the /api from the path of the request and send it to a static port, 9871. *I need this port to be the value held by the environment variable with the same name as the first element of the path (the /a above); the rest (passing the remaining path on) is already working.*
I would also like to be able to get the environment variable prefix_a when the path element is /a, i.e. prepend one common prefix prefix_ before looking up the variable. This can be a separate question or search, unless it's simple to fold into the solution.
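For what it's worth, here is how I imagine this could be wired up when one static backend per service is acceptable. This is only a sketch: it assumes HAProxy 1.6+ for the field converter, the frontend/backend names (api_fe, svc_a) are mine, and "${a}" / "${prefix_a}" rely on HAProxy expanding environment variables inside double-quoted config strings:

frontend api_fe
    bind :8080
    # /api/a/action -> svc_a: the 3rd "/"-separated field of the path names the service
    use_backend svc_%[path,field(3,/)] if { path_beg /api/ }

backend svc_a
    reqrep ^([^\ ]*\ /)api[/]?(.*) \1\2
    # "${a}" is read from the environment when the config is parsed;
    # the prefixed variant would be "localhost:${prefix_a}"
    server api_server "localhost:${a}"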
Please let me know if I can clarify or give more information that might help solve the problem.
I've done a heck of a lot of googling. Here are some related URLs, but none are quite the answer I need:
https://gist.github.com/meineerde/8bea63e64fc47f9a67c0
Dynamic routing to backend based on context path in HAProxy
How can I set up HAProxy to a backend based on a value in the url?
Haproxy route and rewrite based on URI path
haproxy: get the host name
https://serverfault.com/questions/818937/haproxy-is-giving-me-problems-with-regex-replace-is-this-a-bug-or-am-i-doing-so
https://serverfault.com/questions/668025/how-to-use-environment-variable-in-haproxy
http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.2
How do I set a dynamic variable in HAProxy?
use environment variables in haproxy
I'm a newbie with domain names, DNS, etc.
I'm using surge.sh for deploying my app. Now I want to add a custom domain that I registered with TransIP, and I can't get it working. I set the IP address to 45.55.110.124, as they explain here. Altogether, I entered the following settings:
Name: *
TTL: 1 min
Type: A
Address: 45.55.110.124
And another one, exactly the same but using the name #:
Name: #
TTL: 1 min
Type: A
Address: 45.55.110.124
I created a test page that contains hello domain inside a simple HTML file. Then I deployed the page by moving to the folder that contains the HTML file and running: surge ./ mydomain.io.
I waited over 5 minutes and nothing changed.
Now, my questions are:
What am I doing wrong?
My domain provider suggests that I also use an IPv6 address, but which one should I use for Surge?
Why is there an option to set a TTL longer than 1 minute? Who wants to wait longer before their deploy comes online?
For starters, you want to use a CNAME record instead of an A record if possible. The reason is that their IP address can change out from under you when infrastructure changes, updates, or re-deploys happen. If possible, remove the A records and create CNAME records pointing to na-west1.surge.sh. instead.
Next, assuming they want you to point to the same IP that na-west1.surge.sh resolves to, that IP differs from the documentation (possibly for the very reason just described). You can ping the domain or use the host utility to get the current IP address:
$ host na-west1.surge.sh
na-west1.surge.sh has address 138.197.235.123
Armed with this information, try changing to CNAME records first. If that isn't possible, use the updated IP address you get from resolving their CNAME.
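In zone-file terms, the suggested end state might look something like this (the record names are illustrative, and TransIP's panel may present them differently):

; preferred: alias the host to surge's endpoint
www   IN CNAME na-west1.surge.sh.
; fallback only if your provider cannot CNAME the bare domain
@     IN A     138.197.235.123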
I am trying to start a google compute engine (GCE) instance with a pre-configured FQDN. We are intending to run an application that is licensed based on the contents of /etc/hosts.
I am starting the instances using the Google Cloud SDK utility - gcloud.
I have tried setting the "hostname" key using the metadata option like so:
gcloud compute instances create mynode (standard opts) --metadata hostname=mynode.example.com
Whenever I log into the developer console, under Compute > Instances, I can see hostname under "Custom metadata". This appears to be a new, custom key; it has no impact on what:
http://metadata.google.internal/computeMetadata/v1/instance/hostname
returns.
I have also tried setting "instance/hostname" as below, which causes a parsing error in gcloud.
--metadata instance/hostname=mynode.example.com
I have successfully used the startup-script functionality of the metadata server to run a script that parses the new internal IP address of the newly created instance and updates /etc/hosts. This works, but doesn't feel like "the Google way".
Can I configure the FQDN (specifically, the domain name, as the instance name is always the hostname) of an instance during instance creation, using the metadata server functionality?
Try this:
Go to your GCE >> VM instances panel.
Stop your GCE instance.
Click on the instance name.
Edit your instance, adding these values in the Custom metadata fields:
Key field: hostname / Value field: your.server.hostname
Key field: startup-script / Value field: sudo -s hostnamectl set-hostname your.server.hostname
Finally, start your instance and test with a hostnamectl command.
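The same metadata can also be set from the CLI, if you prefer (a sketch; "mynode" and the hostname value are placeholders):

# add both keys to an existing instance
gcloud compute instances add-metadata mynode \
  --metadata hostname=your.server.hostname,startup-script='hostnamectl set-hostname your.server.hostname'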
regards!
According to this article, 'hostname' is part of the default metadata entries that provide information about your instance, and it is NOT possible to manually edit any of the default metadata pairs. You can also take a look at this video from the Google team; within the first few minutes it is mentioned that you cannot modify default metadata pairs. As such, it does not seem like you can specify the hostname upon instance creation other than through a start-up script, as you've done already. It is also worth mentioning that the hostname you've specified will get deleted and auto-synced by the metadata server upon reboot unless a start-up script (or something similar) re-applies it every time.
If what you're currently doing works for what you're trying to accomplish, it might be the only workaround to your scenario.
Here is a patch for /usr/share/google/set-hostname to set FQDN to GCE instance.
https://gist.github.com/yuki-takeichi/3080521322f0f1d159ea6a343e2323e6
Before you use this patch, you must set your desired FQDN in your instance's metadata under the hostname key.
The hostname is set each time the instance's IP address is renewed by dhclient. set-hostname is just a hook script that dhclient executes, handing it the new IP address and internal hostname, and it modifies /etc/hosts. This patch changes the source of the hostname by querying the instance's metadata from the metadata server.
The original set-hostname script is here:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_config/bin/set_hostname.
Use this patch at your own risk.
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and eliminate the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
I've looked throughout this site for answered questions and found a few things that work, but only with a couple of solutions combined. This thread seems the place to answer.
1) echo example.com > /etc/hostname
2) Add 127.0.1.1 example.com to /etc/hosts
3) Add the command hostnamectl set-hostname example.com to the /etc/rc.local script
4) Uncomment this /etc/dhcp/dhclient.conf line: supersede domain-name "example.com";
5) Profit... it seems to stick after each reboot.
(Note: example.com stands for your domain name, e.g. fqdndomain.com or yourfqdndomain.org)
Also note this is for Ubuntu or Debian; other Unix systems may vary slightly. I've tested this on Ubuntu 16.04.
Regarding the wording NOT possible to manually edit any of the default metadata pairs: what about the instance-level default metadata "/scheduling"? We can set those manually, as mentioned in this article.
I have an HAProxy install that was configured by someone who left the company. It runs on Ubuntu 10.04 and it seems to use 3 configuration files in the directory /etc/haproxy:
haproxy.cfg
haproxy.http.cfg
haproxy.https.cfg
I don't see the point of the haproxy.https.cfg file, as I believe (in our configuration) it can all be configured from a single haproxy.http.cfg file, but when I remove that httpS file it complains bitterly and refuses to run. My question:
Is this the standard configuration HAProxy uses? If not (I can't find a reference to the "S" file anywhere), can anyone suggest how HAProxy concludes it should use it?
Thanks
The very answer to your question: your haproxy is simply launched with those three config files (-f haproxy.cfg -f haproxy.http.cfg -f haproxy.https.cfg), probably from /etc/init.d/haproxy, but mileage varies depending on your distribution.
If you remove the file, of course it will complain.
This is not particularly standard, but it ain't bad either; it helps structure the conf rather than having one very long file.
The task of the .https version will certainly be to redirect HTTPS traffic towards a service that can handle HTTPS (stunnel or nginx, usually), since haproxy cannot terminate SSL connections (stunnel has to be patched; see the haproxy page).
If you want, you can merge those files into one or two; just find out how haproxy is launched (check init.d, or let us know which distribution) and fix it appropriately.
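To confirm exactly which files your instance loads, you can inspect the running process and the init scripts (a sketch; paths assume an init.d system like Ubuntu 10.04):

# see which -f config files the running process was started with
ps -ef | grep [h]aproxy
# and find where those flags come from
grep -n "haproxy" /etc/init.d/haproxy /etc/default/haproxy 2>/dev/null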
I believe that only /etc/haproxy/haproxy.cfg is used by default.
This may be of use to you (1.4 configuration reference):
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt