I used to send data to InfluxDB using Telegraf.
After that, I changed the hostname and erased all the data, but the old host was not removed.
Is there a way to delete unused hosts?
This is the host currently in use.
All the data on the hosts that are not in use has been deleted, but the hosts themselves remain:
import "influxdata/influxdb/schema"
schema.tagValues(bucket: "${bucket}", tag: "host")
If I search by the host tag in Grafana, the unused hosts are also displayed.
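If this is InfluxDB 2.x, the remaining series behind a stale tag value can usually be removed with the influx CLI; a minimal sketch, where the org name, bucket name, and old hostname are placeholders:

influx delete \
  --org my-org \
  --bucket mybucket \
  --start 1970-01-01T00:00:00Z \
  --stop 2100-01-01T00:00:00Z \
  --predicate 'host="old-hostname"'

Once no series carry the old host value, schema.tagValues() should stop returning it.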
I am looking at the StackStorm docker-compose file, and within it almost all containers have the line `dns_search: .`. According to the docker-compose documentation, dns_search is for configuring search domains.
I am used to seeing this in the context of transparently adding a domain to unqualified short hostnames. For example, if I add dns_search: mydomain.com, I would expect "host1" to transparently resolve as "host1.mydomain.com".
I have never seen this set to a single dot . before. What is the effect/purpose of this configuration?
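For context, the ordinary (non-dot) use of a search domain is easy to reproduce with plain Docker; a quick sketch, assuming a local Docker install and the public alpine image:

docker run --rm --dns-search mydomain.com alpine cat /etc/resolv.conf
# expected to contain a line like: search mydomain.com
# with that in place, "host1" would be tried as "host1.mydomain.com"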
I'm posting the answer from the StackStorm GitHub project issue (see the comment on "dns_search: ."). Paraphrasing: it was useful in old versions of Docker, before 2017, when the ndots configuration was not yet available. Nowadays the setting has no impact, and it has in fact been removed from the StackStorm docker-compose file.
I believe this is because all domain names end in . under the hood, but browsers and other software abstract this away.
For example, under the hood www.google.com is actually www.google.com. (note the trailing dot).
So, in the docker-compose file, this would essentially be saying "find me any domain".
A bit more detail on why there's an extra dot, if you're interested:
Domain name resolution is hierarchical, reading right to left, with each block, separated by a ., being a step in the process. A DNS resolver will first find a server for . (the root), which can return the address of a resolver for the next block, and so on, until it reaches the final block, where it returns the full DNS record.
Extending EdwardTeach's answer:
@ytjohn effectively said they did it in the past because putting dns_search: . configures the DNS search domains to be only . instead of inheriting the host's ones. I can't confirm that because I didn't test it.
Now, I tested what docker-compose does today, and in a container cat /etc/resolv.conf returns:
nameserver 127.0.0.11
options ndots:0
Where options ndots:0 is (from resolv.conf docs):
ndots:n
Sets a threshold for the number of dots which must
appear in a name given to res_query(3) (see
resolver(3)) before an initial absolute query will
be made. The default for n is 1, meaning that if
there are any dots in a name, the name will be
tried first as an absolute name before any search
list elements are appended to it. The value for
this option is silently capped to 15.
With ndots:0, every name is attempted as an absolute name first, and only then using the search list.
How I came to this conclusion
The GitHub comment:
If you don't set this dns_search: ., then whatever the host has in search in their /etc/resolv.conf will get put into your container's /etc/resolv.conf.
This doesn't happen. My host has search domain[0]: broadband (macOS command: scutil --dns), and in Docker containers it doesn't show broadband (Linux command: cat /etc/resolv.conf). Instead, it says options ndots:0.
The dns_search docs:
dns_search defines custom DNS search domains to set on container network interface configuration. Can be a single value or a list.
What is a DNS search domain?
It is a domain suffix used to resolve hostnames that are not fully qualified, e.g. hostname will be tried as hostname.example.com and then hostname.website.com if your search domain list is example.com, website.com. More information at https://superuser.com/a/184366
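As a hand-written illustration (not taken from any of the containers above), a resolv.conf with that search list would look like:

search example.com website.com
nameserver 8.8.8.8

With ndots at its default of 1, a lookup of hostname (no dots) is first tried as hostname.example.com, then hostname.website.com.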
In another repo (crossdock), the docker-compose file had the comment:
`dns_search: . # Ensures unified DNS config.`
When I attempt to connect to an instance of a PostgreSQL database I've created per the AWS "Create and Connect to a PostgreSQL Database with Amazon RDS" tutorial (https://aws.amazon.com/getting-started/tutorials/create-connect-postgresql-db/), I receive an error that reads:
[08001] The connection attempt failed.
java.net.SocketTimeoutException: connect timed out.
The database is set to allow incoming and outgoing traffic on all ports and from all IP addresses. I am completely at a loss as to how to get this working and have reached out to AWS Support for their input, but, as yet, all I've done is follow the directions prescribed by the AWS tutorial--to no avail.
Does anyone know what might be the issue?
Edit: I should mention that my host URL, port number, database name, etc. have all been entered correctly into DataGrip, so none of those are the issue.
All right--I've figured it out.
First off, @Mark B was right: the issue was that I hadn't yet made the database itself publicly accessible via the VPC security group it belonged to. To do this, from the database detail screen in AWS, I:
clicked (what for me was the one and only) link beneath the "VPC security groups" of the database's dashboard, which directed me to the EC2 Security Groups screen
clicked the security group link related to my database, which directed me to that group's detail page
clicked the "Edit inbound rules" button which directed me to the "Edit inbound rules" screen
clicked the "Add rule" button, which caused a table row containing the following columns: "Type", "Protocol," "Port Range," "Source," "Description - optional"
selected "PostgreSQL" for the "Type" column, which caused the values of "TCP" and "5432" to populate the "Protocol" and "Port range" columns respectively, entered my machine's IP address ("123.456.789.012/32"--no quotes and no parentheses), and left "Description - optional" blank, because, well, it's optional.
Finally, I guess I'd forgotten to explicitly name the database, so my attempts to enter what was ostensibly the database's name (that is, "database-1") resulted in a connection error indicating that "database-1" does not exist. So, for the sake of ease and simply verifying my database connection, I entered "postgres" as the database name in my database client (I'm presently using DataGrip), because "postgres" is the default database name for a PostgreSQL instance.
And that should work. I'm sure this is all no-brainer stuff to those more experienced with AWS, but it's new to me and presumably to many others.
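To verify end to end, a quick connection test with psql works just as well as DataGrip; the endpoint below is a placeholder for your RDS instance's hostname:

psql -h database-1.abc123xyz.us-east-1.rds.amazonaws.com -p 5432 -U postgres -d postgres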
Thanks again, @Mark B, for sending me down the right path.
I'm a newbie in domain names, DNS, etc.
I'm using surge.sh to deploy my app. Now I want to add a custom domain that I registered with TransIP, and I can't get it working. I set the IP address to 45.55.110.124, as they explain here. All together, I entered the following settings:
Name: *
TTL: 1 min
Type: A
Address: 45.55.110.124
And another one, exactly the same but using the name #:
Name: #
TTL: 1 min
Type: A
Address: 45.55.110.124
I created a test page containing hello domain inside a simple HTML file. I deployed the page by moving to the folder that contains the HTML file and running: surge ./ mydomain.io.
I waited over 5 minutes and nothing has changed.
Now, my questions are:
What am I doing wrong?
My domain provider suggests that I also use an IPv6 address, but which one should I use for Surge?
Why is there an option of setting TTL longer than 1 minute, who wants to wait longer before their deploy comes online?
For starters, you want to use a CNAME record instead of an A record if possible. The reason is that their IP address can change out from under you when infrastructure changes, updates, or re-deploys happen. If possible, remove the A records and create CNAME records pointing to na-west1.surge.sh. instead.
Next, assuming they want you to point to the same IP that na-west1.surge.sh resolves to, that IP differs from the documentation (possibly even because of my previous explanation). You can ping the domain or use the host utility to get the current IP address:
$ host na-west1.surge.sh
na-west1.surge.sh has address 138.197.235.123
Armed with this information, try changing to CNAME records first. If that isn't possible, use the updated IP address you get from resolving their CNAME.
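To confirm the CNAME is in place once DNS has propagated, dig can be run from any machine (mydomain.io stands in for your actual domain):

dig +short CNAME mydomain.io
# expected output: na-west1.surge.sh.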
I have a Parse Server running on top of a MongoDB, and it's running 100% fine on my Dev Server, which is hosted on DigitalOcean. There I'm able to send GET requests to my server to obtain an image, as well as access the image via its Parse-Dashboard.
I cloned that droplet to set up a Production Server, and everything is running fine... except I can't access the images from Parse that were either cloned from the Dev Server or uploaded after I initialized the new Production Server. I'm able to send GET requests to obtain all other fields, except for the image files. I also can't access an image file via the Parse-Dashboard - it returns a 404 - Oh no, we can't find that page! error at the following URL: http://server.ip/parse/files/ProdServer/de632aeb61f7265926e554fabfb25180_image1.png
Other key things to note:
The Dev Server is hosted on a domain that has an SSL certificate; could it be an SSL issue?
I'm initializing the parse-dashboard with the --allowInsecureHTTP flag
Everything (even before the SSL) was working on the Dev Server beforehand
all packages + dependencies are up-to-date
tl;dr
How do I access the image files from my Parse Server, via Parse-Dashboard or GET request?
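One quick way to see what the server actually returns for a file URL is a HEAD request with curl, using the URL from above:

curl -I http://server.ip/parse/files/ProdServer/de632aeb61f7265926e554fabfb25180_image1.png
# a 404 here means the file route is wrong or the file isn't reachable at that URL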
A couple of methods I tried... Since this was an elaborate process for me, allow me to document the methods I tried to resolve this issue:
The first issue was, do the files exist? If so, where are they stored?
By accessing my parse-dashboard on port 4040, I tried to view the image path via the URL... So I knew it existed somewhere, and I recursively searched my entire server for the file path, but to no avail.
Then with more research I found that any file over 16 MB gets converted into a GridFS object, i.e. the images are stored in my MongoDB. The way you access these objects is through a utility called mongofiles.
By running mongofiles -d dbname list, I was able to view a list of all the images stored on my Parse Server.
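(mongofiles can also pull a file out for inspection; a sketch using the filename from the URL above, which writes the file to the current directory:

mongofiles -d dbname get de632aeb61f7265926e554fabfb25180_image1.png

)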
Just to ensure the images weren't corrupt, I also sftp'd the images over to my local machine, and fortunately I could view them. So the problem was that the images weren't being served correctly...
The next issue was, how come the images aren't being served correctly?
So my parse-dashboard was being served on port 4040, but for some reason the image file paths within their respective URLs were being prefixed with the same port 4040... It turns out that within my Parse Server config, the parse-server URL was pointing to port 4040, but it was being served on ****. By changing my URL back to ****, my images were able to render properly on my parse-dashboard, and I was able to send HTTP requests for the images as well :)
tl;dr make sure your image file path is being served on the same port where your parse-server is being served
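For illustration, the relevant bit when launching parse-server from the CLI; the app ID, master key, and database URI are placeholders, and PORT stands for whatever port parse-server actually listens on:

parse-server \
  --appId APPLICATION_ID \
  --masterKey MASTER_KEY \
  --databaseURI mongodb://localhost:27017/dbname \
  --serverURL http://server.ip:PORT/parse
# serverURL must use the port parse-server is really served on,
# or file URLs will be generated with the wrong port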
I am trying to start a google compute engine (GCE) instance with a pre-configured FQDN. We are intending to run an application that is licensed based on the contents of /etc/hosts.
I am starting the instances using the Google Cloud SDK utility - gcloud.
I have tried setting the "hostname" key using the metadata option like so:
gcloud compute instances create mynode (standard opts) --metadata hostname=mynode.example.com
Whenever I log into the Developer Console, under Compute > Instances, I can see hostname under "Custom metadata". This appears to be a new, custom key - it has no impact on what:
http://metadata.google.internal/computeMetadata/v1/instance/hostname
returns.
I have also tried setting "instance/hostname" like the below, which causes a parsing error when using gcloud.
--metadata instance/hostname=mynode.example.com
I have successfully used the startup-script functionality of the metadata server to run a script that parses the new internal IP address of the newly created instance and updates /etc/hosts. This works, but doesn't feel "like the Google way".
Can I configure the FQDN (specifically, the domain name, since the instance name is always the hostname) of an instance during instance creation, using the metadata server functionality?
Try this:
Go to your GCE >> VM instances panel.
Stop your GCE instance.
Click on the instance name.
Edit your instance, adding these values in the Custom metadata fields:
Key field: hostname / Value field: your.server.hostname
Key field: startup-script / Value field: sudo -s hostnamectl set-hostname your.server.hostname
Finally, start your instance and test with a hostnamectl command.
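The same metadata can be set without the console; a gcloud sketch, reusing the instance name from the question and a placeholder hostname:

gcloud compute instances add-metadata mynode \
  --metadata "hostname=mynode.example.com,startup-script=sudo -s hostnamectl set-hostname mynode.example.com"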
regards!
According to this article, 'hostname' is part of the default metadata entries that provide information about your instance, and it is NOT possible to manually edit any of the default metadata pairs. You can also take a look at this video from the Google team; within the first few minutes it is mentioned that you cannot modify default metadata pairs. As such, it does not seem like you can specify the hostname upon instance creation other than through a startup script, as you've done already. It is also worth mentioning that the hostname you've specified will get deleted and auto-synced by the metadata server upon reboot unless you use a startup script or something else that modifies it every time.
If what you're currently doing works for what you're trying to accomplish, it might be the only workaround to your scenario.
Here is a patch for /usr/share/google/set-hostname to set an FQDN on a GCE instance:
https://gist.github.com/yuki-takeichi/3080521322f0f1d159ea6a343e2323e6
Before you use this patch, you must set your desired FQDN in your instance's metadata by specifying the hostname key.
The hostname is set each time the instance's IP address is renewed by dhclient. set-hostname is just a hook script that dhclient executes, passing in the new IP address and internal hostname, and it modifies /etc/hosts. This patch changes the source of the hostname by querying the instance's metadata from the metadata server.
The original set-hostname script is here:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_config/bin/set_hostname.
Use this patch at your own risk.
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and eliminate the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
I've looked throughout this site to find answered questions and found a few things that work when a couple of solutions are combined. This thread seems the place to answer.
1) echo example.com > /etc/hostname
2) add 127.0.1.1 example.com to /etc/hosts
3) add the command hostnamectl set-hostname example.com to the /etc/rc.local script
4) uncomment this /etc/dhcp/dhclient.conf line: supersede domain-name "example.com";
5) profit.... Seems to stick after each reboot
(Note: example.com is your domain name, e.g. yourfqdndomain.com or yourfqdndomain.org.)
Also note this is for Ubuntu or Debian; other Unixes may vary slightly. I've tested this on Ubuntu 16.04.
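A consolidated sketch of those steps as shell commands, with example.com as the placeholder FQDN (Ubuntu 16.04; step 3's line must land before the final exit 0 in /etc/rc.local, and step 4 remains a manual edit):

echo example.com | sudo tee /etc/hostname
echo '127.0.1.1 example.com' | sudo tee -a /etc/hosts
# insert the hostnamectl command before rc.local's trailing "exit 0"
sudo sed -i 's|^exit 0|hostnamectl set-hostname example.com\nexit 0|' /etc/rc.local
# in /etc/dhcp/dhclient.conf, uncomment:
#   supersede domain-name "example.com";
sudo hostnamectl set-hostname example.com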
Regarding the wording "NOT possible to manually edit any of the default metadata pairs": what about the instance-level default metadata /scheduling? We can set that manually, as mentioned in this article.