nmcli: Static routes (CIDR format vs key value format) - networkmanager

When adding a persistent static route for bond3 via nmcli, it creates /etc/sysconfig/network-scripts/route-bond3 in the key-based format:
nmcli connection modify bond3 +ipv4.routes "11.20.30.0/24 198.57.100.1"
nmcli connection up bond3
cat /etc/sysconfig/network-scripts/route-bond3
ADDRESS0=11.20.30.0
NETMASK0=255.255.255.0
GATEWAY0=198.57.100.1
For comparison, the same route configured manually in route-bond3 using CIDR notation would look like:
11.20.30.0/24 via 198.57.100.1 dev bond3
Why isn't nmcli creating route-bond3 with the CIDR format?
Is the key-based format considered the older format compared to the CIDR format for route-bond3?
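For reference, both on-disk formats produce the same runtime route once the connection is up; a quick way to confirm this on the host (bond3 and the route values as above):
nmcli -g ipv4.routes connection show bond3
ip route show dev bond3
# should include: 11.20.30.0/24 via 198.57.100.1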
Thanks!

Related

Is it possible to set service with dots, dashes in docker swarm

I have a hostname with dots and dashes. I need to use that hostname as the service name.
Suppose my hostname is Prasanna.abc.in. I want to make that hostname a service name in the docker stack file.
Docker allows you to create a service with a "DNS name component": alphanumeric characters and dashes, up to 63 characters. See the hostname spec on Wikipedia. This does not let you use underscores or dots in the name, since those are not valid in a hostname component (dots separate multiple components, and underscores are invalid).
For a FQDN, consider using a label on your service instead. You'll be able to give that label the full value of the hostname, and query by that label.
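For example, a minimal sketch of the label approach (the service name, label key, and image are illustrative):
docker service create --name web --label public-hostname=Prasanna.abc.in nginx:alpine
# look the service up by the label instead of by name:
docker service ls --filter label=public-hostname=Prasanna.abc.in
docker service inspect --format '{{ index .Spec.Labels "public-hostname" }}' web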

Adding a custom domain name with surge.sh

I'm a newbie in domain names, DNS etc.
I'm using surge.sh for deploying my app. Now I want to add a custom domain that I registered with transIP, but I can't get it working. I set the IP address to 45.55.110.124, as they explain here. Altogether, I entered the following settings:
Name: *
TTL: 1 min
Type: A
Address: 45.55.110.124
And another one, exactly the same but then using name #:
Name: #
TTL: 1 min
Type: A
Address: 45.55.110.124
I created a test page that contains hello domain, inside a simple html file. Now, I deployed the page by moving to the folder that contains the html file and doing: surge ./ mydomain.io.
I waited over 5 minutes and nothing is changing.
Now, my questions are:
What am I doing wrong?
My domain provider suggests that I also use an IPv6 address, but which one should I use for Surge?
Why is there an option to set a TTL longer than 1 minute? Who wants to wait longer before their deploy comes online?
For starters, you want to use a CNAME record instead of an A record if possible. The reason is that their IP address can change out from under you when infrastructure changes, updates, or is redeployed. If possible, remove the A records and create CNAME records pointing to na-west1.surge.sh. instead.
Next, assuming that they want you to point to the same IP as na-west1.surge.sh resolves to, that IP is different from the one in the documentation (possibly for the very reason explained above). You can ping the domain or use the host utility to get the current IP address:
$ host na-west1.surge.sh
na-west1.surge.sh has address 138.197.235.123
Armed with this information, try changing to CNAME records first. If this isn't possible, then use the updated IP address that you get from resolving their CNAME.
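A quick way to check what the records currently resolve to once you've made the change (mydomain.io is the example domain from the question):
host na-west1.surge.sh                 # the target Surge publishes for CNAMEs
dig +short CNAME www.mydomain.io       # should print na-west1.surge.sh. once the CNAME is in place
dig +short A mydomain.io               # what the apex currently points at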

BOSH working with Dynamic IP Addresses

What's the best way to work with dynamic IP addresses with BOSH? Currently we're setting static IP addresses for each machine we want to use, but we only really care that one of those VMs has a static IP address.
Is there a way to get information about other VMs running in the BOSH network from within a BOSH VM? Or just get dynamic information about the deployment from within the VM? Such as which machines are currently running on which IP addresses?
It sounds like the recent introduction of "links" is worth a look for your use case.
Previously, if network communication was required between jobs, release authors had to add job properties to accept other job’s network addresses (e.g. a db_ips property). Operators then had to explicitly assign static IPs or DNS names for each instance group and fill out network address properties
This lets each job either expose or consume connections. For example, a database job exposes its connection:
# Database job spec file.
name: database_job
# ...
provides:
- name: database_conn
  type: conn
  # Links always carry certain information, like its address and AZ.
  # Optionally, the provider can specify other properties in the link.
  properties:
  - port
  - adapter
  - username
  - password
  - name
And an application can consume it:
# Application job spec file.
name: application_job
# ...
consumes:
- name: database_conn
  type: conn
The consuming job can then use these addresses and properties as needed in its templates, e.g.:
#!/bin/bash
# Application's templated control script.
# ...
export DATABASE_HOST="<%= link('database_conn').instances[0].address %>"
export DATABASE_PORT="<%= link('database_conn').p('port') %>"

Configuring FQDN for GCE instance on startup

I am trying to start a google compute engine (GCE) instance with a pre-configured FQDN. We are intending to run an application that is licensed based on the contents of /etc/hosts.
I am starting the instances using the Google Cloud SDK utility - gcloud.
I have tried setting the "hostname" key using the metadata option like so:
gcloud compute instances create mynode (standard opts) --metadata hostname=mynode.example.com
Whenever I log into the developer console, under Compute, Instances, I can see hostname under "Custom metadata". This appears to be a new, custom key; it has no impact on what:
http://metadata.google.internal/computeMetadata/v1/instance/hostname
returns.
I have also tried setting "instance/hostname" like the below, which causes a parsing error when using gcloud.
--metadata instance/hostname=mynode.example.com
I have successfully used the startup-scripts functionality of the metadata server to run a startup script that parses the new internal IP address of the newly created instance and updates /etc/hosts. This appears to work, but doesn't feel "like the Google way".
Can I configure the FQDN (specifically, a domain name, as the instance name is always the hostname) of an instance, during instance creation, using the metaserver functionality?
Try this:
Go to your GCE >> VM instances panel.
Stop your GCE instance.
Click on the instance name.
Edit your instance, adding these values in the Custom metadata fields:
Key field: hostname / Value field: your.server.hostname
Key field: startup-script / Value field: sudo -s hostnamectl set-hostname your.server.hostname
Finally, start your instance and test with a hostnamectl command.
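The same metadata can also be set from the CLI rather than the console; a sketch, with the instance name and hostname value as placeholders:
gcloud compute instances add-metadata mynode \
  --metadata hostname=your.server.hostname,startup-script='hostnamectl set-hostname your.server.hostname'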
regards!
According to this article, 'hostname' is part of the default metadata entries that provide information about your instance, and it is NOT possible to manually edit any of the default metadata pairs. You can also take a look at this video from the Google Team; within the first few minutes it is mentioned that you cannot modify default metadata pairs. As such, it does not seem like you can specify the hostname upon instance creation other than through the use of a start-up script like you've done already. It is also worth mentioning that the hostname you've specified will get deleted and auto-synced by the metadata server upon reboot unless you use a start-up script or something else that re-applies it every time.
If what you're currently doing works for what you're trying to accomplish, it might be the only workaround to your scenario.
Here is a patch for /usr/share/google/set-hostname to set an FQDN on a GCE instance.
https://gist.github.com/yuki-takeichi/3080521322f0f1d159ea6a343e2323e6
Before you use this patch, you must set your desired FQDN in your instance's metadata by specifying the hostname key.
The hostname is set each time the instance's IP address is renewed by dhclient. set-hostname is just a hook script that dhclient executes, passing it the new IP address and internal hostname, and it modifies /etc/hosts. This patch changes the source of the hostname to a query of the instance's metadata from the metadata server.
The original set-hostname script is here:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_config/bin/set_hostname.
Use this patch at your own risk.
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and eliminate the need for workaround scripts.
More info in the docs.
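From inside a freshly created instance you can confirm the custom FQDN took effect (the metadata path is the same one referenced in the question):
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname"
hostname -f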
-- Sirui (Product Manager, Google Compute Engine)
I've looked throughout this site for answers and found a few things that work, but only with a couple of solutions combined. This thread seems the right place to answer.
1) echo example.com > /etc/hostname
2) add -- 127.0.1.1 example.com in /etc/hosts
3) add the hostnamectl set-hostname example.com command to the /etc/rc.local script
4) uncomment /etc/dhcp/dhclient.conf line:
supersede domain-name "example.com";
5) profit.... Seems to stick after each reboot
(Note: example.com stands for your own domain name, e.g. yourfqdndomain.com or yourfqdndomain.org)
Also note this is for Ubuntu or Debian; other Unix-like systems may vary slightly. I've tested this on Ubuntu 16.04. A consolidated sketch of these steps follows below.
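A minimal consolidated sketch of steps 1-4, assuming the default Ubuntu/Debian layout (run as root; example.com is the placeholder domain):
#!/bin/bash
set -e
FQDN="example.com"   # replace with your own FQDN
echo "$FQDN" > /etc/hostname                                                   # step 1
grep -q "127.0.1.1 $FQDN" /etc/hosts || echo "127.0.1.1 $FQDN" >> /etc/hosts   # step 2
grep -q "hostnamectl set-hostname" /etc/rc.local || \
  sed -i "s|^exit 0|hostnamectl set-hostname $FQDN\nexit 0|" /etc/rc.local     # step 3
sed -i "s|^#\?supersede domain-name .*|supersede domain-name \"$FQDN\";|" /etc/dhcp/dhclient.conf  # step 4
hostnamectl set-hostname "$FQDN"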
Regarding the wording "NOT possible to manually edit any of the default metadata pairs": what about the instance-level default metadata "/scheduling"? We can set that manually, as mentioned in this article.

Is there an API for Ganglia?

Hello, I would like to enquire whether there is an API that can be used to retrieve Ganglia stats for all clients from a single Ganglia server.
The Ganglia gmetad component listens on ports 8651 and 8652 by default and replies with XML metric data. The XML data type definition can be seen on GitHub here.
Gmetad needs to be configured to allow XML replies to be sent to specific hosts or all hosts. By default only localhost is allowed. This can be changed in /etc/ganglia/gmetad.conf.
Connecting to port 8651 will get you a default XML report of all metrics as a response.
Port 8652 is the interactive port which allows for customized queries. Gmetad will recognize raw text queries sent to this port, i.e. not HTTP requests.
Here are examples of some queries:
/?filter=summary (returns a summary of the whole grid, i.e. all clusters)
/clusterName (returns raw data of a cluster called "clusterName")
/clusterName/hostName (returns raw data for host "hostName" in cluster "clusterName")
/clusterName?filter=summary (returns a summary of only cluster "clusterName")
The ?filter=summary parameter changes the output to contain the sum of each metric value over all hosts. The number of hosts is also provided for each metric so that the mean value may be calculated.
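For example, raw queries can be sent with a plain TCP client such as nc (localhost and the default ports assumed; the querying host must be allowed via trusted_hosts in gmetad.conf):
nc localhost 8651 > grid.xml                          # full XML dump of all metrics
printf '/?filter=summary\n' | nc localhost 8652       # summary of the whole grid
printf '/clusterName/hostName\n' | nc localhost 8652  # raw data for one host in one cluster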
Yes, there's an API for Ganglia: https://github.com/guardian/ganglia-api
You should check this presentation from 2012 Velocity Europe - it was really a great talk: http://www.guardian.co.uk/info/developer-blog/2012/oct/04/winning-the-metrics-battle
There is also an API you can install from PyPI with 'pip install gangliarest'; it sets up a configurable API backed by a Redis cache and indexer to improve performance.
https://pypi.python.org/pypi/gangliarest