ServiceAddress is empty when using Consul Registrator

[
    {
        "Node": "consul-staging-a-1.org",
        "Address": "10.0.11.221",
        "ServiceID": "mesos-slave-staging-a-1.org:determined_bartik:5000",
        "ServiceName": "service1",
        "ServiceTags": null,
        "ServiceAddress": "",
        "ServicePort": 4003
    },
    {
        "Node": "consul-staging-a-1.org",
        "Address": "10.0.11.221",
        "ServiceID": "mesos-slave-staging-a-1.org:angry_hypatia:5000",
        "ServiceName": "service1",
        "ServiceTags": null,
        "ServiceAddress": "",
        "ServicePort": 4007
    }
]
This is what I get from querying the Consul service API (/v1/catalog/service/service1).
The commands I used to start Registrator and the services:
docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h $HOSTNAME gliderlabs/registrator consul://consul-staging-a-1.org:8500
docker run -d -p 4003:5000 -e "SERVICE_NAME=service1" docker-training/hello-world
docker run -d -p 4007:5000 -e "SERVICE_NAME=service1" docker-training/hello-world
Am I doing any of these steps wrong? How do you get the hostname assigned to the ServiceAddress field?

I had a similar issue and found that before v6, Consul Registrator left ServiceAddress empty, while in v6 (the latest at the moment) it is "0.0.0.0". I tried the "-ip" option, but it did not help; it somehow assigns the container's internal IP address. I found a related issue at:
https://github.com/gliderlabs/registrator/issues/240
In my case I fixed it by binding the container to an IP address, like:
docker run -d -p 10.0.0.3:4003:5000 -e "SERVICE_NAME=service1" docker-training/hello-world
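For reference, this is how the "-ip" option mentioned above is passed (it goes before the registry URI); a minimal sketch, assuming 10.0.0.3 is the host's routable address, though as noted above it did not help in my case:
docker run -d \
    -v /var/run/docker.sock:/tmp/docker.sock \
    -h $HOSTNAME \
    gliderlabs/registrator \
    -ip 10.0.0.3 \
    consul://consul-staging-a-1.org:8500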

How can I print an Ansible vaulted variable that includes a Kubernetes secret from the CLI?

I have an Ansible group_vars directory with the following file within it:
$ cat inventory/group_vars/env1
...
...
ldap_config: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  31636161623166323039356163363432336566356165633232643932623133643764343134613064
  6563346430393264643432636434356334313065653537300a353431376264333463333238383833
  31633664303532356635303336383361386165613431346565373239643431303235323132633331
  3561343765383538340a373436653232326632316133623935333739323165303532353830386532
  39616232633436333238396139323631633966333635393431373565643339313031393031313836
  61306163333539616264353163353535366537356662333833653634393963663838303230386362
  31396431636630393439306663313762313531633130326633383164393938363165333866626438
...
...
This Ansible-encrypted string has a Kubernetes secret encapsulated within it: a Base64 blob that looks something like this:
IyMKIyBIb3N0IERhdGFiYXNlCiMKIyBsb2NhbGhvc3QgaXMgdXNlZCB0byBjb25maWd1cmUgdGhlIGxvb3BiYWNrIGludGVyZmFjZQojIHdoZW4gdGhlIHN5c3RlbSBpcyBib290aW5nLiAgRG8gbm90IGNoYW5nZSB0aGlzIGVudHJ5LgojIwoxMjcuMC4wLjEJbG9jYWxob3N0CjI1NS4yNTUuMjU1LjI1NQlicm9hZGNhc3Rob3N0Cjo6MSAgICAgICAgICAgICBsb2NhbGhvc3QKIyBBZGRlZCBieSBEb2NrZXIgRGVza3RvcAojIFRvIGFsbG93IHRoZSBzYW1lIGt1YmUgY29udGV4dCB0byB3b3JrIG9uIHRoZSBob3N0IGFuZCB0aGUgY29udGFpbmVyOgoxMjcuMC4wLjEga3ViZXJuZXRlcy5kb2NrZXIuaW50ZXJuYWwKIyBFbmQgb2Ygc2VjdGlvbgo=
How can I decrypt this in a single CLI command?
We can use an Ansible ad-hoc command to retrieve the variable of interest, ldap_config. To start, we're going to use an ad-hoc command to retrieve the Ansible-encrypted vault string:
$ ansible -i "localhost," all \
-m debug \
-a 'msg="{{ ldap_config }}"' \
--vault-password-file=~/.vault_pass.txt \
-e@inventory/group_vars/env1
localhost | SUCCESS => {
    "msg": "ABCD......."
}
Make note that we're:
using the debug module and having it print the variable, msg={{ ldap_config }}
giving Ansible the path to the vault password file so it can decrypt encrypted strings
using the notation -e@<...path to file...> to pass the file with the encrypted vault variables
Now we can use Jinja2 filters to do the rest of the parsing:
$ ansible -i "localhost," all \
-m debug \
-a 'msg="{{ ldap_config | b64decode | from_yaml }}"' \
--vault-password-file=~/.vault_pass.txt \
-e@inventory/group_vars/env1
localhost | SUCCESS => {
    "msg": {
        "apiVersion": "v1",
        "bindDN": "uid=readonly,cn=users,cn=accounts,dc=mydom,dc=com",
        "bindPassword": "my secret password to ldap",
        "ca": "",
        "insecure": true,
        "kind": "LDAPSyncConfig",
        "rfc2307": {
            "groupMembershipAttributes": [
                "member"
            ],
            "groupNameAttributes": [
                "cn"
            ],
            "groupUIDAttribute": "dn",
            "groupsQuery": {
                "baseDN": "cn=groups,cn=accounts,dc=mydom,dc=com",
                "derefAliases": "never",
                "filter": "(objectclass=groupOfNames)",
                "scope": "sub"
            },
            "tolerateMemberNotFoundErrors": false,
            "tolerateMemberOutOfScopeErrors": false,
            "userNameAttributes": [
                "uid"
            ],
            "userUIDAttribute": "dn",
            "usersQuery": {
                "baseDN": "cn=users,cn=accounts,dc=mydom,dc=com",
                "derefAliases": "never",
                "scope": "sub"
            }
        },
        "url": "ldap://192.168.1.10:389"
    }
}
NOTE: The portion -a 'msg="{{ ldap_config | b64decode | from_yaml }}"' is what does the heavy lifting in terms of converting from Base64 to YAML.
References
How to run Ansible without hosts file
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#filters-for-formatting-data
Base64 Decode String in jinja
How to decrypt string with ansible-vault 2.3.0
If you need a one-liner that works with any YAML file containing inlined vault vars (not only in the inventory), and if you are ready to install a pip package for that, there is a solution using yq, a YAML processor built on top of jq.
Prerequisite: install yq
pip install yq
Usage
You can get your result with the following command:
yq -r .ldap_config inventory/group_vars/env1 | ansible-vault decrypt
If you need to type your vault pass interactively, don't forget to add the relevant option:
yq -r .ldap_config inventory/group_vars/env1 | ansible-vault --ask-vault-pass decrypt
Note: the -r option to yq is mandatory to get a raw result without the quotation marks around the value.
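To go from the vaulted variable all the way to the decoded Kubernetes secret in one pipeline, you can chain the Base64 decode onto the end; a minimal sketch, assuming the vault password file at ~/.vault_pass.txt from above and GNU base64 on the PATH:
yq -r .ldap_config inventory/group_vars/env1 \
    | ansible-vault decrypt --vault-password-file ~/.vault_pass.txt --output=- \
    | base64 --decode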

How to update the origination_urls when creating a new Trunk using the Twilio API

Thanks to this tutorial: https://www.twilio.com/docs/sip-trunking/api/trunks#action-create I am able to create, read, update, and delete (CRUD) trunks on my Twilio account.
To create a new trunk, I do it like so:
curl -XPOST https://trunking.twilio.com/v1/Trunks \
-d "FriendlyName=MyTrunk" \
-u '{twilio account sid}:{twilio auth token}'
and this is the response I get when creating a new trunk:
{
    "trunks": [
        {
            "sid": "TKfa1e5a85f63bfc475c2c753c0f289932",
            "account_sid": "ACxxx",
            ....
            ....
            "date_updated": "2015-09-02T23:23:11Z",
            "url": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932",
            "links": {
                "origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
                "credential_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/CredentialLists",
                "ip_access_control_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/IpAccessControlLists",
                "phone_numbers": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/PhoneNumbers"
            }
        }
    ],
    "meta": {
        "page": 0,
        "page_size": 50,
        ... more
    }
}
What I am interested in from the response is:
"links": {
    "origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
Now if I perform a GET request on that link, like:
curl -G "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls" -u '{twilio account sid}:{twilio auth token}'
I get back this:
{
    "meta": {
        "page": 0,
        "page_size": 50,
        "first_page_url":
        ....
    },
    "origination_urls": []
}
Now my goal is to update the origination_urls. So, using the same approach as for the trunk itself, I have tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=sip:200@somedomain.com" \
-u '{twilio account sid}:{twilio auth token}'
But that fails. I have also tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=['someUrl']" \
-u '{twilio account sid}:{twilio auth token}'
and that fails too. How can I update the origination_urls?
I was missing the Priority, FriendlyName, SipUrl, Weight, and Enabled parameters in my POST request. I finally got it to work by doing:
curl -XPOST "https://trunking.twilio.com/v1/Trunks/TKfae10...../OriginationUrls" \
    -d "Priority=10" \
    -d "FriendlyName=Org1" \
    -d "Enabled=true" \
    -d "SipUrl=sip:test@domain.com" \
    -d "Weight=10" \
    -u '{twilio account sid}:{twilio auth token}'
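To confirm the origination URL was created, you can repeat the earlier GET against the same endpoint; the origination_urls array in the response should no longer be empty:
curl -G "https://trunking.twilio.com/v1/Trunks/TKfae10...../OriginationUrls" \
    -u '{twilio account sid}:{twilio auth token}'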

Kolla AIO deploy fails: "Hostname has to resolve to IP address" while starting the RabbitMQ container

I'm trying to deploy Kolla in AIO.
I build images using the command:
kolla-build -p default -b ubuntu -t binary
I am deploying it on my local system, running Ubuntu 16.04, with the images I built; I'm not using a local registry.
kolla-ansible precheck runs fine.
kolla-ansible deploy gives me an error while starting RabbitMQ.
My hostname is DESKTOP.
The output of my hosts file:
cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 DESKTOP
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
The error is as follows:
TASK: [rabbitmq | fail msg="Hostname has to resolve to IP address of api_interface"] ***
failed: [localhost] => (item={u'cmd': [u'getent', u'ahostsv4', u'DESKTOP'], u'end': u'2017-02-26 00:45:10.399323', u'stderr': u'', u'stdout': u'127.0.1.1 STREAM DESKTOP\n127.0.1.1 DGRAM \n127.0.1.1 RAW ', u'changed': False, u'rc': 0, 'item': 'localhost', u'warnings': [], u'delta': u'0:00:00.001585', 'invocation': {'module_name': u'command', 'module_complex_args': {}, 'module_args': u'getent ahostsv4 DESKTOP'}, 'stdout_lines': [u'127.0.1.1 STREAM DESKTOP', u'127.0.1.1 DGRAM ', u'127.0.1.1 RAW '], u'start': u'2017-02-26 00:45:10.397738'}) => {"failed": true, "item": {"changed": false, "cmd": ["getent", "ahostsv4", "DESKTOP"], "delta": "0:00:00.001585", "end": "2017-02-26 00:45:10.399323", "invocation": {"module_args": "getent ahostsv4 DESKTOP", "module_complex_args": {}, "module_name": "command"}, "item": "localhost", "rc": 0, "start": "2017-02-26 00:45:10.397738", "stderr": "", "stdout": "127.0.1.1 STREAM DESKTOP\n127.0.1.1 DGRAM \n127.0.1.1 RAW ", "stdout_lines": ["127.0.1.1 STREAM DESKTOP", "127.0.1.1 DGRAM ", "127.0.1.1 RAW "], "warnings": []}}
msg: Hostname has to resolve to IP address of api_interface
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/ravichandran/site.retry
localhost : ok=84 changed=11 unreachable=0 failed=1
Please help. Also let me know if additional information is required.
Make sure your hosts file has a mapping of DESKTOP to 127.0.0.1. Your formatting is a little off, but this appears to be what's missing.
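A minimal sketch of a corrected /etc/hosts along those lines (assuming, per the error message, that your api_interface resolves via 127.0.0.1 in this all-in-one setup; substitute the api_interface IP if it differs):
127.0.0.1 localhost DESKTOP
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters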

Can I use a replica set name to connect via mongo-connector?

I would like to know: is there a way to replicate from one Mongo replica set to another via mongo-connector? As per the Mongo documentation, we can connect two Mongo instances via mongo-connector with a command like the example below, but I would like to pass the replica set name or use a configuration file instead of passing server:port on the command line.
Mongo Connector can replicate from one MongoDB replica set or sharded cluster to another using the Mongo DocManager. The most basic usage is like the following:
mongo-connector -m localhost:27017 -t localhost:37017 -d mongo_doc_manager
I also tried the config.json option by creating the config.json file below, but it failed:
{
    "__comment__": "Configuration options starting with '__' are disabled",
    "__comment__": "To enable them, remove the preceding '__'",
    "mainAddress": "localhost:27017",
    "oplogFile": "C:\Dev\mongodb\mongo-connector\oplog.timestamp",
    "verbosity": 2,
    "continueOnError": false,
    "logging": {
        "type": "file",
        "filename": "C:\Dev\mongodb\mongo-connector\mongo-connector.log",
        "__rotationWhen": "D",
        "__rotationInterval": 1,
        "__rotationBackups": 10,
        "__type": "syslog"
    },
    "docManagers": [
        {
            "docManager": "mongo_doc_manager",
            "targetURL": "localhost:37010",
            "__autoCommitInterval": null
        }
    ]
}
Yes, it's possible to connect to a replica set or a shard server using mongo-connector.
mongo-connector -m <mongodb server hostname>:<replica set port> \
    -t <replication endpoint URL, e.g. http://localhost:8983/solr> \
    -d <name of doc manager, e.g., solr_doc_manager>
You can also pass a connection string to mongo-connector, such as:
mongo-connector -m "mongodb://db1.example.net,db2.example.net:2500/?replicaSet=test&connectTimeoutMS=300000"
To specify a config file you can use:
mongo-connector -c config.json
where config.json is your config file.
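If you want the replica set name in the configuration file rather than on the command line, the connection-string form can go into mainAddress; a minimal sketch, assuming your mongo-connector version accepts a MongoDB URI there:
{
    "mainAddress": "mongodb://db1.example.net:27017,db2.example.net:27017/?replicaSet=test",
    "docManagers": [
        {
            "docManager": "mongo_doc_manager",
            "targetURL": "localhost:37017"
        }
    ]
}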
I was able to resolve my issue by escaping the backslashes ('\\') in my Windows directory paths. Here is my updated config file for reference. Thanks to ShaneHarvey's answer on Not able to use Configuration file for connecting to mongo-connector.
{
    "__comment__": "Configuration options starting with '__' are disabled",
    "__comment__": "To enable them, remove the preceding '__'",
    "mainAddress": "localhost:27017",
    "oplogFile": "C:\\Dev\\mongodb\\mongo-connector\\oplog.timestamp",
    "noDump": false,
    "batchSize": -1,
    "verbosity": 2,
    "continueOnError": false,
    "logging": {
        "type": "file",
        "filename": "C:\\Dev\\mongodb\\mongo-connector\\mongo-connector.log",
        "__format": "%(asctime)s [%(levelname)s] %(name)s:%(lineno)d - %(message)s",
        "__rotationWhen": "D",
        "__rotationInterval": 1,
        "__rotationBackups": 10,
        "__type": "syslog",
        "__host": "localhost:27017"
    },
    "docManagers": [
        {
            "docManager": "mongo_doc_manager",
            "targetURL": "localhost:37017",
            "__autoCommitInterval": null
        }
    ]
}

Centrifuge not using MongoDB?

I just installed Centrifuge (https://centrifuge.readthedocs.org/en/latest/), created a configuration.json file, and placed it in the /var/www/ folder.
When I run Centrifuge with centrifuge config = /var/www/configuration.json, the server starts. However, when I go to the default path http://localhost:8000, the admin panel keeps reporting the structure storage as SQLite.
Here's my configuration.json file:
{
    "password": "admin",
    "cookie_secret": "secret",
    "api_secret": "secret",
    "structure": {
        "storage": "centrifuge.structure.mongodb",
        "settings": {
            "host": "localhost",
            "port": 27017,
            "name": "centrifuge",
            "pool_size": 10
        }
    },
    "state": null
}
I checked, and the MongoDB server is running on port 27017.
It seems you are starting Centrifuge with incorrect command-line arguments. Try copying and pasting this into your terminal:
centrifuge --config=/var/www/configuration.json
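If the admin panel still reports SQLite after restarting with that flag, it is worth confirming that MongoDB is actually reachable on the configured host and port; a quick check, assuming the legacy mongo shell is installed:
mongo --host localhost --port 27017 --eval 'db.stats()'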