consul - connect client to server - service

I'm new to Consul and I'm trying to set up a server-client environment. I have started my server with the following command and configuration:
consul.exe agent -ui -config-dir=P:\Consule\config
The config file ("P:\Consule\config\server.json") looks like this:
{
  "bootstrap": false,
  "server": true,
  "datacenter": "MyServices",
  "data_dir": "P:\\Consule\\data",
  "log_level": "INFO"
}
Output when I start Consul from the command line with the above command:
==> Starting Consul agent...
==> Consul agent running!
Version: 'v0.8.3'
Node ID: '1a244456-e725-44be-0549-33603ea7087d'
Node name: 'MYCOMPUTERNAMEA'
Datacenter: 'myservices'
Server: true (bootstrap: false)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>
Now, on another computer in my domain, I try to run a Consul client with the following command line and config file:
consul.exe agent -config-dir C:\Consul -bind=127.0.0.1
Config file ("C:\Consul\client.json")
{
  "server": false,
  "datacenter": "MyServices",
  "data_dir": "C:\\TEMP",
  "log_level": "INFO",
  "start_join": ["MYCOMPUTERNAMEA"]
}
But I always get the following output/error message:
==> Starting Consul agent...
==> Joining cluster...
==> 1 error(s) occurred:
* Failed to join <IP_OF_MYCOMPUTERNAMEA>: dial tcp <IP_OF_MYCOMPUTERNAMEA>:8301: connectex: No connection could be made because the target machine actively refused it.
Does anyone know what I'm doing wrong?
Thanks and best regards

I suppose the reason is that your server is listening only on the 127.0.0.1 IP address, which is the localhost address and is reachable only from the server machine itself. This can be seen here:
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
You have to configure your server to listen on all network interfaces, or on a specific interface that is reachable from the other machine.
Try running it with the client address option set to 0.0.0.0 (or a specific IP) and the advertise option set to an address the other machine can reach; see the Consul documentation on the -client and -advertise agent options.
You might also have to remove -bind=127.0.0.1 from the client's command line, since the client has to be reachable from the server as well (gossip traffic flows in both directions).
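As a minimal sketch, assuming the server machine's LAN address is 192.168.0.10 (a hypothetical value), the server could be started so that both the HTTP interface and the cluster ports listen on a reachable address:

consul.exe agent -ui -config-dir=P:\Consule\config -client=0.0.0.0 -bind=192.168.0.10

Or, equivalently, in P:\Consule\config\server.json:

{
  "bootstrap": false,
  "server": true,
  "datacenter": "MyServices",
  "data_dir": "P:\\Consule\\data",
  "log_level": "INFO",
  "client_addr": "0.0.0.0",
  "bind_addr": "192.168.0.10"
}

After restarting, the startup banner should show the new addresses instead of 127.0.0.1, and the client's start_join to MYCOMPUTERNAMEA should then be able to reach port 8301.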


Can I deploy FIWARE IDAS communicating with an Orion Context Broker instance running on a different virtual machine from IDAS?

I want to deploy an IDAS (FIWARE Backend Device Manager) i.e. IOTA instance that will communicate and send data to an already existing Orion Context Broker instance running in a different virtual machine from the one hosting IDAS. Is that possible? Or is it necessary for the two services to be in the same virtual machine?
I am using IoTAgent-JSON (I think it's version 1.6.2) for MQTT transport.
This is the config.js file (I have already replaced the contextBroker host with the host of my Orion Context Broker, as you can see, it was "localhost" before):
var config = {};

config.mqtt = {
  host: 'localhost',
  port: 1883,
  thinkingThingsPlugin: true
};

config.iota = {
  logLevel: 'DEBUG',
  timestamp: true,
  contextBroker: {
    host: '147.27.60.182',
    port: '1026'
  },
  server: {
    port: 4041
  },
  deviceRegistry: {
    type: 'mongodb'
  },
  mongodb: {
    host: 'localhost',
    port: '27017',
    db: 'iotagentjson'
  },
  types: {},
  service: 'howtoService',
  subservice: '/howto',
  // ... (rest of the file omitted in the question)
};
IoTA endpoints:
http://147.27.60.202:5351/iot/services
(Fiware-Service: openiot, Fiware-ServicePath: /, X-Auth-Token: [TOKEN])
http://147.27.60.202:4041/iot/devices/
(Fiware-Service: tourguide, Fiware-ServicePath: /)
My Orion Context Broker (where I want to send data) endpoint:
http://147.27.60.182:1026/v2
P.S.: I have tried to change the mongodb host, too.
Image: how the service runs
Yes, you can have Orion and one or more IOTAs running in different virtual machines. The only requirement is mutual interconnection; that is, the IOTA needs access to the Orion endpoint and Orion needs access to the IOTA endpoint.
Check the contextBroker.url (or, alternatively, contextBroker.host and contextBroker.port) and providerUrl configuration parameters in the IOTA documentation.
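As a sketch of the relevant part of config.js, assuming the IOTA runs on the 147.27.60.202 host shown in the question's endpoints (that value is an assumption here):

// Orion runs on a different VM; point the agent at its endpoint.
config.iota.contextBroker = {
  host: '147.27.60.182',   // Orion VM
  port: '1026'
};

// providerUrl is the URL at which Orion can reach back to this IOTA
// (used when Orion forwards queries/commands for registered devices).
config.iota.providerUrl = 'http://147.27.60.202:4041';

The port in providerUrl should match config.iota.server.port (4041 in the question's config).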

MongoDB replicaset TLS/SSL

I have launched a MongoDB 4 replica set on 3 servers by private IP successfully. Now I want to bind an additional IP, which requires enabling TLS/SSL.
I have created a PEMKeyFile and a CAFile, copied these files to all 3 servers, and added the lines below to the mongod.conf file on all 3 servers.
# network interfaces
net:
  port: 27017
  bindIp: 10.10.20.21,5.22.25.45  # example private IP and one example public IP
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/mongo/mongo.pem
    PEMKeyPassword: MyPassword
    CAFile: /opt/mongo/CA.pem
    allowInvalidCertificates: true
    allowInvalidHostnames: true
security:
  keyFile: /opt/mongo/mongo-keyfile
I got this error:
E STORAGE [initandlisten] Failed to set up listener: SocketException: Cannot assign requested address
I CONTROL [initandlisten] now exiting
I CONTROL [initandlisten] shutting down with code:48
What is wrong with it? How can I fix it?
Should I see both of these IPs?
Yes, of course.
bindIp tells the mongod service which local network interfaces to listen on. These are the local system's interfaces, not the clients' addresses, so the "Cannot assign requested address" error means that one of the listed IPs (here 5.22.25.45) is not assigned to any interface on that server. As soon as mongod binds to an interface, clients from anywhere can connect to that IP:
binding to 10.10.20.XXX: the private class A network interface allows clients to connect from any 10.XXX.XXX.XXX IP within the same network
binding to 5.22.25.XXX: the public network interface allows clients to connect from anywhere on the internet
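To check which addresses are actually assigned to the machine, and are therefore valid values for bindIp, you can list the local interfaces:

ip addr show    # modern Linux
ifconfig -a     # older systems

Any address passed to bindIp must appear in this list.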
If you want to restrict access to mongodb and allow connections only from specific IPs/networks, you need to enable authentication and apply the restriction to the user or group: https://docs.mongodb.com/manual/reference/method/db.createUser/#authentication-restrictions.
E.g.
use admin
db.createUser(
  {
    user: "remote-client",
    pwd: "password",
    roles: [ { role: "readWrite", db: "reporting" } ],
    authenticationRestrictions: [ {
      clientSource: ["5.22.25.45"]
    } ]
  }
)
This will allow mongo -u remote-client -p password to connect from IP 5.22.25.45 only, and permit reads and writes to/from the "reporting" database.
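Since the deployment uses mode: requireSSL, clients must also connect with TLS. A sketch using the MongoDB 4.0 shell's SSL options, assuming the CA.pem from the question is available on the client machine:

mongo --host 5.22.25.45 --ssl --sslCAFile CA.pem \
      --sslAllowInvalidHostnames \
      -u remote-client -p password --authenticationDatabase admin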

Gitlab behind NAT on an alternative port?

This is a fresh install on Ubuntu 16.04.
I have been able to change the port by editing the "/etc/gitlab/gitlab.rb" file.
Changes:
external_url 'http://superawesomedomain.com:2345'
nginx['listen_port'] = 2345
nginx['proxy_set_headers'] = {
  "Host" => "$http_host",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "https",
  "X-Forwarded-Ssl" => "on"
}
When I try to access GitLab from the browser, I get a 502 error: "Whoops, GitLab is taking too much time to respond."
And this in the logs:
==> /var/log/gitlab/nginx/gitlab_error.log <==
2016/05/04 00:43:53 [error] 1599#0: *14 connect() to
unix:/var/opt/gitlab/gitlab-workhorse/socket failed (111: Connection
refused) while connecting to upstream, client: xxx.xxx.xxx.xxx, server:
superawesomedomain.com, request: "GET /favicon.ico HTTP/1.1", upstream:
"http://unix:/var/opt/gitlab/gitlab-workhorse/socket:/favicon.ico",
host: "superawesomedomain.com:2345", referrer:
"http://superawesomedomain.com:2345/"
The only ports configured behind NAT to work on this machine are 2345 and 8080.
What am I missing? Ultimately I would prefer that it be https://superawesomedomain.com:2345/
I was able to get this working by using the IP of the server instead of the domain name in the config:
external_url 'http://192.168.0.20:2345'
After doing that, GitLab was accessible at the http://superawesomedomain.com:2345/ address. I am not sure why this works, but it seems to be the only way to get it working with NAT and forwarded ports.
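For reference, changes to /etc/gitlab/gitlab.rb only take effect after a reconfigure, so the full sequence (assuming the Omnibus package) looks roughly like this:

# edit the config, e.g. external_url 'http://192.168.0.20:2345'
sudo editor /etc/gitlab/gitlab.rb
# regenerate and restart the bundled services, then verify that
# gitlab-workhorse is running (its socket was the one refusing connections)
sudo gitlab-ctl reconfigure
sudo gitlab-ctl status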

I can't connect from the outside to the mongo-express

I am using mongo-express, installed on AWS EC2, and started it:
$ node app
Mongo Express server listening on port 8081 at localhost
Database connected
Connecting to db...
Database db connected
However, it is not possible to connect from the browser to port 8081.
I can download mongo-express's index.html using the wget command on the EC2 instance itself:
$ wget http://admin:pass@localhost:8081
--2016-02-22 02:22:25-- http://admin:*password*@localhost:8081/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8081... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Authentication selected: Basic realm="Authorization Required"
Reusing existing connection to localhost:8081.
HTTP request sent, awaiting response... 200 OK
Length: 9319 (9.1K) [text/html]
Saving to: 'index.html'
index.html   0%[                    ]      0  --.-KB/s
GET / 200 218.468 ms - 9319
index.html 100%[===================>]   9.10K  --.-KB/s    in 0.04s
2016-02-22 02:22:26 (236 KB/s) - 'index.html' saved [9319/9319]
By the way, port 8081 in the EC2 security group is open to my IP.
The following setting in config.js was the cause:
site: {
  // baseUrl: the URL that mongo express will be located at - Remember to add the forward slash at the start and end!
  baseUrl: '/',
  cookieKeyName: 'mongo-express',
  cookieSecret: process.env.ME_CONFIG_SITE_COOKIESECRET || 'cookiesecret',
  host: process.env.VCAP_APP_HOST || 'localhost',
  port: process.env.VCAP_APP_PORT || 8081,
  requestSizeLimit: process.env.ME_CONFIG_REQUEST_SIZE || '50mb',
  sessionSecret: process.env.ME_CONFIG_SITE_SESSIONSECRET || 'sessionsecret',
  sslCert: process.env.ME_CONFIG_SITE_SSL_CRT_PATH || '',
  sslEnabled: process.env.ME_CONFIG_SITE_SSL_ENABLED || false,
  sslKey: process.env.ME_CONFIG_SITE_SSL_KEY_PATH || '',
},
After changing the value of host to "0.0.0.0", I can now connect from the browser to mongo-express.
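The change described above, shown in place (or set the VCAP_APP_HOST environment variable instead), so that mongo-express binds to all interfaces rather than only the loopback:

host: process.env.VCAP_APP_HOST || '0.0.0.0',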
In my case, I had this issue because I wanted to expose my container on another port (4301), but mongo-express was still listening on 8081. To fix it, I did indeed have to specify VCAP_APP_HOST and VCAP_APP_PORT.
You can specify them directly on the docker run command line, like:
docker run --network YOUR_NETWORK \
  --name YOUR_MONGO_EXPRESS_CONTAINER_NAME \
  -e ME_CONFIG_MONGODB_SERVER=YOUR_MONGO_SERVER_IP \
  -e VCAP_APP_HOST=0.0.0.0 \
  -e VCAP_APP_PORT=4301 \
  -p 4301:4301 \
  mongo-express

Haproxy 1.6.2 not recognizing resolvers section

As a test, I have a local BIND instance running:
>netstat -ant | grep LISTEN
tcp 0 0 10.72.186.23:53 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN
...
>nslookup mysubdomain.example.com 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: mysubdomain.example.com
Address: nn.nn.nn.251
Name: mysubdomain.example.com
Address: nn.nn.nn.249
Name: mysubdomain.example.com
Address: nn.nn.nn.201
Name: mysubdomain.example.com
Address: nn.nn.nn.138
I'm running haproxy 1.6.2 on the same host, with a resolvers section:
resolvers dns
  nameserver dns1 127.0.0.1:53
  nameserver dns2 10.72.186.23:53
  hold valid 10s
It doesn't reject the resolvers section, but it doesn't seem to be using it either. It doesn't show in the stats page, and attempting to add this server line:
server mysubdomain-dev mysubdomain.example.com
causes this error:
>service haproxy restart
* Restarting haproxy haproxy
[ALERT] 322/171813 (10166) : parsing [/etc/haproxy/haproxy.cfg:77] : 'server mysubdomain-dev' : invalid address: 'mysubdomain.example.com' in 'mysubdomain.example.com'
[ALERT] 322/165300 (29751) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 322/165300 (29751) : Fatal errors found in configuration.
The haproxy doc https://cbonte.github.io/haproxy-dconv/configuration-1.6.html indicates this should work.
server <name> <address>[:[port]] [param*]
...
<address> is the IPv4 or IPv6 address of the server. Alternatively, a
resolvable hostname is supported, but this name will be resolved
during start-up. Address "0.0.0.0" or "*" has a special meaning.
Is there some other piece that needs to be added to the haproxy.cfg that activates the resolvers section?
When HAProxy first starts, it attempts to resolve the hostnames of all servers in all backends in order to fill the server structures. During this first startup phase, HAProxy uses the OS resolver, i.e. generally the nameservers defined in your /etc/resolv.conf file.
Only later, when the servers' IP addresses are updated during health checks, does HAProxy use its internal resolvers configuration and its internal DNS resolver.
Judging from your error message, your host itself cannot resolve the mysubdomain.example.com hostname. HAProxy will only be able to start if it can resolve the hostname without an explicitly named nameserver. This can be verified with e.g.
dig mysubdomain.example.com
It might be that you are not specifying the resolvers to use for that server. Change
server mysubdomain-dev mysubdomain.example.com
to
server mysubdomain-dev mysubdomain.example.com resolvers dns
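A minimal sketch combining both points (the backend name and check interval below are hypothetical; in HAProxy 1.6, runtime DNS resolution is driven by health checks, so the server line needs check as well as resolvers):

resolvers dns
  nameserver dns1 127.0.0.1:53
  nameserver dns2 10.72.186.23:53
  hold valid 10s

backend mysubdomain-backend
  # 'check' enables health checks, which are what trigger runtime DNS updates;
  # 'resolvers dns' ties this server to the resolvers section above
  server mysubdomain-dev mysubdomain.example.com:80 check inter 5s resolvers dns resolve-prefer ipv4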