HAProxy 1.8 - Passing socket connections during HAProxy soft reload

I am using the kubernetes load-balancer (here the HAProxy configuration is rewritten every 10s and HAProxy is restarted). Since I want to pass the socket connections while reloading HAProxy, I changed the Dockerfile of HAProxy so that it uses the HAProxy 1.8-dev2 version. The image used is haproxytech/haproxy-ubuntu:1.8-dev2. I also added the following line under the global section of the template.cfg file (the template from which the HAProxy configuration is written):
stats socket /var/run/haproxy/admin.sock mode 660 level admin expose-fd listeners
I also changed the reload command in the haproxy_reload file as follows:
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -x /var/run/haproxy/admin.sock -sf $(cat /var/run/haproxy.pid)
Once I run the docker image (kubectl create -f rc.yaml --namespace load-balancer), I get the following error:
W1027 07:13:37.922565 5 service_loadbalancer.go:687] Requeuing kube-system/kube-dns because of error: error restarting haproxy -- [WARNING] 299/071337 (21) : We didn't get the expected number of sockets (expecting 1347703880 got 0)
[ALERT] 299/071337 (21) : Failed to get the sockets from the old process!
: exit status 1
FYI:
I commented out the stats socket line in the template.cfg file and ran the docker image to verify whether the restart command identifies the socket. The same error occurred, so it seems the soft restart command doesn't find the stats socket created by HAProxy.
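For anyone debugging the same thing, here is a minimal reload wrapper, assuming the socket and pid paths from the config above, that falls back to a plain soft reload when the stats socket is missing. This is a sketch of the idea, not a definitive fix:
#!/bin/sh
SOCK=/var/run/haproxy/admin.sock
PIDFILE=/var/run/haproxy.pid
if [ -S "$SOCK" ]; then
    # Seamless reload: fetch the listener FDs from the old process over the stats socket.
    haproxy -f /etc/haproxy/haproxy.cfg -p "$PIDFILE" -x "$SOCK" -sf $(cat "$PIDFILE")
else
    # First start or missing socket: plain soft reload without FD passing.
    haproxy -f /etc/haproxy/haproxy.cfg -p "$PIDFILE" -sf $(cat "$PIDFILE" 2>/dev/null)
fi
If the else branch is the one that always runs, the old process never created the socket, which would explain the "Failed to get the sockets from the old process!" alert.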


Failed to infer CIDR network for mon ip

I followed the instructions to bootstrap a new Ceph cluster (I'm new to Ceph) and got the following error:
sudo cephadm bootstrap --mon-ip <mon-ip>
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/podman) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: e08484be-72c1-11ea-a13e-0050563f093a
INFO:cephadm:Verifying IP *<mon-ip>* port 3300 ...
INFO:cephadm:Verifying IP *<mon-ip>* port 6789 ...
ERROR: Failed to infer CIDR network for mon ip *<mon-ip>*; pass --skip-mon-network to configure it later
What does it mean? How do I fix it?
cephadm is still fairly new. I tracked this a few days ago in:
https://tracker.ceph.com/issues/44828
Please run
ceph config set mon public_network <mon_network>
after the bootstrap has finished.
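Putting that together, a sketch of the full sequence (the IP and CIDR below are placeholders for your environment):
sudo cephadm bootstrap --mon-ip 192.168.1.10 --skip-mon-network
ceph config set mon public_network 192.168.1.0/24
The --skip-mon-network flag is the one the error message itself suggests; it defers the network configuration so the bootstrap can complete.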
Is this the exact command you ran?
sudo cephadm bootstrap --mon-ip *<mon-ip>*
If so, you actually need to replace *<mon-ip>* with the actual IP address that you want the monitor daemon to listen on.
For future reference, on that page, any command you see that has a variable surrounded by asterisks is something you would need to replace with an address/host/hostname etc. that applies to your environment.

Cloud Code for Visual Studio Code: errors on Cloud Code: Deploy

I've been trying to set up Cloud Code with VS Code and I've been running into problems when starting the deploy process with Cloud Code: Deploy.
I've tried deploying the samples, python-hello-world-1 as well as go-hello-world-1, to my kubernetes cluster on GKE, but I always end up getting errors when the deploy process starts downloading packages:
Go Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 49869 --filename skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 49869
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- go-hello-world -> gcr.io/abx-lernende/go-hello-world:latest
Checking cache...
- go-hello-world: Not found. Building
Building [go-hello-world]...
Sending build context to Docker daemon 57.86kB
Step 1/8 : FROM golang:1.13
---> 6586e3d10e96
Step 2/8 : RUN go get -u -v github.com/go-delve/delve/cmd/dlv
---> Running in b75ce8e5dae9
github.com/go-delve/delve (download)
# cd .; git clone -- https://github.com/go-delve/delve /go/src/github.com/go-delve/delve
Cloning into '/go/src/github.com/go-delve/delve'...
fatal: unable to access 'https://github.com/go-delve/delve/': Failed to connect to github.com port 443: Connection refused
package github.com/go-delve/delve/cmd/dlv: exit status 128
failed to build: build failed: building [go-hello-world]: build artifact: unable to stream build output: The command '/bin/sh -c go get -u -v github.com/go-delve/delve/cmd/dlv' returned a non-zero code: 1
Exited with code 1.
Python Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 50185 --filename
skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 50185
Skaffold &{Version:v1.3.1 ConfigVersion:skaffold/v2alpha3 GitVersion: GitCommit:6ba887a42438d1da578a005cf550e618fee6dfb8 GitTreeState:clean BuildDate:2020-01-31T19:55:18Z GoVersion:go1.13.4 Compiler:gc Platform:windows/amd64}
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- python-hello-world -> Tags generated in 0s
gcr.io/abx-lernende/python-hello-world:latest
Checking cache...
- python-hello-world: Cache check complete in 6.0001ms
Not found. Building
Building [python-hello-world]...
Sending build context to Docker daemon 4.608kB
Step 1/7 : FROM python:3.8
---> efdecc2e377a
Step 2/7 : WORKDIR /app
---> Using cache
---> a131b81cad66
Step 3/7 : COPY requirements.txt .
---> Using cache
---> 4625ef1862bd
Step 4/7 : RUN pip install --trusted-host pypi.python.org -r requirements.txt
---> Running in 4da23a158ae3
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f17ba9c9d60>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/flask/
I'm assuming this is due to me being behind a corporate proxy. As countermeasures I have explicitly configured VS Code, Git, pip, Go, and the Google Cloud SDK to use said proxy. On top of that I set the Windows ENV variables for the proxy, sadly without success.
Thanks!
You can configure docker to pass through proxy information into the containers by adding something like the following to your ~/.docker/config.json:
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.12:3128",
      "httpsProxy": "http://192.168.1.12:3128"
    }
  }
}
Docker will set the HTTP_PROXY/HTTPS_PROXY environment variables within the container, which are picked up by many tools.
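To verify that the variables actually reach a container, a quick sanity check (a sketch, assuming a standard Docker setup) is:
docker run --rm busybox sh -c 'env | grep -i proxy'
If the proxy variables show up there, the docker build steps that skaffold runs should receive them as well, which is where the go get and pip install failures above are happening.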

Permission denied in omkafka module of rsyslog

I am trying to publish messages from rsyslog to Kafka on a remote machine using the omkafka module.
My omkafka action is configured as:
if $HOSTNAME == 'localhost' then {
    action(type="omkafka"
           name="log_kafka"
           broker="192.168.100.50:9092"
           topic="rsyslog_kafka"
           errorfile="/var/log/omkafka/log_kafka_failures.log"
           template="hostipFormat"
    )
}
My kafka instance is running fine and I am able to publish data using kafka-producer.bat file from another windows machine.
But when I start my rsyslog service, I get the following error:
Feb 17 16:42:01 localhost rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="1764" x-info="http://www.rsyslog.com"] start
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 192.168.100.50:9092/bootstrap: Failed to connect to broker at 192.168.100.50:9092: Permission denied [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 1/1 brokers are down [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 192.168.100.50:9092/bootstrap: Failed to connect to broker at 192.168.100.50:9092: Permission denied [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 1/1 brokers are down [v8.24.0 try http://www.rsyslog.com/e/2422 ]
I am not sure whether this is related to omkafka or librdkafka. Any help is appreciated.
I had the same issue. Instead of disabling SELinux, and thus opening yourself up to a world of hurt, I used audit2why, which tells you exactly why something is being denied in the AVC denials. It is helpful as well in that it can tell you just what you need to do to fix the problem.
audit2why reads /var/log/audit/audit.log and then tells you why something is being denied, and sometimes what you need to do to fix the issue. In my case it was:
type=AVC msg=audit(1492149030.280:296487): avc: denied { name_connect } for pid=2277 comm=72733A6D61696E20513A526567 dest=9092 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
Was caused by:
The boolean nis_enabled was set incorrectly.
Description:
Allow nis to enabled
Allow access by executing:
# setsebool -P nis_enabled 1
The final commented line there is exactly what needs to be typed to allow this to execute, and it can be done without disabling proper security controls. After running sudo setsebool -P nis_enabled 1, I restarted rsyslog and Kafka was able to consume my messages just fine.
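For reference, feeding the denials to audit2why looks roughly like this (a sketch, assuming the default audit log path):
# pipe the AVC denial records into audit2why, which reads them on stdin
sudo grep avc /var/log/audit/audit.log | audit2why
That produces the explanation and suggested fix shown above.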
I found the reason for this issue: it happened because of SELinux on CentOS. Once I disabled SELinux, the configuration worked fine.
This is definitely related to SELinux, but because disabling SELinux is not the best choice, I corrected the problem with this:
sudo semanage port -d -t unreserved_port_t -p tcp 9092
sudo semanage port -a -t http_port_t -p tcp 9092
And then restart rsyslogd.
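To confirm the new label took effect, you can list the SELinux port assignments afterwards (a quick check, not strictly required):
sudo semanage port -l | grep 9092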

Orion Context Broker: reset by peer when calling updateContext

Just after installing the Context Broker I tried to test it by creating a new entity as described in the section Entity Creation of https://forge.fi-ware.org/plugins/mediawiki/wiki/fiware/index.php/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_User_and_Programmers_Guide, but I'm getting a "connection reset by peer" error.
The log doesn't seem to say anything, even after I raised the trace level with the -t 0-255 option.
Additional info:
$ contextBroker --version
0.14.0
$ ps aux | grep context
/usr/bin/contextBroker -port 1026 -logDir /var/log/contextBroker -pidpath /var/log/contextBroker/contextBroker.pid -dbhost localhost -db orion -t 0-255
The issue was fixed by updating the Context Broker to a newer version, in my case to version 0.14.1, which can be found here: http://repositories.testbed.fi-ware.org/repo/rpm/x86_64/.
I can't say exactly what was wrong, since after updating everything worked fine.

python-memcache / memcached -- installed on a CentOS VirtualBox VM but get/set never seem to work

I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (supposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (supposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is, what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the shared memcache area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obviously named one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to find a working installation, so for now I'll see if I can get the above stuff working first.
I discovered that if I run straight from the Python console I get a bit more output when I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try to use telnet, per the example, to connect to the port, I get connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/in.telnetd
    log_on_failure  += USERID
    disable         = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran both cases (iptables started and stopped) but it had no effect, so I am out of ideas. What do I need to do so that the port will be allowed, if that is the problem?
Or is there a memcached service that needs to be running that opens up the port?
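A quick way to check whether anything is listening on that port at all, assuming net-tools is available:
netstat -lnt | grep 11211
No output means no daemon is bound to 11211, which is consistent with the connection-refused errors above.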
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently the listen option. Do this for steps 2 & 3, since I'm not certain which file is actually used at runtime (see the sketch at the end of this answer).
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
I tried opening a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant to my case. Perhaps I would have to supply a ?key=blah or something.
5) http://127.0.0.1:11211
6) Now it should be ready to go. If you run the test shown with the following, it should work; at least it did for me. help(memcache) displays a simple example program; just paste that in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)
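For reference, a minimal /etc/sysconfig/memcached along the lines of step 2 might look like the following (the values other than OPTIONS are the CentOS defaults; the listen address is the one added above):
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"
After editing, service memcached restart should leave memcached listening on 127.0.0.1:11211, and the netstat check above should then show the port.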