I am trying to publish messages from rsyslog to Kafka on a remote machine using the omkafka module.
My omkafka action is configured as:
if $HOSTNAME == 'localhost' then {
    action(type="omkafka"
           name="log_kafka"
           broker="192.168.100.50:9092"
           topic="rsyslog_kafka"
           errorfile="/var/log/omkafka/log_kafka_failures.log"
           template="hostipFormat"
    )
}
My Kafka instance is running fine and I am able to publish data using the kafka-producer.bat file from another Windows machine.
But when I start my rsyslog service, I get the following error:
Feb 17 16:42:01 localhost rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="1764" x-info="http://www.rsyslog.com"] start
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 192.168.100.50:9092/bootstrap: Failed to connect to broker at 192.168.100.50:9092: Permission denied [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 1/1 brokers are down [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 192.168.100.50:9092/bootstrap: Failed to connect to broker at 192.168.100.50:9092: Permission denied [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 1/1 brokers are down [v8.24.0 try http://www.rsyslog.com/e/2422 ]
I am not sure whether this is related to omkafka or librdkafka.
Need help.
I had the same issue. Instead of disabling SELinux, and thus opening yourself up to a world of hurt, I used audit2why, which tells you exactly why something is being denied and can often tell you just what you need to do to fix the problem.
audit2why reads /var/log/audit/audit.log and explains why something is being denied, and sometimes what you need to do to fix it. In my case it reported:
type=AVC msg=audit(1492149030.280:296487): avc: denied { name_connect } for pid=2277 comm=72733A6D61696E20513A526567 dest=9092 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
Was caused by:
The boolean nis_enabled was set incorrectly.
Description:
Allow nis to enabled
Allow access by executing:
# setsebool -P nis_enabled 1
The final commented line there is exactly what needs to be typed to allow this to execute, and it can be done without disabling proper security controls. After sudo setsebool -P nis_enabled 1 was executed, I restarted rsyslog and Kafka was able to consume my messages just fine.
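For reference, this is roughly how to run that diagnosis (audit2why is provided by the policycoreutils-python package on CentOS; the grep is just a filter to narrow the output to rsyslog's SELinux domain):
# Feed only the rsyslog-related denials to audit2why
grep syslogd_t /var/log/audit/audit.log | audit2why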
I found the reason for this issue: it was caused by SELinux on CentOS. Once I disabled SELinux, the configuration worked fine.
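If you want to confirm SELinux is the culprit before turning it off permanently, a quick check along these lines works (setenforce 0 only lasts until reboot):
getenforce                # prints Enforcing, Permissive, or Disabled
sudo setenforce 0         # switch to permissive temporarily, then retest rsyslog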
This is definitely related to SELinux, but because disabling SELinux is not the best choice, I corrected the problem with this:
sudo semanage port -d -t unreserved_port_t -p tcp 9092
sudo semanage port -a -t http_port_t -p tcp 9092
And then restart rsyslogd.
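To verify the relabeling took effect, listing the port contexts should now show 9092 under http_port_t:
sudo semanage port -l | grep 9092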
I followed the instructions to bootstrap a new Ceph cluster (I'm new to Ceph).
I got the following error:
sudo cephadm bootstrap --mon-ip <mon-ip>
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/podman) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: e08484be-72c1-11ea-a13e-0050563f093a
INFO:cephadm:Verifying IP *<mon-ip>* port 3300 ...
INFO:cephadm:Verifying IP *<mon-ip>* port 6789 ...
ERROR: Failed to infer CIDR network for mon ip *<mon-ip>*; pass --skip-mon-network to configure it later
What does it mean? How do I fix it?
cephadm is still fairly new. I tracked this a few days ago in:
https://tracker.ceph.com/issues/44828
Please run
ceph config set mon public_network <mon_network>
after the bootstrap has finished.
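In other words, something like this, where the address and subnet are placeholders for your environment (--skip-mon-network is the workaround the error message itself suggests):
sudo cephadm bootstrap --mon-ip 192.168.1.10 --skip-mon-network
ceph config set mon public_network 192.168.1.0/24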
Is this the exact command you ran?
sudo cephadm bootstrap --mon-ip *<mon-ip>*
If so, you actually need to replace *<mon-ip>* with the actual IP address that you want the monitor daemon to listen on.
For future reference: on that page, any command with a variable surrounded by asterisks is something you need to replace with an address/host/hostname etc. that applies to your environment.
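For example, if the monitor host's address were 10.0.0.5 (a made-up value), the command would be:
sudo cephadm bootstrap --mon-ip 10.0.0.5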
I am using the Kubernetes load-balancer (here the HAProxy configuration is rewritten every 10s and the process restarted). Since I want to pass the socket connection while reloading HAProxy, I changed the Dockerfile of HAProxy so that it uses the HAProxy 1.8-dev2 version. The image used is haproxytech/haproxy-ubuntu:1.8-dev2. I also added the following line under the global section of the template.cfg file (this is the template the HAProxy configuration is written from):
stats socket /var/run/haproxy/admin.sock mode 660 level admin expose-fd listeners
I also changed the reload command in the haproxy_reload file as follows:
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -x /var/run/haproxy/admin.sock -sf $(cat /var/run/haproxy.pid)
Once I run the Docker image (kubectl create -f rc.yaml --namespace load-balancer), I get the following error:
W1027 07:13:37.922565 5 service_loadbalancer.go:687] Requeuing kube-system/kube-dns because of error: error restarting haproxy -- [WARNING] 299/071337 (21) : We didn't get the expected number of sockets (expecting 1347703880 got 0)
[ALERT] 299/071337 (21) : Failed to get the sockets from the old process!
: exit status 1
FYI: I commented out the stats socket line in the template.cfg file and ran the Docker image to verify whether the restart command identifies the socket. The same error occurred. It seems the soft-restart command doesn't pick up the stats socket created by HAProxy.
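In case it helps anyone debugging the same thing: before blaming the reload command, it's worth confirming inside the container that the socket exists and the running process answers on it (this assumes socat is available in the image; "show info" is a standard stats-socket command):
# Does the socket exist where -x expects it?
ls -l /var/run/haproxy/admin.sock
# Does the running process answer on it?
echo "show info" | socat stdio /var/run/haproxy/admin.sock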
Chef Server version: chef-server 12.11.1
knife softlayer server create \
  --image-id ${image_id} \
  --ssh-keys ${ssh_keys} \
  --hostname $node_name \
  --network-interface-speed 100 \
  --domain $domain_name \
  --cores ${cores} \
  --ram ${ram} \
  --datacenter ${datacenter} \
  --node-name $node_name \
  --vlan $public_vlan \
  --private-vlan $private_vlan \
  --use-private-network \
  -x root \
  -i $USER_HOME/.ssh/id_rsa -VV
Client Output
Launching SoftLayer VM, this may take a few minutes.
............................................................................
............................................................................
................
After 6 minutes it throws this error:
ERROR: Excon::Error::Socket: Connection reset by peer (Errno::ECONNRESET)
ERROR: Excon::Error::Socket: Connection reset by peer (Errno::ECONNRESET)
The SoftLayer API has an issue where the server sometimes resets the connection to the client. They are currently working on a fix, but there is no ETA; the issue showed up a long time ago. I can only recommend catching the error and trying again.
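A rough retry wrapper is all I mean, something like this (the retry count and delay are arbitrary choices; note that a failed create may still have provisioned the VM on the SoftLayer side, so check before blindly retrying):
for attempt in 1 2 3; do
  knife softlayer server create ... && break   # same arguments as above
  echo "attempt $attempt hit a connection reset; retrying in 30s" >&2
  sleep 30
done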
Just after installing the Context Broker, I tried to test it by creating a new entity as described in the Entity Creation section of https://forge.fi-ware.org/plugins/mediawiki/wiki/fiware/index.php/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_User_and_Programmers_Guide, but I'm getting a "connection reset by peer" error.
The log doesn't seem to say anything, even though I raised the trace level with the -t 0-255 option.
Additional info:
$ contextBroker --version
0.14.0
$ ps aux | grep context
/usr/bin/contextBroker -port 1026 -logDir /var/log/contextBroker -pidpath /var/log/contextBroker/contextBroker.pid -dbhost localhost -db orion -t 0-255
The issue was fixed by updating the Context Broker to a newer version, in my case 0.14.1, which can be found here: http://repositories.testbed.fi-ware.org/repo/rpm/x86_64/.
I can't say exactly what was wrong, since after updating everything worked fine.
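As a quick sanity check after updating, hitting the broker's version endpoint confirms it is accepting connections at all (1026 is the port from the ps output above):
curl http://localhost:1026/version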
I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (supposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (supposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is: what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the memcached shared area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obviously named one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to get a working installation, so for now I'll see if I can get the above working first.
I discovered that if I run straight from the interactive Python console I get a bit more output when I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try to use telnet, per the example, to connect to the port, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran both cases (iptables started and stopped), but it had no effect, so I am out of ideas. What do I need to do to allow the port, if that is the problem?
Or is there a memcached service that needs to be running to open up the port?
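One way to check that before changing any config would be to look for a listener on the port (netstat is part of net-tools on CentOS 6; the -p flag needs root):
netstat -tlnp | grep 11211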
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently a listen option. Do this for both steps 2 and 3 (see the snippet after step 3 for what the file ends up looking like). I'm not certain which file is actually used at runtime.
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
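For reference, after the edit my /etc/sysconfig/memcached looked roughly like this (the other values are the CentOS defaults; only OPTIONS changed):
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"    # listen on localhost only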
4) Try some commands to see if the server starts now (a quick way to verify it's listening follows below):
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
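To verify it is actually listening, the text protocol can be exercised directly over telnet (stats and quit are standard memcached commands):
telnet 127.0.0.1 11211
stats
quit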
5) I tried opening http://127.0.0.1:11211 in a browser, but it never seemed to actually display anything, so I don't really know how valid that approach is; memcached speaks its own text protocol rather than HTTP, so a browser probably isn't a meaningful test anyway. I'm not running Apache or anything like that, so perhaps it's not relevant to my cause.
6) Now it should be ready to go. If you run the test shown with the following, it should work; at least it did for me. Doing help(memcache) will display a simple program: just paste it in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)