I have two questions. My immediate problem is that the Wazuh agent never connects to the Wazuh manager.
A. That makes me wonder: while installing the Wazuh manager, where do we provide the Wazuh manager IP?
B. I registered Windows and RHEL machines as agents, but none of them are able to connect; all agents show "Never connected" status.
From Windows, this is the error. I am using port 1515 and TCP:
ERROR: (1216): Unable to connect to 'xx.xxx.105.75': 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.'
I even tried changing 1515 to 1519 from the Kibana Wazuh app, and added my agent IP to the whitelist; not sure if that matters.
Answering your questions according to the current Wazuh version, v3.13.1, as of today:
[A] While installing the Wazuh manager, where do we provide the Wazuh manager IP?
During the installation of the manager you don't have to configure any IP unless you are setting up cluster mode. The manager IP needs to be configured on the agents.
After installing the agent, you have to:
1. Add the manager's IP address in the configuration file /var/ossec/etc/ossec.conf (see the snippet after these steps):
<address>MANAGER_IP</address>
2. Register the agent with the manager. The simplest method is:
/var/ossec/bin/agent-auth -m MANAGER_IP
3. Restart the Wazuh agent:
systemctl restart wazuh-agent
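For reference, the relevant block of /var/ossec/etc/ossec.conf looks roughly like this (a minimal sketch for Wazuh 3.x; 1514/udp are the defaults for the communication port and protocol, so adjust them if you changed yours):

    <ossec_config>
      <client>
        <server>
          <address>MANAGER_IP</address>
          <port>1514</port>
          <protocol>udp</protocol>
        </server>
      </client>
    </ossec_config>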
Once these steps are applied, you should have your agent connected and reporting to the manager.
[B] I registered Windows and RHEL machines as agents, but none of them are able to connect; all agents show "Never connected" status.
After performing the steps above, the agents should be connected to the manager. If not, follow this troubleshooting process:
1. Check that the agent has successfully registered with the manager: run /var/ossec/bin/agent_control -l on the manager and see whether the agent is listed.
2. Check that the agents can reach the manager. By default, Wazuh uses port 1515/TCP for registration and 1514/UDP for communication; verify that these ports are open (firewall rules, etc.). A quick connectivity check follows this list.
3. To avoid possible problems, check that your manager's version is greater than or equal to the agent's version.
4. Check whether any errors appear in the /var/ossec/logs/ossec.log file.
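As a quick connectivity check for step 2 (a minimal sketch; replace MANAGER_IP, and keep in mind that UDP probes are unreliable because UDP is connectionless):

    # From a Linux agent: registration port (TCP) and communication port (UDP)
    nc -zv MANAGER_IP 1515
    nc -zuv MANAGER_IP 1514
    # From a Windows agent (PowerShell; tests TCP only)
    Test-NetConnection MANAGER_IP -Port 1515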
I hope this information is helpful to you.
Best regards.
A. You will have to edit the ossec.conf file and make sure the manager IP address is in the right place.
B. After you complete section A, and if ports 1514/1515 are open, you will see your agent on the manager. Do not forget to register your agent with the manager.
I think there are two steps:
1. Edit ossec.conf on the agent and change 'MANAGER_IP' to the real manager IP. This is very important and easy to forget.
2. Restart the agent.
SUMMARY
I have installed Zabbix on an OpenShift cluster. I am trying to monitor a host (VM) outside the cluster, but the Zabbix server is unable to connect to it. In the /etc/zabbix/zabbix_agentd.conf file I have set the DNS name of the server, zabbix-server, but it looks like the server is trying to connect from a different public IP. I am not sure what this IP is.
OS / ENVIRONMENT / Used docker-compose files
I applied the kubernetes.yaml file present in this repo - https://github.com/zabbix/zabbix-docker/blob/6.2/kubernetes.yaml - on an OpenShift cluster.
CONFIGURATION
In the /etc/zabbix/zabbix_agentd.conf file: Server=zabbix-server.
STEPS TO REPRODUCE
Apply the kubernetes.yaml file on an OpenShift cluster and try to monitor any external VM.
EXPECTED RESULTS
The Zabbix server should be able to connect to the VM.
ACTUAL RESULTS
Zabbix server logs.
Defaulted container "zabbix-server" out of: zabbix-server, zabbix-snmptraps
** Updating '/etc/zabbix/zabbix_server.conf' parameter "DBHost": 'mysql-server'...added
287:20230120:060843.131 Zabbix agent item "system.cpu.load[all,avg5]" on host "Host-C" failed: first network error, wait for 15 seconds
289:20230120:060858.592 Zabbix agent item "system.cpu.num" on host "Host-C" failed: another network error, wait for 15 seconds
289:20230120:060913.843 Zabbix agent item "system.sw.arch" on host "Host-C" failed: another network error, wait for 15 seconds
289:20230120:060929.095 temporarily disabling Zabbix agent checks on host "Host-C": interface unavailable
Logs from the agent installed on the vm.
350446:20230122:103232.230 failed to accept an incoming connection: connection from "9.x.x.219" rejected, allowed hosts: "zabbix-server"
350444:20230122:103332.525 failed to accept an incoming connection: connection from "9.x.x.219" rejected, allowed hosts: "zabbix-server"
350445:20230122:103432.819 failed to accept an incoming connection: connection from "9.x.x.210" rejected, allowed hosts: "zabbix-server"
350446:20230122:103533.114 failed to accept an incoming connection: connection from "9.x.x.217" rejected, allowed hosts: "zabbix-server"
If I add this IP in /etc/zabbix/zabbix_agentd.conf, it works. But what IP is this? Is it a service, or some node/pod IP? It keeps changing, and I cannot update the conf file every time; I need something more stable.
Kindly help me out with this issue.
I don't know Zabbix, so I have to make some educated guesses about how both the agent and the server work.
But, to summarize: unlike something like Docker Compose, where you run the Zabbix server on a known host, in OpenShift/Kubernetes you are deploying into a cluster of machines with their own networking. In other words, the whole point of OpenShift is that OpenShift controls where the application's pod gets deployed and will relocate/restart that pod as needed, with a different IP every time. (And the DNS name is meaningless since the two systems aren't sharing DNS anyway.) Most likely the IPs you are seeing are the pods' randomly assigned addresses.
So, what are you to do when you have a situation like yours, where an external application requires a predictable IP? Option 1 is to remove that requirement: using something like a certificate is more secure and more reliable than depending on an IP anyway. Another option is to use an egress IP, a feature of OpenShift where you essentially use a proxy to present a consistent IP to external systems.
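Two concrete directions, both sketches rather than tested configs. First, recent Zabbix agents accept CIDR ranges in Server=, so you can allow the whole subnet the connections actually come from (take the range from the rejected addresses in the agent log) instead of a single address. Second, on clusters using OVN-Kubernetes an egress IP is declared with an EgressIP object; the subnet, IP, and namespace label below are assumptions for your environment:

    # /etc/zabbix/zabbix_agentd.conf on the monitored VM
    # replace with the subnet seen in the "rejected" log lines
    Server=9.0.0.0/16

    # egress-ip.yaml, assuming OVN-Kubernetes and a namespace labeled app=zabbix
    apiVersion: k8s.ovn.org/v1
    kind: EgressIP
    metadata:
      name: zabbix-egress
    spec:
      egressIPs:
        - 9.0.0.100        # a free IP reserved for egress; then set Server=9.0.0.100
      namespaceSelector:
        matchLabels:
          app: zabbix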
I am having a problem accessing a Node.js REST service deployed on an Ubuntu virtual machine. I am able to access the VM using PuTTY; however, I am not able to ping the reserved IP from the command line. I have put logging in the REST service so that it prints a log when it gets a hit, but the logs are not getting printed. I want to know whether any additional setting needs to be done to open a port on the virtual machine, or whether it is supposed to be open by default. If I need to open the port in order to access the service, where should I look?
Thanks & Regards
I want to know whether any additional setting needs to be done to open a port on the virtual machine, or whether it is supposed to be open by default.
No, Azure will not open other ports by default; you have to open them manually.
I am not able to ping the reserved IP from the command line
Is it a classic VM? If so, first check which port the REST service is listening on, then add an endpoint via the Azure portal.
For more information about creating an endpoint, please refer to this link.
If your VM is in the ARM model, you should instead add an inbound rule to the NSG, as sketched below.
For more information about NSGs, please refer to this link.
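For example, with the Azure CLI (a sketch; the resource names are hypothetical, and the port assumes your Node.js service listens on 3000):

    az network nsg rule create \
        --resource-group myResourceGroup \
        --nsg-name myNsg \
        --name allow-node-rest \
        --priority 100 \
        --direction Inbound --access Allow --protocol Tcp \
        --destination-port-ranges 3000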
I've recently upgraded an on-premises Service Fabric cluster to 5.1.156.9590 running on Windows Server 2012 R2. I removed the original cluster and created a new one. Unfortunately, my new cluster doesn't seem able to create the firewall rules for any ports specified in the service manifests. The only warning I see that seems related is this, from Service Fabric Hosting:
Did not enable firewallpolicy for current profile 1
I can't find any help regarding this message. I'm wondering whether something has changed in how ports are specified for a service, or whether there's something on the node machines that I haven't configured correctly.
Any pointers appreciated as I'm sure I didn't have to open them manually previously.
Unfortunately, I had the same problem and didn't succeed in reviving Service Fabric; only moving to Azure resolved the issue for me. You can try the following:
Completely remove SF, the SDK, related services, firewall rules, and the SFData directory on the C drive, then install everything again.
Check that the firewall is enabled (a quick rule check follows this list).
Check that the SF service is created and able to start.
If the service is not able to start, check the event logs; there is a dedicated folder for SF where you can find additional, detailed information.
If you have customized the address that listeners bind to in the SF cluster config, try changing it to something different and see whether SF deploys.
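As a quick check for the firewall-rule problem (a sketch; the display-name pattern is an assumption, since rule names vary across SF versions), you can list whatever rules the runtime did create from PowerShell:

    # List firewall rules whose display name mentions Service Fabric
    Get-NetFirewallRule -DisplayName "*Service Fabric*" |
        Format-Table DisplayName, Enabled, Direction, Action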
This is probably a very basic question for you, but I'm just getting into Consul, and for testing purposes I want to run multiple servers on my PC. For example, I run the first server with
consul agent -server -bootstrap-expect=1 -dc=dev -data-dir=/tmp/consul -ui-dir="c:/consul 0.5.2/dist"
and then I try to run the second server with
consul agent -server -data-dir=/tmp/consul2 -dc=dc2
but it returns
==> Error starting agent: Failed to start Consul server: Failed to start RPC layer: listen tcp 0.0.0.0:8300: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
What am I missing from my command?
You are launching two Consul servers using mostly default values. In this case the problem is that you are using the default ports.
When you read the error message you will notice that your second Consul server tries to bind to port 8300. But your first server is already using this port, causing the second server to fail at startup. (Note: Consul binds to a variety of ports, each with its own purpose and default setting; take a look at the documentation.)
As suggested by LenW, you can use Vagrant to set up your environment; you could follow the Consul tutorial.
If you do not want to use Vagrant or set up any virtual machines on your own, you can change the defaults of the second server, as sketched below.
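For example (a minimal sketch for Consul 0.5.x; the port numbers are arbitrary, they just have to be free), give the second server its own ports in a JSON config file and point the agent at it:

    {
      "ports": {
        "server": 8310,
        "serf_lan": 8311,
        "serf_wan": 8312,
        "rpc": 8410,
        "http": 8510,
        "dns": 8610
      }
    }

    consul agent -server -data-dir=/tmp/consul2 -dc=dc2 -config-file=server2.json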
If you are trying to simulate a production topology on your dev machine I would look at using Vagrant in combination with VirtualBox to simulate a couple of machines for testing.
How do I remotely pull configuration information from a running bind name server without logging in as root on the server where it is running?
I have searched a lot and read many materials about BIND9, but still found no answer.
I know there are commands to conduct zone transfers or update zone resource data, but I didn't find any way to pull configuration info from a name server.
In short: you cannot. There is no provision in the DNS protocol to send server configuration. So whatever technology you use, it will NOT be DNS. And since Bind9 is designed to serve DNS requests and send DNS replies only, Bind9 cannot be coerced to send its configuration the way you'd expect.
You have to install and configure some other piece of software to be able to access the configuration. SSH is one of the most widespread such technology used for managing server configurations.
You could use "rndc -s dns-server dumpdb".
In named's configuration you point dump-file to a shared folder which is accessible from the system that ran rndc.
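Remote rndc only works if the server's named.conf allows it; here is a minimal sketch of that side (the key name, network range, and paths are assumptions):

    key "rndc-key" {
        algorithm hmac-sha256;
        secret "BASE64-SECRET-HERE";   // generate one with rndc-confgen
    };
    controls {
        inet * port 953 allow { 192.168.1.0/24; } keys { "rndc-key"; };
    };
    options {
        dump-file "/var/named/data/cache_dump.db";   // where dumpdb writes
    };

The client running rndc -s dns-server then needs the same key in its rndc.conf.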