Is a flush required for iptables before adding new rules? - CentOS

I'm pretty new to networking and am trying to do some simple configuration of a server for LAN access (SSH & HTTP) using iptables. I'm using CentOS 7, if that matters.
I've been working from tutorials, and they seem to suggest flushing all the existing rules as the first step.
I'm working on a fresh CentOS install that already has a couple of terminal windows' worth of rules. I don't know enough to restore those rules if I kill them, and I don't know what they do, so I'm afraid that if I kill them, networking issues I don't understand will start happening, or I'll open my server to security risks.
These tutorials don't explain why flushing the current rules is done.
Am I OK without flushing, as long as there's no other rule in place that conflicts with the ones I add at the end?
If I do flush, will everything be restored at restart as long as I don't use iptables' save?

Flushing the current rules is not required, but sometimes it's better to start with a clean slate. Even if you don't want to break the current configuration, it can prove more beneficial to rebuild it entirely from scratch: not just for the simplicity and efficiency of the resulting configuration, but also for your own mental model while trying to come up with the correct rules.
If you choose to keep the current configuration and build upon it, bear in mind that the order of the rules matters. The -I argument can be used to insert new rules at a specific position in a specified chain, as described in the iptables man page:
-I, --insert chain [rulenum] rule-specification
Insert one or more rules in the selected chain as the given rule number. So, if the rule number is 1, the rule or rules are inserted at the head of the chain. This is also the default if no rule number is specified.
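For example, here is a minimal sketch of LAN-only rules for SSH and HTTP; the 192.168.1.0/24 subnet is an assumption you would replace with your own:
iptables -I INPUT 1 -p tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 80 -j ACCEPT
The first rule is inserted at the head of the INPUT chain (-I with rule number 1); the second is appended at the end (-A).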
Before modifying anything, it is advisable to save the current configuration to a file:
iptables-save > <filename>
This file can be used later on to restore the original configuration:
iptables-restore < <filename>
Rules created with the iptables command are stored only in memory; if the system is restarted without explicitly saving them, the current rule set is lost. On CentOS 7, saving is done with:
service iptables save
The details of this command are documented as follows:
This executes the iptables init script, which runs the /sbin/iptables-save program and writes the current iptables configuration to /etc/sysconfig/iptables. The existing /etc/sysconfig/iptables file is saved as /etc/sysconfig/iptables.save.
The next time the system boots, the iptables init script reapplies the rules saved in /etc/sysconfig/iptables by using the /sbin/iptables-restore command.
Note that on CentOS 7, firewalld was introduced to manage iptables. If you prefer the classic iptables setup, it can be restored.
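One common way to do that, assuming the iptables-services package is available in your repositories:
systemctl stop firewalld
systemctl disable firewalld
yum install iptables-services
systemctl enable iptables
systemctl start iptables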

Related

MySQL Workbench: multiple windows

MySQL Workbench is a fantastic tool. However, I'm having a hard time figuring out how to create multiple windows. For example, in Sequel Pro (or TablePlus, or really any other SQL client) I can have multiple windows.
Yes, I know there are 'tabs', but those aren't quite the same thing. Is there a way to have multiple windows using MySQL Workbench?
It seems, from a few other threads, that this would need to be done manually via:
$ open -n -a MySQLWorkbench.app
MySQL Workbench was originally designed to be a single-instance app. On Windows this has been extended to allow multiple instances (there's a setting in the preferences), and you found a way to do this on macOS. However, this bears some risk, because all instances share the same config and cache files and can write to them simultaneously, which is prone to file corruption. Also, any changes made to the configuration or connections end up in the same file, so the last change may override changes made earlier in another instance.

Failed to start LSB: Bring Up/Down Networking

I am new to CentOS 7 and am configuring a static IP on it, so I have edited the file /etc/sysconfig/network-scripts/ifcfg-eth0 as follows:
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.4.196
NETMASK=255.255.255.0
GATEWAY=192.168.88.254
DNS1=8.8.8.8
USERCTL=no
But when I issue the command
systemctl restart network
I get the error
Failed to start LSB: Bring up/down networking
and ip route show gives me no output.
I have tried the suggested fix of stopping NetworkManager, but the error remains.
I am able to configure DHCP and get a dynamic IP address, but not a static one.
What are possible solutions?
It's because of an interface naming issue.
The solution that worked for me was:
Check which interface is actually available, then copy the old config file to one named after it:
cp ifcfg-eno16780032 ifcfg-ens192
Edit it (vi ifcfg-ens192) and change the NAME and DEVICE fields to ens192, then:
systemctl disable NetworkManager
systemctl status NetworkManager (should now report inactive)
systemctl stop network
systemctl start network
After that, check ip a: you should get the details of the IP and be able to ping it.
You should change BOOTPROTO to static and move your DNS config to your /etc/resolv.conf file, for example:
TYPE=Ethernet
BOOTPROTO=static
PHYSDEV=eth0
ONBOOT=yes
IPADDR=192.168.4.196
NETMASK=255.255.255.0
GATEWAY=192.168.88.254
USERCTL=no
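/etc/resolv.conf could then carry the DNS server that was in the original config, e.g.:
nameserver 8.8.8.8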
When facing this issue, which derailed proper autossh functionality on my roaming laptop, I decided to rip apart some of my MageiaOS init code to understand the root cause. I did not have NetworkManager, so I knew for sure it was not the obstacle.
The issue I found could be described as a kind of live-lock between the SysV and systemd ways of managing the network service. Potentially, many conditions could trigger it (NetworkManager is one example); in my case it was misconfigured vboxnet interfaces from VirtualBox.
There are two critical blockers, one on each side of the SysV/systemd balance, that can start triggering each other in a loop. On the SysV side, the init.d/network script eventually calls "ifup $device boot", which, in response to the 'boot' parameter, starts the ifplugd daemon for pluggable interfaces. The problem with this daemon is that despite the '-I' switch (used to ignore errors) it still fails with exit code 4 upon detecting itself already in memory. The only proper way to shut down this daemon from the network script is to issue an "ifdown $device boot" command, which is supposed to be executed upon stopping the network service via the 'service' or 'systemctl' commands.
The interesting part of this question: why is ifplugd already in memory before the network service starts? Well, in my case the WiFi interface was brought up before the misconfigured vbox interface, but the latter caused the entire init script to fail. So the network was started on boot, but the service status was recorded as failed. Then what prevents us from simply stopping the network service and consequently killing ifplugd via the ifdown/boot command? The answer: systemd, in its ingenious way of handling the ExecStop directive in the unit file (which is auto-generated on the fly for the network service). Basically, "systemctl stop" just ignores the ExecStop directive if it believes the service is not started. And of course it is not, because it previously failed, stumbling on the unexpected ifplugd instance! So there is no way to stop the service, hence no way to get rid of ifplugd, hence no way of (re)starting the service, and so on.
Conclusion: there's no single recipe for this sort of trouble, because the compatibility balance between the network script and the systemd approach is very fragile, and many unexpected factors can start interfering. To troubleshoot this scenario, several status checks might be useful:
network service: systemctl status network
ifplugd service: ps ax|grep ifplugd
network link status: ifconfig / iwconfig
autogenerated unit: cat /var/run/systemd/generator.late/network.service
other places running ifup independently: grep -rs ifup /etc
and of course, "bash -x" and debugging "echo Bump" statements. :-)
The long-term solution is fixing ifplugd to honour the '-I' switch in this scenario. The mid-term solution is fixing /etc/sysconfig/network-scripts/ifup-eth to ignore ifplugd's return code. The short-term solution seems to be the trickiest: simply removing every config factor that might trigger this live-lock. But it is the only one that tolerates system auto-updates...
Append "blacklist ideapad_laptop" to a blacklist file under /etc/modprobe.d/, for example:
echo "blacklist ideapad_laptop" | tee -a /etc/modprobe.d/blacklist.conf
Then reboot. This should unblock your Wi-Fi.
I came here looking for an answer to my case, so I'll share; maybe it will help someone else. I'd like to thank the cPanel staff for pointing this out to me.
As for the reported issues, we have seen that CloudLinux servers running a kernel version lower than 3.10.0-862 that update to CloudLinux 7.7 will get an update to the 'iproute' package.
The 'iproute' package needs either a newer kernel, or to be excluded from updates on the server initially.
This information has been reported. You can find some more information about it here:
https://www.cloudlinux.com/cloudlinux-os-blog/entry/cloudlinux-os-7-7-released
In my case,
journalctl -xe
showed there was a duplicate interface configuration: eth0 & eno1 were using the same UUID:
Nov 06 09:35:41 4200-150-137 /etc/sysconfig/network-scripts/ifup-eth[27549]: Device eno1 does not seem to be present, del
Nov 06 09:35:41 4200-150-137 network[27401]: [FAILED]
Nov 06 09:35:41 4200-150-137 network[27401]: Bringing up interface eth0: [ OK ]
Removing the unused interface's ifcfg file solved the problem for me.
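For example (a sketch; which file is unused depends on what journalctl reports):
rm /etc/sysconfig/network-scripts/ifcfg-eno1
systemctl restart network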
After several attempts, including restarting NetworkManager and commenting out the UUID on the interface concerned (mine being ifcfg-eth0), it finally boiled down to a missing file which apparently needs to exist, despite the fact that its values can be included directly in the interface file:
vi /etc/sysconfig/network
then add the correct values and save:
NETWORKING=yes
HOSTNAME=xxx.xxx.xxx
GATEWAY=x.x.x.x
I hope this helps someone. It was tested on CentOS 7 as a guest VM on Hyper-V on Windows 10.
I have a VPS with OVH and have been struggling with a similar issue.
I just want to share my solution, as it may help some people.
Booting used to be delayed by 5 minutes because dhclient was checking IPv6 during the ifup call.
Set this to no:
DHCPV6C=no
inside /etc/sysconfig/network-scripts/ifcfg-eth0
I know this is an old discussion, but I had this problem on my bare-metal server from OVH after the NetworkManager service was disabled by installing cPanel.
The issue was solved by adding the parameters below to ifcfg-eno1 (or, in your case, whichever interface is active):
LINKDELAY=31
NM_CONTROLLED=no
ONBOOT=yes
DHCPV6C=no
Also make sure the network service is enabled and running, as shown below.
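For example:
systemctl enable network
systemctl restart network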

Change site configuration without restarting G-WAN

I'm looking at hosting a number of small, static websites and have been looking at a few alternatives including G-WAN. At the moment I'm just trying to get a feel for how well each server suits my needs before picking one.
G-WAN seems to do exactly what I want, though I'm running into problems with updating the configuration (by adding new folders) after the server's started. I can't find anything in the documentation or online about this, so I don't know if I'm doing anything dumb, running an unsupported configuration, or whether it's a feature that doesn't exist in G-WAN.
Here's my setup:
G-WAN 3.3.28 64-bit on Ubuntu 12.04.1 LTS.
I have what I think is the required minimal folder structure:
0.0.0.0_80
  #0.0.0.0
    www
  $site.com
    www
  $othersite.com
    www
I start up gwan via (I'm still messing around):
sudo ./gwan -d
Everything works brilliantly. Then I add $thirdsite.com/, $thirdsite.com/www/, and $thirdsite.com/www/index.html; but when I try to visit thirdsite.com it gives me the root host (i.e. it doesn't seem to pick up the changes).
To reload the modified configuration, I have to either do:
sudo ./gwan -k; sudo ./gwan -d
or kill the non-angel process (kill -s 15) to restart the child process.
Can G-WAN reload the host definitions another way? If so, is it something that works out of the box, or is there a command that can cycle the server without dropping requests made to other hosts? (Or is it safe to kill -s 15 the non-angel process, and if so, is there a reliable way to identify that process?) Thanks in advance!
G-WAN loads the host definitions at startup and does not re-check them over time to reload them dynamically.
To force a reload, you have to stop the child process (when in daemon mode); v3.9+ keeps the old child alive long enough to process any pending requests while the new child accepts new connections.
Since stopping the child can be done from the maintenance script, from a handler, or from a servlet by simply calling exit(0), there is no need for a dedicated command.
Note that when you use kill, you can pick the pid file from the gwan directory:
the parent process's pid file starts with a capital letter: Gwan_xxxx.pid
the child process's pid file starts with a lowercase letter: gwan_xxxx.pid
That will make your life easier.
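For example, a sketch of cycling the child via its pid file (the path to the gwan directory is an assumption; in daemon mode the angel process respawns the child):
kill -15 "$(cat /path/to/gwan/gwan_*.pid)"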

Apache2 reload config from inside the CGI

I am working on a simple Perl app that copies another Perl app and builds all the required Apache config files.
The thing I can't seem to figure out is how to reload the Apache config on the fly. I know I could do a system call and reload Apache there, but that would mean giving root access to this app, and that is a little scary.
Is there a way to ask apache to reload its config files from within the CGI container?
-------------------------Additional info------------------------------
I have done some more research, and the problem is that Apache must be run with elevated privileges to bind to port 80. So one solution would be to set Apache to run on another port and forward that port to 80 via iptables. (This may be a last resort, but it's a very messy solution.)
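For what it's worth, a sketch of that forwarding (8080 is an assumed alternate port that Apache could bind without root):
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080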
Here is what gets me: Apache should be able to maintain its current port bindings and recheck its config files; all I am doing is adding another ScriptAlias.
Is there any way to add a new ScriptAlias without a reload?
You also have these options to reload the config:
/etc/init.d/httpd reload
or
apachectl -k graceful
But unfortunately, those need root as well. A graceful restart differs from a normal restart in that currently open connections are not aborted. A side effect is that old log files will not be closed immediately; this means that if it is used in a log rotation script, a substantial delay may be necessary to ensure that the old log files are closed before processing them.
Also, if running Apache with daemontools, you can do this via:
svc -h /service/apache
Sorry to ask a question and then not give someone else the opportunity to answer, but I figured out a solution and hope it may help someone else.
What I had to do was leave the config alone; it is not possible to reload it in the manner I required without root privileges or some fancy port forwarding (which would make this application less portable than I would like).
So the only thing that Apache appears to read dynamically is the file system.
What I have done is use mod_rewrite to redirect the script requests and simply put them in /var/www/appname/copyname/cgi-bin/, as sketched below.
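A minimal sketch of that kind of rewrite, assuming per-copy apps live under /var/www/appname/ (the URL pattern and paths are placeholders, not the poster's exact rules):
RewriteEngine On
RewriteRule ^/apps/([^/]+)/cgi-bin/(.*)$ /var/www/appname/$1/cgi-bin/$2 [L]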

Need an opinion on a method for pulling data from a file with Perl

I am having a conflict of ideas with a script I am working on. The conflict is that I have to read a bunch of lines from VMware files. As of now I just use SSH to probe every file for each virtual machine while the files stay on the server. The reason I now think this is a problem is that I have 10 virtual machines and about 4 files each that I probe for file paths and such. This opens a new SSH channel every time I refer to the SSH object I have created using Net::OpenSSH, so when all is said and done I have probably opened about 16-20 SSH channels. Would it just be easier, in a lot of ways, to SCP the files over to the machine that needs to process them and then do most of the work on the local side? The script I am making is a backup script for ESXi, and it will end up storing the files anyway, including the ones that I need to read from.
Any opinion would be most helpful.
If the VMs do the work locally, it's probably better in the long run.
In the short term, roughly the same amount of resources will be used, but if you were to migrate these instances to other hardware, then of course you'd see gains from distributing the processing.
Also, from a maintenance perspective, it's probably more convenient for each VM to host its local process, since I'd imagine that if you need to tweak it for a specific box, it would make more sense to keep it there.
Aside from the scalability benefits, there aren't really any other pros or cons.