How to completely remove an installed CentOS RPM package?

Folks, how do I make sure all files of an RPM (CentOS) were removed? The problem is that I installed a piece of software called ShinyProxy (https://www.shinyproxy.io/), and after 3 days running it as a test server we received a "NetScan Detected" complaint from Germany. Now we want to clean everything up by removing the RPM, but it seems not that easy, as something else is left on the system that continues to send and receive lots of packets (40 kps). My apologies to the ShinyProxy folks if this is not caused by their software; so far this is just the last system under investigation.
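For reference, a minimal sketch of how the package and any leftovers could be checked (assuming the package name is shinyproxy; adjust to whatever rpm -qa reports):
# find the exact package name as installed
rpm -qa | grep -i shiny
# list the files the package owns, then remove it
rpm -ql shinyproxy
rpm -e shinyproxy
# after removal, check for leftover files, processes and listening sockets
find /opt /etc /var -iname '*shiny*' 2>/dev/null
ps aux | grep -i shiny
ss -tunap | grep -vE '127\.0\.0\.1|::1'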

Your Docker API is bound to your public IP and is therefore directly reachable from external networks. You should not do this, as it allows anybody to run arbitrary Docker containers and even commands on your Docker host.
You should secure your Docker install:
- bind it to the 127.0.0.1 (lo) interface and adapt the ShinyProxy yml file accordingly (see the sketch below)
- set up TLS mutual auth (client certificates) on the Docker API (it is supported by ShinyProxy)
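As an illustration only (not taken from the answer above), binding the daemon to loopback and pointing ShinyProxy at it might look roughly like this; port 2375 and the exact keys are assumptions based on common setups, and on systemd installs the -H flag in the unit file may also need adjusting:
# /etc/docker/daemon.json -- listen on loopback only
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
}
# shinyproxy application.yml -- point it at the loopback API
proxy:
  docker:
    url: http://127.0.0.1:2375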

Related

WhatsApp Business API production setup not working

I am trying to set up the production environment of the WhatsApp Business API as described at https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
I have done everything mentioned there, and my Docker containers are also running on port 9090, as can be seen in the image.
Still, I can't access it. Whenever I try to call https://localhost:9090, I get the error "This site can't be reached". The WhatsApp Business API does not have good documentation or tutorials so far, so this site is my last resort.
I had a similar problem, which could be your case: I saw the Docker containers up and OK, but nothing was working. After a day of searching I found where it happened. My problem was that I had installed MySQL MANUALLY (not as a Docker container) on the same instance where Docker is running, and in db.env I just used 127.0.0.1. This was passed literally to the Docker containers; looking at the wait_on_mysql.sh script, the WhatsApp Docker containers wait until the MySQL IP is reachable before actually doing anything, printing "MySQL is not up yet - sleeping" each second, so of course they never found any connectivity.
Since my installation is for development, and I am already using that database for other things, my solution was to use 172.17.0.1 (the Docker gateway of the containers) as the IP instead, and then add two sets of iptables rules to the host to redirect traffic from the Docker containers to the IP MySQL is bound to on that port (3306, the default in my case). After that, everything works well. I think there are better solutions, but I didn't want to go further with it; you should evaluate whether it applies to your case.
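A rough sketch of what such a redirect could look like (the exact rules were not posted; <mysql_bind_ip> stands in for whatever non-loopback address mysqld actually listens on — if it only listens on 127.0.0.1, extra route_localnet settings would be needed):
# send container traffic aimed at the docker gateway to the host's MySQL
iptables -t nat -A PREROUTING -i docker0 -p tcp -d 172.17.0.1 --dport 3306 -j DNAT --to-destination <mysql_bind_ip>:3306
# allow the forwarded traffic through
iptables -A FORWARD -p tcp -d <mysql_bind_ip> --dport 3306 -j ACCEPT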
Check the command:
docker-compose logs > debug_output.txt
That gives you insight into what's happening. Hope it helps someone.
I think your setup is already complete. You just need to start the registration process and begin sending messages. The containers are up and running, but calling https://localhost:9090 won't return anything useful, as that is not a specified API endpoint.
Since you're using the prod single instance, the documentation can be found here and seems pretty straightforward: https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
You seem to have completed everything up to step 7. The next step would be to perform a health check to make sure the deployment is healthy. The API endpoint for that is https://localhost:9090/v1/health (see https://developers.facebook.com/docs/whatsapp/api/health).
Has your DB also been set up? I cannot see it in the Docker screenshot.
Also, you have to accept the certificate, as it is not issued by a public CA.
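For illustration, a health check from the host might look like this (-k skips verification of the self-signed certificate; add authentication headers if your version requires them):
curl -k https://localhost:9090/v1/health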

Apache CloudStack: No templates showing when adding instance

I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine. The management server is running without problems. I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, no templates are shown in the second stage, as you can see in the screenshot.
However, I am able to see two templates under the Templates link in the web UI.
But when I select the template and navigate to the Zone tab, I see "Timeout waiting for response from storage host" and the Ready field shows "No".
When I check the management server logs, it seems there is an error when CloudStack tries to mount the secondary storage for use. The segment below from the cloudstack-management.log file shows this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":
{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException:
GetRootDir for nfs://172.16.10.2/export/secondary failed due to
com.cloud.utils.exception.CloudRuntimeException: Unable to mount
172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6
due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
","wait":0}}] }
Can anyone please guide me how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed further.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP by making the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would cut off my internet access. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static with the same configuration settings as above and started the CloudStack management server. Everything worked fine until I bumped into this template issue. Please help me figure out what might have gone wrong.
I know I'm late, but for anyone trying this out in the future, here it goes:
I hope you successfully added a host as described in the Quick Install Guide before you changed your IP to static, as that step auto-configures VLANs for the different traffic types and creates two bridges, generally named 'cloud' or 'cloudbr'. CloudStack uses the Secondary Storage System VM (SSVM) for all storage-related operations in each Zone and Cluster. What seems to be the problem is that the SSVM is not able to communicate with the management server on port 8250. If that is not the case, try manually mounting the NFS server's mount points from the SSVM shell. You can SSH into the SSVM using:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<private or link-local IP address of SSVM>
I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh after SSHing into the SSVM (assuming it is running and has its private, public and link-local IP addresses). If that doesn't help much, take a look at the secondary storage troubleshooting docs for CloudStack.
If anyone runs into similar issues in the future, I would further recommend the following:
- Check that the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you can open a console session to it from the browser. If that works, run the ssvm-check.sh script mentioned above, which systematically checks every point of operation the SSVM depends on. Even if a console session cannot be opened, you can still SSH in using the link-local IP address of the SSVM (shown in the SSVM's details) and then execute the script.
- If the script says it cannot communicate with the management server on port 8250, check the iptables rules on the management server and make sure traffic on port 8250 is allowed. A quick way to check is nc -v <mngmnt-server-ip> 8250. A simple search will show you how to open port 8250 in your iptables rules if it is not already open.
- You mentioned you use CentOS 6.8, which probably ships an older NFS version, so run exportfs -a on your NFS server to make sure all NFS shares are properly exported without errors.
- Wait for the built-in "CentOS 5.5 no GUI" KVM template to finish downloading and show a Ready status of 'Yes' before you start importing your own templates and ISOs.
- Finally, if ssvm-check.sh says everything is good and the download still does not start, run service cloud restart inside the SSVM and check that the service actually got a PID with service cloud status; older system VM templates sometimes need a manual service cloud start even after the restart command. Restarting the cloud service in the SSVM triggers the download of all remaining templates and ISOs.
Side note: the system VMs use a Debian kernel, in case you want to do more troubleshooting. Hope this helps.
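A hedged sketch of the checks described above, run from the SSVM shell and the management server (commands are standard; the iptables lines assume the stock CentOS 6 iptables service):
# from the SSVM shell: run the built-in health check
/usr/local/cloud/systemvm/ssvm-check.sh
# from the SSVM shell: can we reach the management server on 8250?
nc -v <mngmnt-server-ip> 8250
# from the SSVM shell: try the mount CloudStack itself attempts
mount -t nfs 172.16.10.2:/export/secondary /mnt
# on the management server: open 8250 if the connection is refused/filtered
iptables -I INPUT -p tcp -m tcp --dport 8250 -j ACCEPT
service iptables save
# on the NFS server: re-export the shares
exportfs -a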

NFS v4 hosted on FreeBSD, both client and server: mounts OK, but reads and writes on the filesystem fail with Input/output error

I have successfully mounted and used NFS version 4 with a Solaris server and a FreeBSD client.
The problem is when both the server and the client are FreeBSD, at version 4. Version 3 works excellently.
I have used the FreeBSD NFS server since FreeBSD version 4.5 (back then with IBM AIX clients).
The problem:
It mounts OK, but no principals appear in the Kerberos cache, and when trying to read or write on the mounted filesystem I get the error: Input/output error.
The nfs/server-fqdn@REALM and nfs/client-fqdn@REALM principals are created on the Kerberos server and stored properly in keytab files on both sides.
I issue TGT tickets from the KDC using the above principals on both sides, into root's Kerberos cache.
I start services properly:
file /etc/rc.conf
rpcbind_enable="YES"
gssd_enable="YES"
rpc_statd_enable="YES"
rpc_lockd_enable="YES"
mountd_enable="YES"
nfsuserd_enable="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
Then I start the services:
on the client: rpcbind, gssd, nfsuserd;
on the server: all of the above, with the following exports file:
V4: /marble/nfs -sec=krb5:krb5i:krb5p -network 10.20.30.0 -mask 255.255.255.0
I mount:
# mount_nfs -o nfsv4 servername:/ /my/mounted/nfs
#
# mkdir /my/mounted/nfs/e
# mkdir: /my/mounted/nfs/e: Input/output error
#
Same result even for an ls command.
klist does not show any new principals in root's cache, or in any other cache.
I love the amazing performance of version 3, but I need the local file locking feature of NFSv4.
The second reason is security: I need Kerberized RPC calls (-sec=krb5p).
If any of you have achieved this with a FreeBSD server for NFS version 4, please give feedback on this question; I'll be glad if you do.
Comments are not good for code examples, so here is the setup of a FreeBSD client and FreeBSD server that works for me. I don't use Kerberos, but if you get it working with this minimal configuration, you can (I believe) add Kerberos afterwards.
Server rc.conf:
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 4"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
mountd_flags="-r"
Server /etc/exports:
/parent/path1 -mapall=1001:1001 192.168.2.200
/parent/path2 -mapall=1001:1001 192.168.2.200
... (more shares)
V4: /parent/ -sec=sys 192.168.2.200
Client rc.conf:
nfs_client_enable="YES"
nfs_client_flags="-n 4"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
Client fstab:
192.168.2.100:/path1/ /mnt/path1/ nfs rw,bg,late,failok,nfsv4 0 0
192.168.2.100:/path2/ /mnt/path2/ nfs rw,bg,late,failok,nfsv4 0 0
... (more shares)
As you can see, the client only mounts what comes after the /parent/ path specified in the V4: line on the server. 192.168.2.100 is the server IP and 192.168.2.200 is the client IP. This setup only allows that one client to connect to the server.
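Assuming the paths above, a one-off manual mount from the client (useful as a sanity check before relying on fstab) would look something like this:
# test mount of the first share; NFSv4 paths are relative to the V4: root
mount -t nfs -o nfsv4 192.168.2.100:/path1 /mnt/path1
# verify it is readable and writable, then unmount
ls /mnt/path1
umount /mnt/path1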
I hope I haven't missed anything. By the way, please raise questions like this on Super User or Server Fault rather than Stack Overflow; I'm surprised this question hasn't been closed yet because of that ;)

Proxy setting in gsutil tool

I use the gsutil tool to download archives from Google Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I try to run that command from behind the corporate proxy, I receive an error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I tried several times to set the proxy settings in the .boto file, but all to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
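For reference, the relevant .boto section looks roughly like this (values are placeholders; proxy_user and proxy_pass are only needed for authenticating proxies):
# in the .boto file (in your home directory)
[Boto]
proxy = proxy.example.com
proxy_port = 8080
proxy_user = username
proxy_pass = password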
A change was merged on GitHub yesterday that fixes some of the proxy support. Please try it out, or, specifically, overwrite your current copy with this file:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem, with the proxy settings being ignored, under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls" it uses the proxy settings specified in .boto for the first POST, and then switches back to attempting to route directly to Google instead of via my proxy server.
Then, if I Ctrl-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, I think it works for the initial request again, which suggests some form of caching is taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always tries to connect to 169.254.169.254 on port 80 first, regardless of proxy settings. A grep shows that this address is hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable, but it appears that there is code that unsets it.

Mapping CentOS NFS to another CentOS Server

CentOS 5.5
I have a web application running on a server, and it needs access to the filesystem of another CentOS server on the same network (via a private IP). After a bunch of googling, it looks like mounting the drive via NFS is a good way to go, but I'm not finding any good step-by-step instructions on how to do it. I've read the man page for the mount command and some docs on the CentOS wiki as well, but I feel like I'm missing something. Here is what I'm trying:
mount -t nfs my.ip.address:/somePath /somePath/mount
I keep getting a 'no route to host' error, but I can ping the server just fine. I'm guessing that I'm missing a port I need to open or something, but again, I can't find information that makes sense to a non-sysadmin like me.
Thanks for any help.
I ran across this guide, followed it step by step, and now I'm up and running:
http://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/
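For anyone who wants the gist without the external link, a condensed sketch of the server and client steps (IPs and paths are placeholders; CentOS 5 service names assumed):
# on the server exporting /somePath
echo '/somePath 192.168.1.0/24(rw,sync)' >> /etc/exports
service portmap start
service nfs start
exportfs -a
# the 'no route to host' error is usually the firewall: open the NFS-related ports in iptables
# (or temporarily stop it with 'service iptables stop' while testing)
# on the client
service portmap start
mount -t nfs my.ip.address:/somePath /somePath/mount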