AWS VPN tunnel established and hosts can ping each other, but curl doesn't work - amazon-vpc

I'm trying to connect my local machine to an AWS VPC using a site-to-site VPN.
I used the IPsec protocol via Libreswan and succeeded in establishing the tunnel (the AWS console confirms the tunnel status is 'UP').
I was also able to ping in both directions, but curl commands fail: they hang for a while and then time out.
Do I need to do any other steps? Or is there any way to debug this issue?
Thanks!

It may be that ping and curl are permitted differently: ping uses ICMP, while curl uses TCP (standard ports 80 and 443). If so, you should make sure your route tables, network ACLs, and security groups allow TCP traffic on those ports in addition to ICMP.
AWS docs for route tables for VPC:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html
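A quick way to narrow this down is to test the TCP path separately from ICMP. A minimal sketch, assuming 10.0.0.5 is a hypothetical private IP on the far side of the tunnel (substitute your own instance's address):
nc -vz 10.0.0.5 80                              # test raw TCP reachability on port 80
nc -vz 10.0.0.5 443                             # and on port 443
curl -v --connect-timeout 5 http://10.0.0.5/    # verbose output shows where the connection stalls
If nc times out while ping succeeds, TCP is being dropped somewhere along the path (typically a security group, network ACL, or host firewall) rather than by the tunnel itself.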

Related

Connecting wget to a VPN

I'm trying to download some files using wget, but the problem is that the files will only download from specific servers. How can I use wget over a VPN?
P.S.: I tried use_proxy=yes -e http_proxy=[server]:[port], but it didn't work. I need to connect to a VPN server, not a proxy.
Install a VPN on your machine first, then run the command.
Proxies and VPNs are entirely different things. The proxy functionality won't be of any use to you here.
To use a VPN you have to set up a connection at the OS level (I assume Linux, but I could be wrong). The wget tool itself won't be involved; you'll just run it after your default route is replaced by the VPN connection (no need for any special flags).
As for how you set up the VPN connection, that differs a lot based on the particular details of your situation. It could involve running openvpn yourinfo.ovpn or something like that, or your VPN provider may offer a separate application that sets up the tunnel connection and then adjusts your OS's routing table so traffic flows through the tunnel instead of to the normal gateway. A minimal sketch of the first case follows.
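For example, with OpenVPN on Linux (client.ovpn is a hypothetical config file supplied by your VPN provider):
sudo openvpn --config client.ovpn --daemon    # bring up the tunnel in the background
ip addr show tun0                             # confirm the tunnel interface is up
wget https://example.com/file.tar.gz          # wget now follows the VPN route automatically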

Greenbone Security Assistant 7.0.3 Host HTTP Header

Recently I set up an Amazon EC2 instance running Ubuntu 16.04 that was authorized to scan an IP block. The version of GSA that I have installed is 7.0.3. Currently, I can access GSA locally through the EC2 instance or remotely using my public Amazon Elastic IP.
Additionally, I've allowed external access to GSA's listening port from my IP block. Currently, I can access GSA without any problems using my instance's static public IP over HTTPS.
The problem that I'm currently running into is accessing GSA using a FQDN.
For example, I want to be able to use https://gsa.mydomain.com
My local DNS server has an A record mapping the FQDN to my EC2 instance's public IP.
On my instance, I ran the following command.
sudo gsad --allow-header-host gsa.mydomain.com
However, browsing to https://gsa.mydomain.com produces the following error.
"The request contained an unknown or invalid Host header. If you are trying to access GSA via its hostname or a proxy, make sure GSA is set up to allow it."
Neither restarting the GSA service nor restarting my instance had any effect.
Clearly, DNS is working, but the host header option is not.
Any thoughts on how I can make this happen?
Additionally, for reference, I used the following URL
https://github.com/greenbone/gsa/pull/318
On Ubuntu/Debian, edit the /etc/default/openvas-gsa file and set ALLOW_HEADER_HOST=HOSTNAME,
where HOSTNAME is the host name shown in the browser's address bar.
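For the question above, that would look like the following (gsa.mydomain.com is the FQDN from the question); restart the service afterwards so the change takes effect:
# /etc/default/openvas-gsa
ALLOW_HEADER_HOST=gsa.mydomain.com
sudo systemctl restart greenbone-security-assistant.service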
I'm using Kali and was able to figure this out by modifying the systemd service files. Modify the file /lib/systemd/system/greenbone-security-assistant.service, adding --allow-header-host gsa.mydomain.com to the end of the ExecStart line.
For example, change the line from:
ExecStart=/usr/sbin/gsad --foreground --listen=<internal IP> --port=<configured web server port> --mlisten=<internal IP> --mport=<configured management port>
to:
ExecStart=/usr/sbin/gsad --foreground --listen=<internal IP> --port=<configured web server port> --mlisten=<internal IP> --mport=<configured management port> --allow-header-host gsa.mydomain.com
Then run:
systemctl daemon-reload
systemctl restart greenbone-security-assistant.service openvas-manager.service openvas-scanner.service
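Once the services are back up, you can verify the fix from another machine using the FQDN from the question (-k skips certificate validation, for the common case of a self-signed certificate):
curl -k https://gsa.mydomain.com
Getting the login page HTML instead of the "unknown or invalid Host header" error means the option took effect.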

Connection to Google Cloud SQL via proxy works in all scenarios except via socket in Docker container

Hopefully I'm doing something wrong; I've read all the documentation and scoured forums but can't seem to get to the bottom of an issue I'm experiencing. I'm on macOS, by the way.
Things that are working:
Connect to cloud SQL from local OS using proxy via either TCP or Socket
Connect to cloud SQL from local OS using proxy in container via TCP
Connect to cloud SQL from GKE using proxy in the same pod via TCP
Things that are not working:
Connect to cloud SQL from local OS using proxy in container via socket
Connect to cloud SQL from GKE using proxy in the same pod via socket
I suspect both of these problems are actually the same problem. I'm using this command to run the proxy inside of the container:
docker run -v [PATH]:/cloudsql \
gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy -dir=/cloudsql \
-instances=[INSTANCE_CONNECTION_NAME] -credential_file=/cloudsql/[FILE].json
The associated socket is being generated inside that directory. However, when I attempt to connect I get the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/cloudsql/node-sql:us-central1:nodedb' (61)
The proxy doesn't print a new log line when I try to connect, which makes me think it's not receiving the request; it simply says Ready for new connections and waits.
Any idea what's going wrong, or how I could troubleshoot this further?
For "Connect to cloud SQL from GKE using proxy in the same pod via socket" can you please follow the tutorial at https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine? We have a working WordPress example there that has the cloudsql-proxy as a sidecar container (i.e. in the same Pod, but over TCP).
I don't think you can do "in the same pod via socket" unless you’re running multiple processes in a single container (which you shouldn’t as a best practice). If you do a sidecar container, you can use TCP, so you don’t need a unix socket (moreover, I'm not sure how you’d share files between containers of a Pod).
Also, docker run -v /local.sock:/remote.sock will (I think) create a file/directory locally at /local.sock and make it available inside the container as /remote.sock. This might not work because the Docker engine doesn't know that /local.sock is meant to be a Unix socket, so it creates a regular file.
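If the socket route keeps failing, a TCP-only setup avoids the volume/socket question entirely. A minimal sketch using the placeholders from the question (the =tcp:0.0.0.0:3306 suffix tells the proxy to listen on a TCP port instead of creating a socket):
docker run -p 127.0.0.1:3306:3306 -v [PATH]:/cloudsql \
gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
-instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306 -credential_file=/cloudsql/[FILE].json
# then, from the host:
mysql -h 127.0.0.1 -P 3306 -u [USER] -p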

Configuring Postgresql as a service on OpenShift v3

I'm trying to configure a PostgreSQL pod on OpenShift 3 for external access and I'm unable to expose it to the outside world. I have created a route, but it does not respond on TCP port 5432 whenever I try to connect to the host over the internet.
The message I get is: "Is the server running on host "xxxxxxx.1d35.starter-us-east-1.openshiftapps.com" (xx.xx.xx.xx) and accepting TCP/IP connections on port 5432?"
Routes can only be used to expose HTTP/HTTPS servers, or, with TLS passthrough, services that terminate the secure connection themselves and whose clients support SNI over TLS.
For a database such as PostgreSQL you can, though, temporarily expose it to your local machine by using the oc port-forward command (a sketch follows the link below). You can find an interactive tutorial on port forwarding in the OpenShift interactive learning portal at:
https://learn.openshift.com/
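As a minimal sketch (the pod name and local port here are hypothetical; substitute your own):
oc get pods                                      # find the name of your PostgreSQL pod
oc port-forward postgresql-1-abcde 15432:5432    # forward local port 15432 to the pod's 5432
psql -h 127.0.0.1 -p 15432 -U myuser mydb        # connect through the forwarded port
The forward only lasts as long as the oc port-forward process is running.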
In OpenShift Online there is no way to expose a database service such as PostgreSQL permanently outside of the cluster. This is because exposing it would require admin access, which you don't have with OpenShift Online.

Connecting to Orion Context Broker from another machine

I can't connect to ContextBroker from another machine, even a machine in the same LAN.
Accessing via ssh works without any problem:
ssh geezar@192.168.1.115
and then
curl localhost:1026/statistics
the terminal shows the statistics, all right
<orion>
<xmlRequests>3</xmlRequests>
<jsonRequests>1</jsonRequests>
<updates>1</updates>
<versionRequests>1</versionRequests>
<statisticsRequests>2</statisticsRequests>
<uptime_in_secs>84973</uptime_in_secs>
<measuring_interval_in_secs>84973</measuring_interval_in_secs>
</orion>
But when I try without ssh connection...
curl 192.168.1.115:1026/statistics
curl: (7) Failed to connect to 192.168.1.115 port 1026: No route to host
I even forwarded port 1026 to that machine (192.168.1.115) in the router configuration and tried to access it from my public IP; the result is the same: failed to connect.
I think I am missing something, but.. what is it?
The most probable causes of this problem are:
Something on the host (e.g. a firewall or security group) is blocking the incoming connection
Something on the client (e.g. a firewall) is blocking the outgoing connection
Some other network issue is causing the connection problem.
EDIT: on GNU/Linux systems, iptables is usually used as the firewall. Its rules can typically be flushed (temporarily disabling filtering) by running iptables -F.
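A "No route to host" error often means the connection is being actively rejected by a firewall rule rather than the broker not listening. Both can be checked on the Context Broker host (1026 is the broker's port from the question above):
ss -tlnp | grep 1026                                   # confirm the broker listens on 0.0.0.0, not only 127.0.0.1
sudo iptables -L -n -v                                 # inspect the current rules; look for REJECT entries
sudo iptables -F                                       # flush all rules temporarily while testing
sudo iptables -A INPUT -p tcp --dport 1026 -j ACCEPT   # or allow just this port instead of flushing everything
If flushing the rules makes the remote curl work, re-enable the firewall with an explicit rule allowing TCP port 1026 as in the last line.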