Expose mongodb using cloudflare zero trust tunnels and connect via pymongo - mongodb

Hi, I am currently trying to set up a MongoDB instance on my home server and expose it to the internet using Cloudflare Tunnels.
I have a service up and running and have the following for the connection.
client = MongoClient('<DATABASE_URL>')
I get this error...
pymongo.errors.InvalidURI: Invalid URI scheme: URI must begin with 'mongodb://' or 'mongodb+srv://'
I am tunneling the default IP that MongoDB gives you.
UPDATE
I tested connecting to the db and just printing the database to the console. I got this result
Database(MongoClient(host=['<my_domain>:27107'], document_class=dict, tz_aware=False, connect=True), 'test_db')
I assume that because it says "connect=True" that means it is connecting to the database now.
I tried to add a collection to the database using an example I got online and this is the error I received...
Traceback (most recent call last):
File "/home/michael/mongo.py", line 18, in <module>
x = mycol.insert_one(mydict)
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/collection.py", line 628, in insert_one
self._insert_one(
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/collection.py", line 569, in _insert_one
self.__database.client._retryable_write(acknowledged, _insert_command, session)
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1475, in _retryable_write
with self._tmp_session(session) as s:
File "/home/michael/anaconda3/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1757, in _tmp_session
s = self._ensure_session(session)
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1740, in _ensure_session
return self.__start_session(True, causal_consistency=False)
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1685, in __start_session
self._topology._check_implicit_session_support()
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/topology.py", line 538, in _check_implicit_session_support
self._check_session_support()
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/topology.py", line 554, in _check_session_support
self._select_servers_loop(
File "/home/michael/anaconda3/lib/python3.9/site-packages/pymongo/topology.py", line 238, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: No servers found yet, Timeout: 30s, Topology Description: <TopologyDescription id: 63d172246419f5effc5e32d3, topology_type: Unknown, servers: [<ServerDescription ('<my_domain>', 27107) server_type: Unknown, rtt: None>]>
For reference, this is what my pymongo test file looks like.
mongo.py
import pymongo
con = pymongo.MongoClient("mongodb://<my_domain>:27107")
db = con["test_db"]
mycol = db["customers"]
print(mycol)
print(db)
mydict = { "name": "John", "address": "Highway 37" }
x = mycol.insert_one(mydict)

If it's a standard installation, you need to make sure the Cloudflare tunnel is exposing port 27017. The ingress rule must be:
tcp://localhost:27017
To connect, just use (see also the fuller sketch below):
pymongo.MongoClient("mongodb://user:psw@host.YourTLD/dbname")
It's a good idea to enable authentication if you're exposing the whole server to the internet. You can do it by configuring authentication on the MongoDB server, or at the Cloudflare Zero Trust edge by following this guide:
https://developers.cloudflare.com/cloudflare-one/tutorials/mongodb-tunnel/
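Putting that together, a minimal client-side sketch (user, psw and mongo.example.com are placeholders; authSource=admin assumes the user was created in the admin database):
import pymongo

# placeholders: replace user, psw and mongo.example.com with your own values
client = pymongo.MongoClient(
    "mongodb://user:psw@mongo.example.com:27017/?authSource=admin",
    serverSelectionTimeoutMS=5000,
)
print(client.server_info())  # raises if the tunnel or the server is unreachable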

I guess this is the case here:
you have a locally deployed MongoDB (not one on some external VM)
you've set up a Cloudflare tunnel in order to expose MongoDB over DNS
and you are having problems connecting to MongoDB using that DNS name
So I've recently been trying to do the same, and I got it working with these steps:
First off, make sure that your service type, in Cloudflare Zero Trust, is TCP
The URL is probably localhost; make sure you specified the port
Download cloudflared on the local machine you want to connect from (there are builds for Apple Silicon and for everything else)
Run this on the local machine that you want to connect from: cloudflared access tcp --hostname <hostname you've set on Cloudflare ZT> --url <url you want it to be forwarded to>. For example: cloudflared access tcp --hostname mongo.example.com --url localhost:3000
Then try to connect with your app to localhost:3000 (see the sketch after this answer).
How does this work?
Well, first you install the cloudflared service on the server, which forwards an encrypted connection from an app on your machine out to the internet.
You can protect access to that forwarded service/app using access rules. I also recommend protecting your app/service itself; you can do it from MongoDB or Cloudflare ZT, or both.
Then, you run the cloudflared app on your target machine to connect to the Cloudflare servers, which forward your MongoDB instance's connection to the specified port on your local machine, and you can access it as if it were a local deployment.
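Once cloudflared access is forwarding the tunnel to a local port, the pymongo side is just a local connection. A minimal sketch, assuming the localhost:3000 forward from the example above (adjust the port to whatever you chose):
import pymongo

# connect through the local cloudflared forward; 3000 is the forwarded port from
# the example above, not MongoDB's default 27017
client = pymongo.MongoClient("mongodb://localhost:3000", serverSelectionTimeoutMS=5000)
client.admin.command("ping")  # raises ServerSelectionTimeoutError if the tunnel or server is unreachable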

Related

no pg_hba.conf entry for host / Connect call failed / Invalid data directory for cluster 12 main - Postgresql Ubuntu

I'm trying to move my bot to an Ubuntu virtual server from Vultr but it's having a problem connecting to the postgres database. I've tried editing the config from md5 to true, and host to local, etc. But those only give me different errors and also make it stop working on my original machine too. It's working perfectly fine on my Windows machine. Here is the error I'm facing:
asyncpg.exceptions.InvalidAuthorizationSpecificationError: no pg_hba.conf entry for host "[local]", user "postgres", database "xxx", SSL off
So I've tried to change this line:
async def create_db_pool():
    bot.pg_con = await asyncpg.create_pool(database='xxx', user='postgres', password='???')
to this:
async def create_db_pool():
    bot.pg_con = await asyncpg.create_pool(database='xxx', user='postgres', password='???', ssl=True)
and that gives me this error:
asyncpg.exceptions._base.InterfaceError: `ssl` parameter can only be enabled for TCP addresses, got a UNIX socket path: '/run/postgresql/.s.PGSQL.5432'
So I don't know what else to try. I've been stuck on this for a while. If it's relevant, it connects at the bottom of the bot.py file like this:
bot.loop.run_until_complete(create_db_pool())
Whether ssl is True or not, the database seems to still function on my Windows machine. But I can't get it to work on my Ubuntu virtual server.
If I edit my config to this:
# TYPE DATABASE USER ADDRESS METHOD
# IPv4 local connections:
host all all 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::/0 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
host replication all 0.0.0.0/0 md5
host replication all ::/0 md5
Then I get a call error like this:
OSError: Multiple exceptions: [Errno 111] Connect call failed ('::1', 5432, 0, 0), [Errno 111] Connect call failed ('127.0.0.1', 5432)
This is really driving me crazy. I have no idea what to do. I bought this virtual server to host my bot on but I can't even get it to connect to the database.
When I simply type psql in the terminal, I get this error:
Error: Invalid data directory for cluster 12 main
Postgres is not working as intended in basically any way. I'm using Vultr.com to host the Ubuntu server, if that matters. And connecting with PuTTy.
Your pg_hba.conf has multiple syntax errors. The "localhost" connection type is not allowed at all, and the "local" connection type does not accept an IP address field. The server would refuse to start/restart with the file you show, and if you try to reload a running server it will just keep using the previous settings.
LOG: invalid connection type "localhost"
CONTEXT: line 4 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
LOG: invalid authentication method "127.0.0.1/32"
CONTEXT: line 5 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
LOG: invalid authentication method "::1/128"
CONTEXT: line 9 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
LOG: invalid connection type "localhost"
CONTEXT: line 10 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
LOG: invalid authentication method "127.0.0.1/32"
CONTEXT: line 102 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
FATAL: could not load pg_hba.conf
LOG: database system is shut down
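On the asyncpg side, a minimal sketch (the credentials and database name are the placeholders from the question) of forcing a TCP connection instead of the UNIX socket, so the "host" rules in pg_hba.conf, and the ssl parameter, actually apply:
import asyncio
import asyncpg

async def create_db_pool():
    # placeholder credentials from the question; host="127.0.0.1" forces TCP,
    # so the "host ... md5" entries in pg_hba.conf (and ssl=...) are the ones used
    return await asyncpg.create_pool(
        host="127.0.0.1", port=5432,
        database="xxx", user="postgres", password="???",
    )

pool = asyncio.run(create_db_pool())  # in the bot this would be bot.loop.run_until_complete(...)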

failed to run jupyterhub when changing default port

I'm trying to set up JupyterHub. Port 8000 is used by a different program, so I have to use a different port.
I changed the file /etc/jupyterhub/jupyterhub_config.py to add/uncomment:
c.JupyterHub.hub_port = 9003
c.JupyterHub.ip = '111.111.11.1'
c.JupyterHub.port = 9002
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:9000'
When I tried to run jupyterhub, I got this error:
[W 2020-06-03 14:48:48.930 JupyterHub proxy:554] Stopped proxy at pid=47639
[W 2020-06-03 14:48:48.932 JupyterHub proxy:643] Running JupyterHub without SSL. I hope there is SSL termination happening somewhere else...
[I 2020-06-03 14:48:48.932 JupyterHub proxy:646] Starting proxy @ http://111.111.11.1:9002/
14:48:49.301 [ConfigProxy] info: Proxying http://111.111.11.1:9002 to (no default)
14:48:49.307 [ConfigProxy] info: Proxy API at http://127.0.0.1:9000/api/routes
14:48:49.315 [ConfigProxy] error: Uncaught Exception
[E 2020-06-03 14:48:49.437 JupyterHub app:2718]
Traceback (most recent call last):
File "/home/user/miniconda/2020.02/python/3.7/lib/python3.7/site-packages/jupyterhub/app.py", line 2716, in launch_instance_async
await self.start()
File "/home/user/miniconda/2020.02/python/3.7/lib/python3.7/site-packages/jupyterhub/app.py", line 2524, in start
await self.proxy.get_all_routes()
File "/home/user/miniconda/2020.02/python/3.7/lib/python3.7/site-pack#c.JupyterHub.hub_ip = '127.0.0.1'
ages/jupyterhub/proxy.py", line 806, in get_all_routes
resp = await self.api_request('', client=client)
File "/home/user/miniconda/2020.02/python/3.7/lib/python3.7/site-packages/jupyterhub/proxy.py", line 774, in api_request
result = await client.fetch(req)
tornado.httpclient.HTTPClientError: HTTP 403: Forbidden
What is the correct way to install jupyterhub on a port other than 8000?
Thanks.
I think some of these parameters are now obsolete, so it may depend which version you are running, but I'll assume JupyterHub 1.0+.
There are a few different services that make up JupyterHub, and the 'hub' service, confusingly, is not actually the one you are concerned with. The proxy is the main entrypoint to the application, and it proxies traffic to the hub by default, and to specific user Jupyter servers if the traffic is to a /user/ URL.
In addition, the 'hub' service also has an API endpoint that user servers can access directly (this doesn't go through the proxy). And the proxy has an extra API endpoint too, for direct access from the hub...
It is the proxy service that defaults to port 8000. To change to 80, for example try this:
## The public facing URL of the whole JupyterHub application.
#
# This is the address on which the proxy will bind. Sets protocol, ip, base_url
c.JupyterHub.bind_url = 'https://0.0.0.0:80'
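Applied to the asker's setup, a sketch of jupyterhub_config.py reusing the IP and port values from the question (they are the asker's values, not defaults):
# sketch of /etc/jupyterhub/jupyterhub_config.py with the question's values
c.JupyterHub.bind_url = 'http://111.111.11.1:9002'         # public entrypoint (the proxy), instead of 8000
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:9000'  # the proxy's internal REST API
c.JupyterHub.hub_port = 9003                                # the hub's internal port behind the proxy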

pymongo basic functions not working [duplicate]

I was following a tutorial from "Black Hat Python" and got a "the requested address is not valid in its context" error. I'm on Python version 2.7.12.
This is my code:
import socket
import threading

bind_ip = "184.168.237.1"
bind_port = 21

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((bind_ip, bind_port))
server.listen(5)

print "[*] Listening on %s:%d" % (bind_ip, bind_port)

def handle_client(client_socket):
    request = client_socket.rev(1024)
    print "[*] Recieved: %s" % request
    client_socket.close()

while True:
    client, addr = server.accept()
    print "[*] Accepted connection from: %s:%d" % (addr[0], addr[1])
    client_handler = threading.Thread(target=handle_client, args=(client,))
    client_handler.start()
and this is my error:
Traceback (most recent call last):
File "C:/Python34/learning hacking.py", line 9, in <module>
server.bind((bind_ip,bind_port))
File "C:\Python27\lib\socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 10049] The requested address is not valid in its context
>>>
You are trying to bind to an IP address that is not actually assigned to your network interface:
bind_ip = "184.168.237.1"
See the Windows Sockets Error Codes documentation:
WSAEADDRNOTAVAIL 10049
Cannot assign requested address.
The requested address is not valid in its context. This normally results from an attempt to bind to an address that is not valid for the local computer.
That may be an IP address that your router is listening to before using NAT (network address translation) to talk to your computer, but that doesn't mean your computer sees that IP address at all.
Either bind to 0.0.0.0, which will use all available IP addresses (both localhost and any public addresses configured):
bind_ip = "0.0.0.0"
or use any address that your computer is configured for; run ipconfig /all in a console to see your network configuration.
You probably also don't want to use ports < 1024; those are reserved for processes running as root only. You'll have to pick a higher number than that if you want to run an unprivileged process (and in the majority of tutorial programs, that is exactly what you want):
port = 5021 # arbitrary port number higher than 1023
I believe the specific tutorial you are following uses BIND_IP = '0.0.0.0' and BIND_PORT = 9090.
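Putting those suggestions together, a minimal sketch of the bind (0.0.0.0 and port 9090 are the values mentioned above; the print call is written so it runs under both Python 2 and 3):
import socket

bind_ip = "0.0.0.0"   # listen on all local interfaces
bind_port = 9090      # an unprivileged port, per the tutorial values above

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((bind_ip, bind_port))
server.listen(5)
print("[*] Listening on %s:%d" % (bind_ip, bind_port))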
I was just getting this error while following this Python TCP example and the solution was to have my client connect using 'localhost' instead of '0.0.0.0'.

No Response, fetching data from remote Server : Using Sphinx SetServer

I'm using Sphinx 2.0.5-release on both servers.
Both servers have the same indexes. I have searchd running on both servers, but I would like to fetch data of Server 1 from Server 2.
I used this particular code:
$cl = new SphinxClient;
$cl->SetServer(remote_sphinx_server, 9312); // remote_sphinx_server: IP address of the 2nd server
$cl->SetMatchMode(SPH_MATCH_EXTENDED);
$result = $cl->Query("", "$indexer");
But I don't get any response.
I'm getting the error: connection to "Server 2 IP:9312" failed (errno=113, msg=No route to host)
If I use the code below:
$cl = new SphinxClient;
$cl->SetMatchMode(SPH_MATCH_EXTENDED);
$result = $cl->Query("","$indexer");
I get a proper response, as the data is coming from the local Sphinx.
What can be the problem fetching data from the remote server? Any help is very much appreciated.
Thank you
you may have multiple network interfaces on Server 2 and you are using one of the IPs that is not reachable by Server 1
check if the firewall allows communication on port 9312 (a quick connectivity check is sketched below)
check if searchd actually runs on Server 2. Also, by default, searchd opens the port on all available interfaces unless specified otherwise. Check searchd.log to see if it reports any error about opening the port(s).
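A quick way to test reachability of port 9312 from Server 1 before involving the Sphinx client at all (a small sketch; the IP below is a placeholder for Server 2's address):
import socket

# placeholder IP; substitute Server 2's real address
sock = socket.create_connection(("203.0.113.10", 9312), timeout=5)
print("port 9312 on Server 2 is reachable")
sock.close()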

How to specify port for PostgreSQL?

I am lost. I have a localhost database (PostgreSQL) and I have to add a port for the connection (in app.config -- the connection string). I already tried:
localhost:port
localhost,port
(localhost),port
(local),port
None of these work; every time I got the error "The requested name is valid, but no data of the requested type was found" thrown by System.Net.Dns.InternalGetHostByName with the message "cannot open connection".
So how do you specify the port? I checked this on a computer with just a single instance of the DB server, so the port could be omitted, and then it works. But I need to add the port.
Update
<add key="ConnectionString" value="Server=localhost;
Port=5434;
Database=XXXXXXX;Initial Catalog=XXXXXXXXX;
UserID=XXXXX;Password=XXXXX;Encoding=UNICODE;" />
Now it works with both "localhost" and "127.0.0.1" (direct IP).
Use a separate keyword for the port:
Server=127.0.0.1;Port=...;User Id=...;Password=...;Database=...;