How to connect to a peerjs server from an external host

I'm trying to use a Raspberry Pi 4 as a simple peerjs server.
Here's the command, which works fine locally: peerjs --port 9000 --key peerjs --path /videocallapp
I opened the port on my router.
But I can't connect to it from my simple peerjs JavaScript client, configured like this: host: '192.***.*.**', port: 9000, path: '/videocallapp'
The host here is the IP of the Raspberry Pi.
Can you please help me?
What I specifically don't know is whether the host IP written in the client is wrong, or whether the port is set up incorrectly.

I solved it. Here's the working configuration:
server: peerjs --port 9000 --key peerjs --path /videocallapp
router: Web Server (HTTP), internal: 9000, external: 80, protocol: TCP, device: raspberrypi
client: host: your public IP (check it at https://canyouseeme.org/), port: 80

Related

FastAPI + Uvicorn won't accept external connections

I planned to deploy OpenSearch with Python FastAPI + Uvicorn.
I uploaded my FastAPI Python script and started the server with the uvicorn command, but only internal connections work; external ones do not.
This is my environment and options.
CentOS7
python 3.9
fastapi 0.79.1
uvicorn 0.16.0
CORS settings:
origins = [
    '*'
]
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=False,
    allow_methods=["POST"],
    allow_headers=["*"]
)
uvicorn options:
host: 0.0.0.0
port: 5650
Port state:
Port 5650 shows LISTEN when I check with netstat -tulnp,
and firewalld is disabled.
External connections work for the OpenSearch node and Dashboards (ports 9200 and 5601).
Only FastAPI + Uvicorn can't be reached externally...
Sorry for my limited English...
I need some help...
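A minimal sketch of how the pieces described above fit together, for reference; the module name, route, and exact launch command are assumptions, since the question only shows the CORS and host/port options:

# main.py (hypothetical module name)
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

origins = ['*']
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=False,
    allow_methods=["POST"],
    allow_headers=["*"]
)

@app.post("/ping")  # placeholder endpoint, not from the question
def ping():
    return {"ok": True}

# launched so that uvicorn binds all interfaces on port 5650:
#   uvicorn main:app --host 0.0.0.0 --port 5650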

Error while proxying request with kubectl proxy

I'm trying to follow this documentation https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/.
After running this command
kubectl proxy --port=8080 &
I get the output
Starting to serve on 127.0.0.1:8080
However, when I run "curl http://localhost:8080/api/" to hit the server, I get this response:
dial tcp: lookup localhost: no such host
Any ideas why I would get this response?
EDIT:
I'm using a VPN to connect to the corporate network. When I disable the VPN, I still get the same message for both localhost and 127.0.0.1 (the exact same message for both).
I'm not using kubeadm.
When I run host localhost I get this output:
localhost has address 127.0.0.1
localhost has IPv6 address ::1
cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
It is possible that you have http_proxy set on your system, so try curl with --noproxy as follows:
curl --noproxy '*' http://localhost:8080
If you want to check whether http_proxy is set:
echo $http_proxy
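If it's easier to test from Python than with curl, here's a rough equivalent that deliberately bypasses any proxy environment variables (the /api/ path is the one from the tutorial; nothing else is assumed):

# no_proxy_check.py -- hit the kubectl proxy while ignoring http_proxy/https_proxy
import urllib.request

# An empty ProxyHandler disables any proxy picked up from the environment.
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))

with opener.open("http://127.0.0.1:8080/api/") as resp:
    print(resp.status)
    print(resp.read().decode())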

SSH Tunnel for MongoDb Connection Within VPC

I am attempting to tunnel from my localhost (on port 24000) via a Bastion box to my mongo instance (on 27017) that is only available via the VPC private subnet so that I may develop locally whilst connected to the staging db. Using this tunnel command on my OSX box:
ssh -A -L 24000:ip-10-0-11-11.ec2.internal:27017 ec2-3-211-555-333.compute-1.amazonaws.com -N -v
"ip-10-0-11-11.ec2.internal" is the mongo box.
"ec2-3-211-555-333.compute-1.amazonaws.com" is the bastion box.
The aim is to bind local port 24000 to the bastion, and from there to the mongo box on 27017.
However upon trying to connect via the tunnel from my local box with:
mongo -u dbUser localhost:24000/db-name
The connection is timing out. Below is the verbose output from the ssh tunnel command (presumably from the bastion?).
debug1: channel 3: free: direct-tcpip: listening port 24000 for ip-10-0-11-11.ec2.internal port 27017, connect from 127.0.0.1 port 63451 to 127.0.0.1 port 24000, nchannels 4
channel 4: open failed: connect failed: Connection timed out
It seems to be trying to work, but it just isn't. Any and all help would be appreciated! I do have SSH forwarding enabled on the bastion via the sshd config. I can also connect to the mongo instance with no problem while on the bastion.
Circling back... I'm not sure how I got it working or why it wasn't working before, but for anyone looking at this later: the ssh command below, which opens a tunnel while forwarding the keys in your ssh-agent, is indeed the way to do it.
ssh -A -L 24000:ip-10-0-20-141.ec2.internal:27017 ec2-54-165-159-177.compute-1.amazonaws.com -N -v
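With the tunnel in place, local code connects through localhost:24000; a rough sketch with pymongo, where the password is a placeholder and dbUser/db-name are just the values from the mongo shell command above:

# connect_via_tunnel.py -- talk to the remote MongoDB through the local SSH tunnel
from pymongo import MongoClient

# The tunnel exposes the private mongod on localhost:24000.
client = MongoClient(
    "mongodb://dbUser:<password>@localhost:24000/db-name",  # <password> is a placeholder
    serverSelectionTimeoutMS=5000,
)
print(client.server_info()["version"])  # fails fast if the tunnel or credentials are wrong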

python web: Safari can't connect to the server

I used a Vagrant Ubuntu server (16.04) at 127.0.0.1, port 2222, for developing a web application. The test code (app.py) is as follows:
import logging; logging.basicConfig(level=logging.INFO)
import asyncio, os, json, time
from datetime import datetime
from aiohttp import web

def index(request):
    return web.Response(body=b'<h1>Awesome</h1>')

@asyncio.coroutine
def init(loop):
    app = web.Application(loop=loop)
    app.router.add_route('GET', '/', index)
    srv = yield from loop.create_server(app.make_handler(), '127.0.0.1', 2222)
    logging.info('server started at http://127.0.0.1:9000...')
    return srv

loop = asyncio.get_event_loop()
loop.run_until_complete(init(loop))
loop.run_forever()
After I run the code on the Ubuntu server, I try to open the app in the browser, but there is no response, only the error "Safari can't connect to the server".
The issue is that the server is listening on IP 127.0.0.1 inside the VM, so you will not be able to access it from your host.
If you want to access it from your host browser, you'd need to run your server on a dedicated IP or on 0.0.0.0, so change it to
srv = yield from loop.create_server(app.make_handler(), '0.0.0.0', 9000)
then make sure to forward this port from your Vagrantfile:
config.vm.network "forwarded_port", guest: 9000, host: 9000
and you'll be able to access it on http://localhost:9000 from your host.
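Putting that change into the init function from the question, it would look roughly like this (keeping the old-style @asyncio.coroutine syntax the asker used):

@asyncio.coroutine
def init(loop):
    app = web.Application(loop=loop)
    app.router.add_route('GET', '/', index)
    # Bind to 0.0.0.0 so the server is reachable from outside the VM;
    # port 9000 matches the forwarded_port entry in the Vagrantfile.
    srv = yield from loop.create_server(app.make_handler(), '0.0.0.0', 9000)
    logging.info('server started at http://0.0.0.0:9000...')
    return srv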
The default port 2222 is port 22 forwarded from the host machine for SSH. Are you sure the forwarded_port setting in your Vagrantfile is correct? Also check on your host machine whether the port you want to open is really open, using sudo netstat -ntlp; the Vagrant port will usually show a PID belonging to VBoxHeadless.

Check if port is open from the server itself

I have a CentOS 6.5 Apache server. It is on a private LAN (it has a private IP, 10.x.x.x) and is linked to a domain name. If I test port 443 against the domain name from external web tools, it appears to be blocked, but I want to understand whether it's blocked by a firewall outside the server or whether it's down to the server's own configuration. Is there any way to check whether the port is open from the server itself?
The iptables firewall is empty.
You could simply try to telnet from the server to itself.
So if you want to check whether port 443 is responding, run:
telnet localhost 443
if the response is
telnet: Unable to connect to remote host: Connection refused
then there's probably nothing listening on that port.
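If telnet isn't installed on the box, a quick Python 3 sketch of the same check (the host and port are just the ones from the example above):

# port_check.py -- rough equivalent of `telnet localhost 443`
import socket

try:
    with socket.create_connection(("localhost", 443), timeout=3):
        print("something is listening on port 443")
except OSError as exc:  # e.g. ConnectionRefusedError when nothing is listening
    print("connection failed:", exc)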