MAMP on macOS Ventura 13.2 (Apple Silicon M2) can't use port 80?

I'm trying to use MAMP (v6.7) with Apache on port 80.
Error: The port 80 is already in use
The Mac is a brand-new install with nothing else on it: macOS 13.2 on an Apple Silicon M2.
If I try another port such as 8080 or 8888 it works fine, but I need port 80.
sudo lsof -P -n -iTCP -sTCP:LISTEN
rapportd 398 eddy 4u IPv4 0xe87a21396cd21eb9 0t0 TCP *:52332 (LISTEN)
rapportd 398 eddy 8u IPv6 0xe87a2134a3c715b1 0t0 TCP *:52332 (LISTEN)
ControlCe 426 eddy 5u IPv4 0xe87a21396cd413a9 0t0 TCP *:7000 (LISTEN)
ControlCe 426 eddy 6u IPv6 0xe87a2134a3c706b1 0t0 TCP *:7000 (LISTEN)
ControlCe 426 eddy 7u IPv4 0xe87a21396cd429c9 0t0 TCP *:5000 (LISTEN)
ControlCe 426 eddy 8u IPv6 0xe87a2134a3c70e31 0t0 TCP *:5000 (LISTEN)
cloud-dri 550 eddy 50u IPv4 0xe87a21396cd47739 0t0 TCP 127.0.0.1:49156 (LISTEN)
cloud-dri 555 eddy 4u IPv4 0xe87a21396cd45609 0t0 TCP 127.0.0.1:49154 (LISTEN)
cloud-dri 567 eddy 4u IPv4 0xe87a21396cd3bfe9 0t0 TCP 127.0.0.1:49158 (LISTEN)
cloud-dri 567 eddy 50u IPv4 0xe87a21396cd47739 0t0 TCP 127.0.0.1:49156 (LISTEN)
mysqld 4435 eddy 31u IPv6 0xe87a2134a3c76fb1 0t0 TCP *:3306 (LISTEN)
I have also tried the new "Indigo Stack" app with Apache on port 80 and get exactly the same error: port 80 is already in use.
How can I find out what is listening on port 80?
Any ideas? Many thanks in advance.

On my M1 Pro MacBook Pro running macOS 13.2, I'm having the same issue.
As a temporary workaround, I've been able to get MAMP running on port 80 by using the scripts in the MAMP directory:
Start: /Applications/MAMP/bin/start.sh (this is the same as clicking Start in MAMP)
Stop: /Applications/MAMP/bin/stop.sh (this is the same as clicking Stop in MAMP)
There are a few other scripts in /Applications/MAMP/bin/ that might be worth checking out.
Side note: I'm following this reported issue https://bugs.mamp.info/view.php?id=9913 for an official fix (I had to sign up to view it :/ but there are a few more reports of the same issue this week, so it's not an isolated one).
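If it helps anyone else narrow this down, a quick way to ask lsof specifically about port 80 (run it with sudo, since a root-owned daemon won't show up in an unprivileged listing) is something like:
sudo lsof -nP -iTCP:80 -sTCP:LISTEN
# if nothing shows up there, it may be worth checking whether macOS's bundled Apache got loaded
sudo launchctl list | grep -i httpd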

Related

HAProxy creates thousands of connections with itself

I'm not an expert in HAProxy. What I see is that, over time, HAProxy seems to accumulate (tens of) thousands of TCP sessions, and the source seems to be the same host as the server...?
Why is it creating sessions on ports other than 8123, the port it is bound to?
frontend tcp_front
    bind *:8123
    mode tcp
    default_backend host_sub5

backend host_sub5
    mode tcp
    server node2 0.0.0.0:8123 check
show sess (this is a tiny fraction, but the ports seem to grow sequentially, like a netscan):
haproxy 772263 haproxy *263u IPv4 626477013 0t0 TCP 127.215.21.22:38423->127.215.21.22:8123 (ESTABLISHED)
haproxy 772263 haproxy *264u IPv4 626477014 0t0 TCP 127.215.21.22:8123->127.215.21.22:38423 (ESTABLISHED)
haproxy 772263 haproxy *265u IPv4 626477016 0t0 TCP 127.215.21.22:38435->127.215.21.22:8123 (ESTABLISHED)
haproxy 772263 haproxy *266u IPv4 626477035 0t0 TCP 127.215.21.22:8123->127.215.21.22:38435 (ESTABLISHED)
haproxy 772263 haproxy *267u IPv4 626477037 0t0 TCP 127.215.21.22:38437->127.215.21.22:8123 (ESTABLISHED)
haproxy 772263 haproxy *268u IPv4 626477041 0t0 TCP 127.215.21.22:8123->127.215.21.22:38437 (ESTABLISHED)
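For reference, show sess is an HAProxy runtime command, so it is typically issued over the admin/stats socket; a minimal invocation, assuming a stats socket is configured at the path shown below, looks like:
# requires something like "stats socket /run/haproxy/admin.sock mode 660 level admin" in the global section
echo "show sess" | socat stdio /run/haproxy/admin.sock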

Cannot access exposed deployment/pod in Kubernetes

I want to start out by saying that I do not know the exact architecture of the servers involved. All I know is that they are Ubuntu machines in the cloud.
I have set up a 1 master/1 worker k8s cluster using two servers.
kubectl cluster-info gives me:
Kubernetes master is running at https://10.62.194.4:6443
KubeDNS is running at https://10.62.194.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I have created a simple deployment as such:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
Which spins up an nginx pod exposed on container port 80.
I have exposed this deployment using:
kubectl expose deployment nginx-deploy --type NodePort
When I run kubectl get svc, I get:
NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-deploy   NodePort   10.99.103.239   <none>        80:30682/TCP   29m
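For reference, the full Service object that the expose command generated can be inspected with:
kubectl get svc nginx-deploy -o yaml
# its selector should be run: nginx, matching the pod template labels above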
kubectl get pods -o wide gives me:
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-deploy-7c45b84548-ckqzb   1/1     Running   0          33m   192.168.1.5   myserver1   <none>           <none>
nginx-deploy-7c45b84548-vl4kh   1/1     Running   0          33m   192.168.1.4   myserver1   <none>           <none>
Since I exposed the deployment using NodePort, I was under the impression that I could access the deployment at <NodeIP>:<NodePort>.
The Node IP of the worker node is 10.62.194.5 and when I try to access http://10.62.194.5:30682 I do not get the nginx landing page.
One part I do not understand: when I run kubectl describe node myserver1, in the long output I can see:
Addresses:
InternalIP: 10.62.194.5
Hostname: myserver1
Why does it say InternalIP? I can ping this IP.
EDIT:
Output of sudo lsof -i -P -n | grep LISTEN
systemd-r 846 systemd-resolve 13u IPv4 24990 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 1157 root 3u IPv4 30168 0t0 TCP *:22 (LISTEN)
sshd 1157 root 4u IPv6 30170 0t0 TCP *:22 (LISTEN)
xrdp-sesm 9840 root 7u IPv6 116948 0t0 TCP [::1]:3350 (LISTEN)
xrdp 9862 xrdp 11u IPv6 117849 0t0 TCP *:3389 (LISTEN)
kubelet 51562 root 9u IPv4 560219 0t0 TCP 127.0.0.1:42735 (LISTEN)
kubelet 51562 root 24u IPv4 554677 0t0 TCP 127.0.0.1:10248 (LISTEN)
kubelet 51562 root 35u IPv6 558616 0t0 TCP *:10250 (LISTEN)
kube-prox 52427 root 10u IPv4 563401 0t0 TCP 127.0.0.1:10249 (LISTEN)
kube-prox 52427 root 11u IPv6 564298 0t0 TCP *:10256 (LISTEN)
kube-prox 52427 root 12u IPv6 618851 0t0 TCP *:30682 (LISTEN)
bird 52925 root 7u IPv4 562993 0t0 TCP *:179 (LISTEN)
calico-fe 52927 root 3u IPv6 562998 0t0 TCP *:9099 (LISTEN)
Output of ss -ntlp | grep 30682
LISTEN 0 128 *:30682 *:*
As far as I understand, you are trying to access 10.62.194.5 from a host in a different subnet, for example your own terminal. In Azure, I would guess each node has both a public IP and a private IP. So if you are trying to reach the Kubernetes Service from your terminal, you should use the node's public IP together with the NodePort, and also make sure that port is open in your Azure firewall.
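A couple of quick checks that usually narrow this down (the public IP below is just a placeholder):
# the Endpoints object should list the two pod IPs (192.168.1.4 and 192.168.1.5) on port 80
kubectl get endpoints nginx-deploy
# curl the NodePort from the worker node itself; this should return the nginx welcome page
curl -s http://127.0.0.1:30682 | head
# from your own terminal, use the node's public IP once the port is open in the Azure firewall
curl -s http://203.0.113.10:30682 | head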

iptables are not forwarding any traffic to HAProxy

I've got the following problem:
My router (a FritzBox) is set to forward all incoming traffic (via its exposed-host setting) to my server (192.168.0.1).
I have HAProxy running in an LXC container (192.168.0.100), which forwards HTTP traffic to some other LXC containers; this part is working fine.
Here is the problem. When I run the following command (curl directly against my proxy), I get the right answer back:
curl --verbose --header 'Host: myrealdomain.tld' http://192.168.0.100
* Rebuilt URL to: http://192.168.0.100/
* Trying 192.168.0.100...
* Connected to 192.168.0.100 (192.168.0.100) port 80 (#0)
> GET / HTTP/1.1
> Host: murdr.eu
> User-Agent: curl/7.47.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html
<
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
* Closing connection 0
But when running the same command against my server (which should forward the traffic to the proxy), I can't connect because the connection is refused:
curl --verbose --header 'Host: myrealdomain.tld' http://192.168.0.1
* Rebuilt URL to: http://192.168.0.1/
* Trying 192.168.0.1...
* connect to 192.168.0.1 port 80 failed: Connection refused
* Failed to connect to 192.168.0.1 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.0.1 port 80: Connection refused
(myrealdomain.tld is a placeholder; I replaced my real domain here for security reasons.)
Here are my iptables (I've tested various things and flushed them often, but nothing was working).
I've cleared them now; better to start fresh.
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
iptables -S (as asked by Luke Mlsna)
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
apache2 was running on my server, but I removed it after setting up the proxy container and the iptables rules.
Here are the open ports; there is no port 80:
lsof -i -P -n
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd-n 938 systemd-network 19u IPv6 33240 0t0 UDP [fe80::f64d:30ff:fe66:8010]:546
systemd-r 980 systemd-resolve 12u IPv4 22967 0t0 UDP 127.0.0.53:53
systemd-r 980 systemd-resolve 13u IPv4 22968 0t0 TCP 127.0.0.53:53 (LISTEN)
nmbd 1108 root 15u IPv4 22474 0t0 UDP *:137
nmbd 1108 root 16u IPv4 22475 0t0 UDP *:138
nmbd 1108 root 17u IPv4 38559 0t0 UDP 192.168.0.1:137
nmbd 1108 root 18u IPv4 38560 0t0 UDP 192.168.1.255:137
nmbd 1108 root 19u IPv4 38561 0t0 UDP 192.168.0.1:138
nmbd 1108 root 20u IPv4 38562 0t0 UDP 192.168.1.255:138
sshd 1200 root 3u IPv4 25135 0t0 TCP *:22 (LISTEN)
sshd 1200 root 4u IPv6 25137 0t0 TCP *:22 (LISTEN)
lxd 1273 root 13u IPv6 27850 0t0 TCP *:8443 (LISTEN)
mysqld 1501 mysql 39u IPv4 27943 0t0 TCP 127.0.0.1:3306 (LISTEN)
smbd 3606 root 32u IPv6 37803 0t0 TCP *:445 (LISTEN)
smbd 3606 root 33u IPv6 37804 0t0 TCP *:139 (LISTEN)
smbd 3606 root 34u IPv4 37805 0t0 TCP *:445 (LISTEN)
smbd 3606 root 35u IPv4 37806 0t0 TCP *:139 (LISTEN)
sshd 6140 root 3u IPv4 59450 0t0 TCP 192.168.0.1:22->192.168.0.43:62339 (ESTABLISHED)
sshd 6350 unicorn 3u IPv4 59450 0t0 TCP 192.168.0.1:22->192.168.0.43:62339 (ESTABLISHED)
I'm now sending my traffic directly from the router to HAProxy, with no server in between.
Working like a charm!
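For reference, if you did still want the host (192.168.0.1) to forward port 80 on to the HAProxy container, a typical DNAT setup would look roughly like the sketch below (assuming eth0 is the interface facing the router; adjust names and re-test on your own setup):
# allow the kernel to forward packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1
# rewrite the destination of incoming port-80 traffic to the HAProxy container
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.100:80
# let the forwarded traffic through, and make sure replies return via the host
sudo iptables -A FORWARD -p tcp -d 192.168.0.100 --dport 80 -j ACCEPT
sudo iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.100 --dport 80 -j MASQUERADE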

Windows Firewall Inbound Rules not matching netstat listening ports

I'm not a firewall expert, so I need some help understanding the difference between my Windows Firewall rules and what netstat is displaying. Some computers at my company only allow inbound traffic on a handful of ports due to regulations; all other ports are blocked by default.
For example, one computer might allow TCP 20, 21, 23, 80, 443, 445, and 3389.
When I run a netstat command, however, I see many "listening" ports that should not be allowed:
Proto  Local Address          Foreign Address        State
TCP 0.0.0.0:5985 0.0.0.0:0 LISTENING
TCP 0.0.0.0:9001 0.0.0.0:0 LISTENING
TCP 0.0.0.0:9002 0.0.0.0:0 LISTENING
TCP 0.0.0.0:16992 0.0.0.0:0 LISTENING
TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49152 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49153 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49154 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49155 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49156 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49166 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49178 0.0.0.0:0 LISTENING
I need some help understanding why the two do not agree... Are these programs listening on their ports while the firewall simply won't allow any traffic through to them?
Thank you.
The inbound firewall rules prevent hosts from successfully connecting to ports on the local system. These can be written to prevent external hosts from connecting (typical) and can even be written to prevent localhost from connecting (unusual). The firewall does not prevent a local program from running or binding to a listening port.
Netstat has nothing to do with this. Netstat reports which ports are Listening, Established, SYN_Received, etc. The firewall does nothing to prevent local programs from listening on ports on any interface.

MongoDB "no route to host" while running mongo on a local machine

I have installed MongoDB on a local machine by following this tutorial and this one as well. I used my local user (with sudo for all commands) and then ran:
sudo service mongod start
It says "start: Job is already running: mongod". Then when I run this command:
sudo mongo
I get:
MongoDB shell version: 2.6.0
connecting to: test
2014-07-08T12:33:40.360+0200 warning: Failed to connect to 127.0.0.1:27017, reason: errno:113 No route to host
2014-07-08T12:33:40.361+0200 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
This is also the output of netstat -tpln:
(No info could be read for "-p": geteuid()=1000 but you should be root.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      -
This is also the output of sudo /sbin/iptables -L -n:
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5432
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:8080
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:8443
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:3306
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmptype 255
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
ACCEPT tcp -- 127.0.0.1 0.0.0.0/0 tcp dpt:27017 state NEW,ESTABLISHED
ACCEPT tcp -- 127.0.0.1 0.0.0.0/0 tcp dpt:27017 state NEW,ESTABLISHED
ACCEPT tcp -- 127.0.0.1 0.0.0.0/0 tcp dpt:27017 state NEW,ESTABLISHED
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 127.0.0.1 tcp spt:27017 state ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 127.0.0.1 tcp spt:27017 state ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 127.0.0.1 tcp spt:27017 state ESTABLISHED
I have followed several proposed solutions, but none of them worked. Any suggestions?
This is most likely a firewall issue in your distro. Based on the output from iptables, the mongod process is there listening on port 27017, but you need to get rid of this firewall rule:
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
This seems to be the cause of the problem: the ACCEPT rules for port 27017 are listed after the REJECT rule, so they never match. Flushing the rules in iptables (-F) and/or disabling ufw on Ubuntu should solve the issue.
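A sketch of what that could look like in practice (these commands loosen or reset the firewall, so re-apply the rules you actually want afterwards; the rule-deletion variant assumes the REJECT rule was added exactly as listed above):
# quickest test: flush all rules (leaves the firewall wide open until rules are re-applied)
sudo iptables -F
# or remove just the offending rule and keep everything else
sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
# or, on Ubuntu, disable ufw entirely
sudo ufw disable
# then retry the shell
mongo --host 127.0.0.1 --port 27017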