Xdebug not working in CentOS 7, port 9000 not listening

I want to debug with Xdebug and PhpStorm, but nothing is listening on port 9000.
How can I connect Xdebug to PhpStorm?
Here is my phpinfo() output:
PHP Version 5.5.24
System Linux localhost.localdomain 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar
xdebug support enabled
Version 2.3.2
IDE Key phpStorm
Directive Local Value Master Value
xdebug.auto_trace Off Off
xdebug.cli_color 0 0
xdebug.collect_assignments Off Off
xdebug.collect_includes On On
xdebug.collect_params 0 0
xdebug.collect_return Off Off
xdebug.collect_vars Off Off
xdebug.coverage_enable On On
xdebug.default_enable On On
xdebug.dump.COOKIE no value no value
xdebug.dump.ENV no value no value
xdebug.dump.FILES no value no value
xdebug.dump.GET no value no value
xdebug.dump.POST no value no value
xdebug.dump.REQUEST no value no value
xdebug.dump.SERVER no value no value
xdebug.dump.SESSION no value no value
xdebug.dump_globals On On
xdebug.dump_once On On
xdebug.dump_undefined Off Off
xdebug.extended_info On On
xdebug.file_link_format no value no value
xdebug.force_display_errors Off Off
xdebug.force_error_reporting 0 0
xdebug.halt_level 0 0
xdebug.idekey phpStorm phpStorm
xdebug.max_nesting_level 256 256
xdebug.max_stack_frames -1 -1
xdebug.overload_var_dump On On
xdebug.profiler_aggregate Off Off
xdebug.profiler_append Off Off
xdebug.profiler_enable Off Off
xdebug.profiler_enable_trigger Off Off
xdebug.profiler_enable_trigger_value no value no value
xdebug.profiler_output_dir /tmp /tmp
xdebug.profiler_output_name cachegrind.out.%p cachegrind.out.%p
xdebug.remote_autostart On On
xdebug.remote_connect_back Off Off
xdebug.remote_cookie_expire_time 3600 3600
xdebug.remote_enable On On
xdebug.remote_handler dbgp dbgp
xdebug.remote_host localhost localhost
xdebug.remote_log no value no value
xdebug.remote_mode req req
xdebug.remote_port 9000 9000
xdebug.scream Off Off
xdebug.show_exception_trace Off Off
xdebug.show_local_vars Off Off
xdebug.show_mem_delta Off Off
xdebug.trace_enable_trigger Off Off
xdebug.trace_enable_trigger_value no value no value
xdebug.trace_format 0 0
xdebug.trace_options 0 0
xdebug.trace_output_dir /tmp /tmp
xdebug.trace_output_name trace.%c trace.%c
xdebug.var_display_max_children 128 128
xdebug.var_display_max_data 512 512
xdebug.var_display_max_depth 3 3
In CentOS I run these commands:
[root@localhost xdebug-2.3.2]# netstat -nan | grep 9000
[root@localhost xdebug-2.3.2]#
but nothing is listening on port 9000.
Why isn't the port listening? How can I fix this?
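One point worth checking: with xdebug.remote_enable=On, Xdebug acts as the DBGp *client*. It connects out to xdebug.remote_host:xdebug.remote_port, so the process listening on 9000 should be PhpStorm on the IDE machine, not PHP on the web server. An empty netstat on the web server is therefore expected. A minimal sketch (standard library only; the throwaway local listener stands in for PhpStorm) of probing whether a DBGp listener is reachable:

```python
import socket

def dbgp_listener_up(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port.

    Xdebug is the DBGp client: it dials out to the IDE, so run this
    from the web server against the machine where PhpStorm runs.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: bind a temporary listener to stand in for PhpStorm, then probe it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]
print(dbgp_listener_up("127.0.0.1", port))   # True: a listener exists
srv.close()
print(dbgp_listener_up("127.0.0.1", port))   # False: listener is gone
```

If the probe fails against the IDE's IP and port, check that PhpStorm's "Listen for PHP Debug Connections" is active and that no firewall blocks the path.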

Related

NOSRV errors seen in haproxy logs

We have HAProxy in front of two Apache servers, and every day, for less than a minute, I get NOSRV errors in the HAProxy logs. There are successful requests from the same source IP, so the failures are only intermittent, and there is no corresponding error in the backend logs.
Below is a snippet from the access logs:
Dec 22 20:21:25 proxy01 haproxy[3000561]: X.X.X.X:60872 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:43212 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:43206 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:60974 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32772 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 103 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32774 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 59 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32776 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 57 0
Below is the HAProxy config file:
defaults
log global
timeout connect 15000
timeout check 5000
timeout client 30000
timeout server 30000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend Local_Server
bind *:80
bind *:443 ssl crt /etc/haproxy/certs/
mode http
option httplog
cookie SRVNAME insert indirect nocache maxidle 8h maxlife 8h
#capture request header X-Forwarded-For len 15
#capture request header Host len 32
http-request capture req.hdrs len 512
log-format "%ci:%cp[%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
#log-format "%ci:%cp %ft %b/%s %Tw/%Tc/%Tr/ %ST %B %rc %bq %hr %hs %{+Q}r %Tt %Ta"
option dontlognull
option http-keep-alive
#declare whitelists for urls
acl xx_whitelist src -f /etc/haproxy/xx_whitelist.lst
acl is-blocked-ip src -f /etc/haproxy/badactors-list.txt
http-request silent-drop if is-blocked-ip
acl all src 0.0.0.0
######### ANTI BAD GUYS STUFF ###########################################
#anti DDOS sticktable - sends a 500 after 5s when requests from IP over 120 per
#frontend for stick table see backend "st_src_global" also
#Restrict number of requests in last 10 secs
# TO MONITOR RUN " watch -n 1 'echo "show table st_src_global" | socat unix:/run/haproxy/admin.sock -' " ON CLI.
#ZZZ THIS MAY NEED DISABLING FOR LOAD TESTS ZZZZ
# Table definition
http-request track-sc0 src table st_src_global #<- defines tracking stick table
stick-table type ip size 100k expire 10s store http_req_rate(50000s) #<- sets the rate window and how long to store each IP
http-request silent-drop if { sc_http_req_rate(0) gt 50000 } # drops if the request rate exceeds 50000 in the window
# Allow clean known IPs to bypass the filter
tcp-request connection accept if { src -f /etc/haproxy/xx_whitelist.lst }
#Slowlorris protection -send 408 if http request not completed in 5secs
timeout http-request 10s
option http-buffer-request
# Block Specific Requests
#http-request deny if HTTP_1.0
http-request deny if { req.hdr(user-agent) -i -m sub phantomjs slimerjs }
#traffic shape
#xxxx.xxxx.xx.xx
acl xxxxx.xxxxx.xx.xx hdr(host) -i xxxx.xxxx.xx.xx
use_backend xxxx.xxxx.xx.xx if xxxx.xxxx.xx.xx xx_whitelist #update from proxys
#sticktable for dos protection
backend st_src_global
stick-table type ip size 1m expire 10s store http_req_rate(50000s)
backend xxxxxxx.xxxxx.xx.xx
mode http
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
server web01-http x.x.x.x:80 check maxconn 100
server web03-http x.x.x.x:80 check maxconn 100
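One thing the log lines already tell you: `<NOSRV>` together with all connection timers at -1 means the request was never dispatched to a backend server at all, which points at the frontend itself (the `http-request silent-drop` rules, the stick-table rate limit, or a client abort) rather than at Apache. A rough, hypothetical triage helper (not an official HAProxy tool; it assumes the stock HTTP log timer block shown above):

```python
import re

def triage_nosrv(line):
    """Rough triage for <NOSRV> lines in a haproxy HTTP log.

    If the server name is <NOSRV> and the Tw/Tc/Tr timers are all -1,
    the request was rejected or aborted before a server was chosen
    (deny, silent-drop, rate limit, or client abort).
    """
    if "<NOSRV>" not in line:
        return "served"
    timers = re.search(r'(-?\d+)/(-?\d+)/(-?\d+)/\s*(-?\d+)', line)
    if timers and all(int(t) < 0 for t in timers.groups()[1:]):
        return "rejected-before-dispatch"
    return "nosrv-other"

sample = ('Dec 22 20:21:25 proxy01 haproxy[3000561]: 1.2.3.4:60872 '
          'Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} '
          '"POST /tx HTTP/1.1" 0 0')
print(triage_nosrv(sample))   # rejected-before-dispatch
```

Given the config above, correlating these bursts with the st_src_global stick table (the `show table` command in the comments) would confirm or rule out the rate limiter.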

cURL fails in GitHub Actions

I'm running a Raspberry Pi 4 as a server for a .NET Core side project of mine, nothing too fancy or heavy. After trying and failing to get going with a webhook that uploads files to the Pi with scp (I still don't know why it failed; the scp problem might be the same as the cURL one), I decided to write a small API that accepts a file and deploys it to a specified path. The API works both from inside and outside the Pi: I've tested it using cURL and Postman with a 20 MB zip file. But when I run the same command from inside a GitHub Action, I get a long wait and then a failure.
Command:
curl --request POST --url https://example.com/ --header 'cache-control: no-cache' --form path=DEPLOY_PATH --form archive=@FILE_PATH --form token=TOKEN
Output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 8776k 0 0 0 65536 0 42750 0:03:30 0:00:01 0:03:29 42722
0 8776k 0 0 0 65536 0 25862 0:05:47 0:00:02 0:05:45 25852
0 8776k 0 0 0 65536 0 18533 0:08:04 0:00:03 0:08:01 18528
0 8776k 0 0 0 65536 0 14444 0:10:22 0:00:04 0:10:18 14441
0 8776k 0 0 0 65536 0 11833 0:12:39 0:00:05 0:12:34 13091
0 8776k 0 0 0 65536 0 10022 0:14:56 0:00:06 0:14:50 0
0 8776k 0 0 0 65536 0 8691 0:17:14 0:00:07 0:17:07 0
...
0 8776k 0 0 0 65536 0 63 39:37:37 0:17:11 39:20:26 0
0 8776k 0 0 0 65536 0 63 39:37:37 0:17:11 39:20:26 0
curl: (55) SSL_write() returned SYSCALL, errno = 110
##[error]Process completed with exit code 55.
With both the scp and cURL commands there seems to be a common problem. If I send a simple text file, or a tar.gz containing a text file, it works. If I do the same with a .dll file, or a tar.gz containing a .dll file, it does not. I don't know whether the problem is the files themselves or their size. Note that the API accepts files up to 100 MB at the moment, and I'm only trying to deploy a small package of ~10 MB.
Output with -v arg:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying IP...
* TCP_NODELAY set
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to URL (IP) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* TLSv1.3 (IN), TLS handshake, Server hello (2):
{ [112 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [2861 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [300 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=URL
* start date: Sep 18 16:51:41 2020 GMT
* expire date: Dec 17 16:51:41 2020 GMT
* subjectAltName: host "URL" matched cert's "URL"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
} [5 bytes data]
> POST /api/Deployment/ HTTP/1.1
> Host: URL
> User-Agent: curl/7.58.0
> Accept: */*
> cache-control: no-cache
> Content-Length: 3333
> Content-Type: multipart/form-data; boundary=------------------------ca1748c91973ca89
> Expect: 100-continue
>
{ [5 bytes data]
< HTTP/1.1 100 Continue
} [5 bytes data]
100 3333 0 0 100 3333 0 2154 0:00:01 0:00:01 --:--:-- 2153
100 3333 0 0 100 3333 0 1307 0:00:02 0:00:02 --:--:-- 1307
100 3333 0 0 100 3333 0 938 0:00:03 0:00:03 --:--:-- 938
100 3333 0 0 100 3333 0 732 0:00:04 0:00:04 --:--:-- 732
100 3333 0 0 100 3333 0 600 0:00:05 0:00:05 --:--:-- 614
100 3333 0 0 100 3333 0 508 0:00:06 0:00:06 --:--:-- 0
100 3333 0 0 100 3333 0 441 0:00:07 0:00:07 --:--:-- 0
...
100 3333 0 0 100 3333 0 57 0:00:58 0:00:57 0:00:01 0
100 3333 0 0 100 3333 0 56 0:00:59 0:00:58 0:00:01 0
100 3333 0 0 100 3333 0 55 0:01:00 0:00:59 0:00:01 0
100 3333 0 0 100 3333 0 54 0:01:01 0:01:00 0:00:01 0* Empty reply from server
100 3333 0 0 100 3333 0 54 0:01:01 0:01:00 0:00:01 0
* Connection #0 to host URL left intact
curl: (52) Empty reply from server
##[error]Process completed with exit code 52.
EDIT: Switching to a Windows runner instead of Ubuntu solved the cURL problem, but I'm still open to suggestions regarding this question, as this is merely a workaround rather than a solution.
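The first failure's error code is itself a clue. Assuming the runner is Linux (Ubuntu here), errno 110 is ETIMEDOUT: the kernel gave up retransmitting and the TCP connection timed out mid-upload, which is consistent with the transfer counter freezing at 64 KiB (65536 bytes) in the output above. A quick way to decode it:

```python
import errno
import os

# curl exited with "SSL_write() returned SYSCALL, errno = 110".
# On Linux that number maps to ETIMEDOUT; the symbolic lookup below
# is portable even though the numeric value 110 is Linux-specific.
print(errno.errorcode[errno.ETIMEDOUT])   # ETIMEDOUT
print(os.strerror(errno.ETIMEDOUT))
```

A mid-transfer timeout that depends on payload content/size (and went away on a different runner) tends to implicate something on the network path, e.g. MTU/fragmentation or a middlebox, rather than curl itself.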

Check value of PF_NO_SETAFFINITY

Is it possible to tell whether a process/thread has the PF_NO_SETAFFINITY flag set? I'm running taskset on a series of process ids and some are throwing errors of the following form:
taskset: failed to set pid 30's affinity: Invalid argument
I believe this is because some processes have PF_NO_SETAFFINITY set (see Answer).
Thank you!
Yes - look at the 'flags' field of /proc/PID/stat.
From <linux/sched.h>:
#define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_allowed */
Look here for details on using /proc:
http://man7.org/linux/man-pages/man5/proc.5.html
https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk65143
Example:
ps -eaf
www-data 30084 19962 0 07:09 ? 00:00:00 /usr/sbin/apache2 -k start
...
cat /proc/30084/stat
30084 (apache2) S 19962 19962 19962 0 -1 4194624 554 0 3 0 0 0 0 0 20 0 1 0 298837672 509616128 5510 18446744073709551615 1 1 0 0 0 0 0 16781312 201346799 0 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
The flags are 4194624
Q: Do you mind specifying how you'd write a simple script that outputs
true/false based on whether you're allowed to set affinity?
A: I don't feel comfortable providing this without the opportunity to test, but you can try something like this...
flags=$(cut -f 9 -d ' ' /proc/30084/stat)
echo $(($flags & 0x04000000))
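One caveat with cutting on spaces: the comm field (field 2) of /proc/PID/stat may itself contain spaces, so it is safer to split after the closing parenthesis. A small sketch doing that, run against the exact stat line from the apache2 example above (deterministic, no /proc access needed):

```python
PF_NO_SETAFFINITY = 0x04000000  # from <linux/sched.h>

def flags_from_stat(stat_line):
    """Extract the flags field (field 9) from a /proc/<pid>/stat line.

    comm (field 2) may contain spaces, so split after the closing
    parenthesis; the remainder starts at field 3 (state), putting
    flags at index 6.
    """
    rest = stat_line.rsplit(")", 1)[1].split()
    return int(rest[6])

# The stat line from the apache2 process above:
sample = ("30084 (apache2) S 19962 19962 19962 0 -1 4194624 554 0 3 0 0 0 0 0 "
          "20 0 1 0 298837672 509616128 5510 18446744073709551615 1 1 0 0 0 0 "
          "0 16781312 201346799 0 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0")
flags = flags_from_stat(sample)
print(flags)                              # 4194624
print(bool(flags & PF_NO_SETAFFINITY))    # False: taskset is allowed
```

For this process the flag is clear, so the taskset failure must come from some other process in the series (kernel threads, for instance, do carry PF_NO_SETAFFINITY).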

Apache (apache2) HTTP server stops accepting connections after a while and requires restart

The Apache server uses up all of its server processes (up to ServerLimit) and then does not accept any more connections.
Slot PID Stopping Connections Threads Async connections
total accepting busy idle writing keep-alive closing
0 23257 yes 1 no 0 0 0 0 0
1 27271 no 0 yes 1 24 0 0 0
2 24876 yes 2 no 0 0 0 0 0
3 23117 yes 2 no 0 0 0 0 0
4 22671 yes 1 no 0 0 0 0 0
5 23994 yes 1 no 0 0 0 0 0
6 25159 yes 1 no 0 0 0 0 0
7 24604 yes 1 no 0 0 0 0 0
Sum 8 7 9 1 24 0 0 0
The one PID that was accepting had been killed and restarted to obtain the status report above; over time that PID would also end up like the rest. How do I find out why Apache stops accepting connections after a while? The timeout is set to 90 seconds.
Additional information:
Server Version: Apache/2.4.33 (Unix) OpenSSL/1.0.2o
Server Built: Apr 18 2018 10:56:21
Server loaded APR Version: 1.6.3
Compiled with APR Version: 1.6.3
Server loaded APU Version: 1.6.1
Compiled with APU Version: 1.6.1
Module Magic Number: 20120211:76
Hostname/port: localhost:8006
Timeouts: connection: 90 keep-alive: 5
MPM Name: event
MPM Information: Max Daemons: 8 Threaded: yes Forked: yes
Server Architecture: 64-bit
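The scoreboard above can be read mechanically: 7 of the 8 slots are in graceful stop ("Stopping yes") yet still hold one or two connections each, so they can neither finish exiting nor accept new work, and with ServerLimit reached no replacement process can be spawned. A small sketch counting this from a simplified copy of the table (column layout assumed as shown above: slot, PID, stopping, connections, accepting):

```python
# Simplified copy of the event-MPM process table from the question.
table = """\
0 23257 yes 1 no
1 27271 no 0 yes
2 24876 yes 2 no
3 23117 yes 2 no
4 22671 yes 1 no
5 23994 yes 1 no
6 25159 yes 1 no
7 24604 yes 1 no"""

rows = [line.split() for line in table.splitlines()]
stuck = [r for r in rows if r[2] == "yes" and int(r[3]) > 0]  # stopping, holding conns
accepting = [r for r in rows if r[4] == "yes"]
print(len(stuck), len(accepting))   # 7 1
```

That pattern (stopping processes pinned by lingering connections) usually points at frequent graceful restarts combined with long-lived or stuck requests; checking what triggers the graceful stops (logrotate, config reloads) would be a reasonable next step.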

RADIUS test only succeeds on the local machine, not from a remote machine

I installed FreeRADIUS on Ubuntu 10 through apt-get.
After getting the server running, the local test works:
yozloy@SNDA-192-168-21-78:/usr/bin$ echo "User-Name=testuser,Password=123456" | radclient 127.0.0.1:1812 auth testing123 -x
Sending Access-Request of id 245 to 127.0.0.1 port 1812
User-Name = "testuser"
Password = "0054444944"
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=245, length=20
But from the remote machine, there seems to be no response from the RADIUS server:
root@SNDA-192-168-14-131:/home/yozloy# echo "User-Name=testuser,Password=123456" | radclient 58.215.164.98:1812 auth testing123 -x
Sending Access-Request of id 36 to 58.215.164.98 port 1812
User-Name = "testuser"
Password = "0054444944"
Sending Access-Request of id 36 to 58.215.164.98 port 1812
User-Name = "testuser"
Password = "0054444944"
Sending Access-Request of id 36 to 58.215.164.98 port 1812
User-Name = "testuser"
Password = "0054444944"
radclient: no response from server for ID 36 socket 3
Here's my configure file:
clients.conf
client 58.215.164.98 {
ipaddr = 58.215.164.98
secret = testing123
require_message_authenticator = no
}
users
testuser Cleartext-Password := "0054444944"
Update: here is the config file (I haven't actually changed anything):
radiusd.conf
proxy_requests = yes
$INCLUDE proxy.conf
$INCLUDE clients.conf
thread pool {
start_servers = 5
max_servers = 32
min_spare_servers = 3
max_spare_servers = 10
max_requests_per_server = 0
}
modules {
$INCLUDE ${confdir}/modules/
$INCLUDE eap.conf
}
instantiate {
exec
expr
expiration
logintime
}
$INCLUDE policy.conf
$INCLUDE sites-enabled/
yozloy@SNDA-192-168-18-234:/etc/freeradius$ netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 192.168.18.234:22 123.5.13.20:3274 ESTABLISHED
tcp6 0 0 :::22 :::* LISTEN
udp 0 0 0.0.0.0:1812 0.0.0.0:*
udp 0 0 0.0.0.0:1813 0.0.0.0:*
udp 0 0 0.0.0.0:1814 0.0.0.0:*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
unix 4 [ ] DGRAM 2838 /dev/log
unix 2 [ ACC ] STREAM LISTENING 2166 #/com/ubuntu/upstart
unix 2 [ ] DGRAM 2272 #/org/kernel/udev/udevd
unix 3 [ ] STREAM CONNECTED 3351
unix 3 [ ] STREAM CONNECTED 3350
unix 2 [ ] DGRAM 3173
unix 2 [ ] DGRAM 2893
unix 3 [ ] DGRAM 2304
unix 3 [ ] DGRAM 2303
unix 3 [ ] STREAM CONNECTED 2256 #/com/ubuntu/upstart
unix 3 [ ] STREAM CONNECTED 2255
Correct me if I am wrong, but the IP address of SNDA-192-168-14-131, as seen by your RADIUS server (SNDA-192-168-21-78), is not 58.215.164.98, is it?
If it is not, that is your answer: your RADIUS server will only respond to NASes configured in clients.conf with the correct secrets.
Try adding 192.168.14.131 (if that is the host's IP address) to clients.conf and try again.
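This also explains the exact symptom in the output: FreeRADIUS silently discards packets from IPs not listed in clients.conf (it does not send an Access-Reject), so the client just retransmits and gives up. The client-side behaviour can be sketched with a stand-in "server" that receives but never replies (hypothetical probe, not real RADIUS traffic):

```python
import socket

# Stand-in for a RADIUS server that drops packets from an unknown client:
# it binds a UDP port, receives, and never answers.
sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sink.bind(("127.0.0.1", 0))
server_addr = sink.getsockname()

# The "radclient" side: send a datagram and wait for a reply.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
client.sendto(b"Access-Request (stand-in payload)", server_addr)
try:
    client.recvfrom(4096)
    result = "got a reply"
except socket.timeout:
    result = "no response from server"
print(result)   # no response from server
```

With UDP there is no connection error to observe, so "no response from server for ID 36" is all radclient can report, whether the packet was dropped by the server, a firewall, or NAT along the way.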