ZAP CLI reporting different results - OWASP

Why does [INFO] show "Issues found: 0" while the report states otherwise?
Just to be sure, I restarted the ZAP proxy, changed the API key, and ran everything inside Docker.
Here is the output from the console:
gauntlt@724fe0361390:/working$ zap-cli --api-key=123 shutdown
[INFO] Shutting down ZAP daemon
gauntlt@724fe0361390:/working$ zap-cli --api-key=123 status
[ERROR] ZAP is not running
gauntlt@724fe0361390:/working$ zap-cli start -o '-config api.key=123'
[INFO] Starting ZAP daemon
gauntlt@724fe0361390:/working$ zap-cli --api-key=123 status
[INFO] ZAP is running
gauntlt@724fe0361390:/working$ zap-cli --api-key=123 report -o /working/output/report.md -f md; cat output/report.md
[INFO] Report saved to "/working/output/report.md"
# ZAP Scanning Report
## Summary of Alerts
| Risk Level | Number of Alerts |
| --- | --- |
| High | 0 |
| Medium | 0 |
| Low | 0 |
| Informational | 0 |
## Alert Detail
gauntlt@724fe0361390:/working$ zap-cli --api-key=123 quick-scan -o '-config scanner.attackOnStart=true -config view.mode=attack -config connection.dnsTtlSuccessfulQueries=-1 -config api.addrs.addr.name=.* -config api.addrs.addr.regex=true' -s xss,sqli --spider --recursive http://127.0.0.1:9009
[INFO] Running a quick scan for http://127.0.0.1:9009
[INFO] Issues found: 0
gauntlt@724fe0361390:/working$ zap-cli --api-key=123 report -o /working/output/report.md -f md; head output/report.md
[INFO] Report saved to "/working/output/report.md"
# ZAP Scanning Report
## Summary of Alerts
| Risk Level | Number of Alerts |
| --- | --- |
gauntlt@724fe0361390:/working$ zap-cli --api-key=123 report -o /working/output/report.md -f md; head -20 output/report.md
[INFO] Report saved to "/working/output/report.md"
# ZAP Scanning Report
## Summary of Alerts
| Risk Level | Number of Alerts |
| --- | --- |
| High | 0 |
| Medium | 1 |
| Low | 3 |
| Informational | 0 |

Did you resolve your issue?
I had the same issue.
I did a cat on zap.log and found that my IP address was not allowed. I added it to the allowed IP addresses under Settings > API > Allowed IPs, and then it worked for me. Otherwise, you need to check your firewall connectivity with telnet.

Related

Postgres Register Standby fails

I am trying to set up a primary and a standby using repmgr. I think I have successfully set up the master, but the standby setup keeps failing.
On Standby node
/usr/pgsql-12/bin/repmgr -h master_ip standby clone
NOTICE: destination directory "/var/lib/pgsql/12/data" provided
INFO: connecting to source node
DETAIL: connection string is: host=master_ip
DETAIL: current installation size is 32 MB
ERROR: repmgr extension is available but not installed in database "(null)"
HINT: check that you are cloning from the database where "repmgr" is installed
On Master Node:
/usr/pgsql-12/bin/repmgr cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-------------+---------+-----------+----------+----------+----------+----------+----------------------------------------------------------------
1 | hostname | primary | * running | | default | 100 | 1 | host=master_ip dbname=repmgr user=repmgr connect_timeout=2
postgres=# SELECT * FROM pg_available_extensions WHERE name='repmgr';
name | default_version | installed_version | comment
--------+-----------------+-------------------+------------------------------------
repmgr | 5.3 | | Replication manager for PostgreSQL
Resolved after adding -U repmgr -d repmgr to the clone command.
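For reference, the working clone invocation would then look roughly like this, a sketch based on the command shown in the question:

```shell
# Clone from the master, connecting as the repmgr user to the repmgr
# database -- the one where the repmgr extension is actually installed
/usr/pgsql-12/bin/repmgr -h master_ip -U repmgr -d repmgr standby clone
```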

Using pgpool, I get an empty value in replication_state

I'm trying to use pgpool for Postgres HA. Here is the output of show pool_nodes:
node_id | hostname | port | status | pg_status | lb_weight | role    | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+----------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | master   | 5432 | up     | up        | 0.500000  | primary | primary | 1          | false             | 0                 |                   |                        | 2022-05-30 10:33:21
 1       | slave    | 5432 | up     | up        | 0.500000  | standby | primary | 0          | true              | 419431440         |                   |                        | 2022-05-30 10:33:21
Everything else is working well, but I get empty values for replication_state and replication_sync_state, and a very high value for replication_delay.
Why are those values empty, and why is the delay so high?
Should I change values in postgresql.conf or pgpool.conf for replication?
In this case, I used 'pg_basebackup -h host -U Repuser -p port -D dir -X stream' to build the slave.
This is pcp_node_info's result:
master 5432 2 0.500000 up up primary primary 0 none none 2022-05-30 10:42:40
slave 5432 2 0.500000 up up standby primary 419431848 none none 2022-05-30 10:42:40
Sorry for my English level, and thank you for your help.
My versions:
postgres 14.2
pgpool 4.3.1
You need to provide application_name in both configuration files: myrecovery.conf (the primary_conninfo variable) and pgpool.conf, for each node.
You should also check the recovery_1st_stage and follow_primary.sh files, as they contain a block with application_name too. These scripts are used by pgpool to recover a replica (with pcp_recovery_node) or promote a new master.
Afterwards you can check the current value with "select * from pg_stat_replication;" (on the master) or "select * from pg_stat_wal_receiver;" (on the replica).
More information: https://www.pgpool.net/docs/pgpool-II-4.3.1/en/html/example-cluster.html
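A minimal sketch of the matching entries described above; the node and user names here (server1, repl) are illustrative assumptions, not taken from the question:

```shell
# myrecovery.conf on the standby -- application_name identifies this node:
#   primary_conninfo = 'host=master port=5432 user=repl application_name=server1'
#
# pgpool.conf -- the per-backend application name must match (pgpool 4.2+):
#   backend_hostname1         = 'slave'
#   backend_application_name1 = 'server1'
```

If the two names do not match, pgpool cannot correlate the pg_stat_replication entry with the backend, and replication_state stays empty.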

pgpool/postgres - replication_delay is too high, how to reset?

In our setup, show pool_nodes shows a very high replication_delay that keeps increasing, because of which new queries are not replicated to the slave.
The following is the output of the show pool_nodes command. Is there a way to reset this? Data loss is fine, as this is not a live/production system.
[root@DB2 ~]# psql -h DB-HA-Hostname -U postgres -p 5432 -c 'show pool_nodes'
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay | last_status_change
---------+--------------------------------------+------+--------+-----------+---------+------------+-------------------+-------------------+---------------------
0 | DB1-hostname | 5432 | up | 0.500000 | primary | 0 | true | 0 | 2021-01-11 19:32:00
1 | DB2-hostname | 5432 | up | 0.500000 | standby | 0 | false | 54986528 | 2021-01-11 19:32:00
(2 rows)
I have tried restarting the nodes, restarting pgpool, restarting PostgreSQL, deleting the database, etc., but no luck. As soon as the slave gets attached, the replication_delay is high again.
You can run this command to check the status of replication:
psql -h DB-HA-Hostname -U postgres -p 5432 -c "select * from pg_stat_replication" -x
If it shows the standby connected, replication is working; if not, the configuration has failed.
Can you show your configuration?
Check whether replication is running; if it is not, re-configure the standby and then attach the node:
select * from pg_stat_replication;
After taking a base backup, start the PostgreSQL server, then run pcp_attach_node on pgpool.
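The rebuild sequence that answer describes, as a sketch; the hosts, paths, service name, and node id are illustrative assumptions:

```shell
# On the standby: stop postgres, clear the stale data directory, and take
# a fresh base backup from the primary (-R writes standby connection settings)
systemctl stop postgresql
rm -rf /var/lib/pgsql/data/*
pg_basebackup -h primary_host -U repl -D /var/lib/pgsql/data -X stream -R
systemctl start postgresql

# On the pgpool host: re-attach the rebuilt node (node id 1 here)
pcp_attach_node -h localhost -p 9898 -U postgres -n 1
```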

repmgr - how to make the previous primary become a standby after failover

After performing a failover, the previous primary was down and the old standby became the primary, as expected.
$ repmgr -f /etc/repmgr.conf cluster show --compact
ID | Name | Role | Status | Upstream | Location | Prio. | TLI
----+-----------------+---------+-----------+----------+----------+-------+-----
1 | server1 | primary | - failed | | default | 100 | ?
2 | server2 | primary | * running | | default | 100 | 2
3 | PG-Node-Witness | witness | * running | server2 | default | 0 | 1
I would like to make the old primary join the cluster as a standby.
I gather the rejoin command should do that.
However, when I try to rejoin it as the new standby, I get this (I run this on the old primary, which is down):
repmgr -f /etc/repmgr.conf -d 'host=10.9.7.97 user=repmgr dbname=repmgr' node rejoin
(where 10.9.7.97 is the IP of the node I am running from)
I get this error:
$ repmgr -f /etc/repmgr.conf -d 'host=10.97.7.97 user=repmgr dbname=repmgr' node rejoin --verbose -
NOTICE: using provided configuration file "/etc/repmgr.conf"
ERROR: connection to database failed
DETAIL:
could not connect to server: Connection refused
Is the server running on host "10.97.7.97" and accepting
TCP/IP connections on port 5432?
Of course Postgres is down on 10.9.7.97, the old primary.
If I start it, however, it starts as another primary:
$ repmgr -f /etc/repmgr.conf cluster show --compact
ID | Name | Role | Status | Upstream | Location | Prio. | TLI
----+-----------------+---------+-----------+----------+----------+-------+-----
1 | server1 | primary | ! running | | default | 100 | 1
2 | server2 | primary | * running | | default | 100 | 2
3 | PG-Node-Witness | witness | * running | server2 | default | 0 | 1
So what is the way to make the old primary the new standby?
Thanks
Apparently the -d 'host=...' in the rejoin command should specify the current primary (the previous standby).
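So, run on the old primary with its Postgres stopped, the rejoin would look roughly like this. A sketch: <current_primary_ip> stands for server2's address, and --force-rewind (which invokes pg_rewind) is typically needed when the timelines have diverged, as they have here (TLI 1 vs 2):

```shell
# -d must point at the CURRENT primary, not at the node being rejoined
repmgr -f /etc/repmgr.conf \
  -d 'host=<current_primary_ip> user=repmgr dbname=repmgr' \
  node rejoin --force-rewind --verbose
```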

Set binlog for MySQL 5.6 on Ubuntu 16.04

In /etc/mysql/my.cnf:
[mysqld]
log_bin=mysql-bin
binlog_format=ROW
server-id=11
Then on the mysql client:
mysql> show variables like '%binlog%';
+-----------------------------------------+----------------------+
| Variable_name | Value |
+-----------------------------------------+----------------------+
| binlog_cache_size | 32768 |
| binlog_checksum | CRC32 |
| binlog_direct_non_transactional_updates | OFF |
| binlog_format | STATEMENT |
| binlog_max_flush_queue_time | 0 |
| binlog_order_commits | ON |
| binlog_row_image | FULL |
| binlog_rows_query_log_events | OFF |
| binlog_stmt_cache_size | 32768 |
| innodb_api_enable_binlog | OFF |
| innodb_locks_unsafe_for_binlog | OFF |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 1073741824 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| sync_binlog | 0 |
+-----------------------------------------+----------------------+
15 rows in set (0.00 sec)
I googled "my.cnf doesn't take effect" and found "Changes to my.cnf don't take effect (Ubuntu 16.04, mysql 5.6)".
I have tested the two answers there. But when I start MySQL with sudo service mysql start, there is always this error:
/etc/init.d/mysql[27197]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
/etc/init.d/mysql[27197]: [61B blob data]
/etc/init.d/mysql[27197]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
/etc/init.d/mysql[27197]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
/etc/init.d/mysql[27197]:
mysql[26527]: ...fail!
systemd[1]: mysql.service: Control process exited, code=exited status=1
systemd[1]: Failed to start LSB: Start and stop the mysql database server daemon.
I have spent two days searching for answers and still have not found anything helpful. Could anyone please help me out? Thank you!
Reinstall MySQL.
Clean MySQL according to https://askubuntu.com/questions/172514/how-do-i-uninstall-mysql:
sudo apt-get purge mysql-server mysql-client mysql-common mysql-server-core-* mysql-client-core-*
sudo rm -rf /etc/mysql /var/lib/mysql
sudo apt-get autoremove
sudo apt-get autoclean
as well as https://www.jianshu.com/p/c76b31df5d09
dpkg -l |grep ^rc|awk '{print $2}' |sudo xargs dpkg -P
Install with sudo apt-get install mysql-server-5.6.
To make changes in my.cnf take effect, see the second answer by Keeth (user:20588) on "Changes to my.cnf don't take effect (Ubuntu 16.04, mysql 5.6)".
Configure binlog by editing /etc/mysql/my.cnf:
[mysqld]
log_bin=mysql-bin
binlog_format=ROW
server-id=11
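After restarting MySQL, a quick way to confirm the settings took effect; a sketch that requires a running server and appropriate credentials:

```shell
# Should report log_bin = ON and binlog_format = ROW if my.cnf was read
mysql -e "SHOW VARIABLES WHERE Variable_name IN ('log_bin','binlog_format','server_id');"
```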