OpenStack DevStack installation stops with "Connecting Dashboard Url" - redhat

I am facing the errors below during an OpenStack (DevStack) installation; the problem is in connecting to the dashboard URL.
I changed the URL in the openrc file by removing the version, like this: http://100.1.201.102:5000/identity.
But I am still facing the same issue shown below. Please let me know where exactly the error is being thrown.
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Created domain default
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Created project admin
DEBUG passlib.registry [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] registered 'sha512_crypt' handler: <class 'passlib.handlers.sha2_crypt.sha512_crypt'> register_crypt_handler /usr/lib/python2.7/site-packages/passlib/registry.py:284
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Created user admin
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Created role admin
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Granted admin on admin to user admin.
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Created region RegionOne
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Created admin endpoint http://100.1.201.102/identity_v2_admin
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Created internal endpoint http://100.1.201.102/identity
INFO keystone.cmd.cli [req-412f0ceb-6b28-4bb1-b2cc-aae69e76ff7b - - - - -] Created public endpoint http://100.1.201.102/identity
2016-09-16 10:39:01.969 | +./stack.sh:main:1038 cat
2016-09-16 10:39:01.998 | +./stack.sh:main:1053 is_service_enabled tls-proxy
2016-09-16 10:39:02.055 | +functions-common:is_service_enabled:2078 return 1
2016-09-16 10:39:02.064 | +./stack.sh:main:1057 source /devstack/userrc_early
2016-09-16 10:39:02.075 | ++userrc_early:source:4 export OS_IDENTITY_API_VERSION=3
2016-09-16 10:39:02.087 | ++userrc_early:source:4 OS_IDENTITY_API_VERSION=3
2016-09-16 10:39:02.098 | ++userrc_early:source:5 export OS_AUTH_URL=http://100.1.201.102/identity_v2_admin
2016-09-16 10:39:02.109 | ++userrc_early:source:5 OS_AUTH_URL=http://100.1.201.102/identity_v2_admin
2016-09-16 10:39:02.121 | ++userrc_early:source:6 export OS_USERNAME=admin
2016-09-16 10:39:02.133 | ++userrc_early:source:6 OS_USERNAME=admin
2016-09-16 10:39:02.144 | ++userrc_early:source:8 export OS_PASSWORD=secret
2016-09-16 10:39:02.156 | ++userrc_early:source:8 OS_PASSWORD=secret
2016-09-16 10:39:02.168 | ++userrc_early:source:9 export OS_PROJECT_NAME=admin
2016-09-16 10:39:02.181 | ++userrc_early:source:9 OS_PROJECT_NAME=admin
2016-09-16 10:39:02.192 | ++userrc_early:source:11 export OS_REGION_NAME=RegionOne
2016-09-16 10:39:02.204 | ++userrc_early:source:11 OS_REGION_NAME=RegionOne
2016-09-16 10:39:02.214 | +./stack.sh:main:1059 create_keystone_accounts
2016-09-16 10:39:02.223 | +lib/keystone:create_keystone_accounts:384 local admin_project
2016-09-16 10:39:02.235 | ++lib/keystone:create_keystone_accounts:385 openstack project show admin -f value -c id
2016-09-16 10:40:19.898 | Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
2016-09-16 10:40:19.898 | Could not determine a suitable URL for the plugin
2016-09-16 10:40:19.981 | +lib/keystone:create_keystone_accounts:385 admin_project=
2016-09-16 10:40:19.992 | +lib/keystone:create_keystone_accounts:1 exit_trap
2016-09-16 10:40:20.003 | +./stack.sh:exit_trap:487 local r=1
2016-09-16 10:40:20.014 | ++./stack.sh:exit_trap:488 jobs -p
2016-09-16 10:40:20.025 | +./stack.sh:exit_trap:488 jobs=
2016-09-16 10:40:20.036 | +./stack.sh:exit_trap:491 [[ -n '' ]]
2016-09-16 10:40:20.047 | +./stack.sh:exit_trap:497 kill_spinner
2016-09-16 10:40:20.059 | +./stack.sh:kill_spinner:383 '[' '!' -z '' ']'
2016-09-16 10:40:20.069 | +./stack.sh:exit_trap:499 [[ 1 -ne 0 ]]
2016-09-16 10:40:20.080 | +./stack.sh:exit_trap:500 echo 'Error on exit'
2016-09-16 10:40:20.080 | Error on exit
2016-09-16 10:40:20.093 | +./stack.sh:exit_trap:501 generate-subunit 1474021993 427 fail
2016-09-16 10:40:20.697 | +./stack.sh:exit_trap:502 [[ -z /opt/stack/logs ]]
2016-09-16 10:40:20.703 | +./stack.sh:exit_trap:505 devstack/tools/worlddump.py -d /opt/stack/logs
2016-09-16 10:40:21.394 | +./stack.sh:exit_trap:511 exit 1
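The trace shows where it dies: create_keystone_accounts runs `openstack project show admin`, which cannot discover the identity API versions from OS_AUTH_URL (http://100.1.201.102/identity_v2_admin), so admin_project stays empty and stack.sh exits. A quick way to check whether Keystone is answering at all on the endpoints the log registered is to query them directly (a sketch, assuming the stock DevStack Apache setup on this Red Hat host):

# Keystone is served through Apache in DevStack; make sure the web server is up,
sudo systemctl status httpd
# then ask Keystone for its version document on the endpoints from the log above.
# A JSON "versions" response means the service is reachable and the problem is
# on the client/URL side; a connection error points at Apache/uwsgi instead.
curl -s http://100.1.201.102/identity
curl -s http://100.1.201.102/identity/v3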

Related

Exclude Kubernetes namespaces from metric collection by Datadog agent

After installing the Datadog chart (version 3.1.3) on EKS, I need to exclude some namespaces from metric collection. I'm using containerExclude to limit the namespace scope. The values I'm using are as follows:
datadog:
  site: datadoghq.eu
  clusterName: eks-test
  kubeStateMetricsEnabled: false
  kubeStateMetricsCore:
    enabled: false
  containerExclude: "kube_namespace:astronomer kube_namespace:astronomer-.* kube_namespace:kube-system kube_namespace:kube-public kube_namespace:kube-node-lease"
  prometheusScrape:
    enabled: true
    serviceEndpoints: true
    additionalConfigs:
      - autodiscovery:
          kubernetes_annotations:
            include:
              prometheus.io/scrape: "true"
            exclude:
              prometheus.io/scrape: "false"
clusterAgent:
  enabled: true
  metricsProvider:
    enabled: false
agents:
  enabled: true
Looking at the pod environment variables, I can see this is being passed to the agent correctly:
DD_CONTAINER_EXCLUDE: kube_namespace:astronomer kube_namespace:astronomer-.* kube_namespace:kube-system kube_namespace:kube-public kube_namespace:kube-node-lease
However, I still see scrape logs from those namespaces:
2022-09-26 13:48:38 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:f744b75c375b067b | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.79.111:9102/metrics
2022-09-26 13:48:38 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:c52e8d14335bb33d | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.22.119:9102/metrics
2022-09-26 13:48:39 UTC | CORE | ERROR | (pkg/collector/python/datadog_agent.go:123 in LogMessage) | openmetrics:ba505488f569ffa0 | (base.py:66) | There was an error scraping endpoint http://172.20.0.10:9153/metrics: HTTPConnectionPool(host='172.20.0.10', port=9153): Max retries exceeded with url: /metrics (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7fd271df8190>, 'Connection to 172.20.0.10 timed out. (connect timeout=10.0)'))
2022-09-26 13:48:39 UTC | CORE | ERROR | (pkg/collector/worker/check_logger.go:69 in Error) | check:openmetrics | Error running check: [{"message": "There was an error scraping endpoint http://172.20.0.10:9153/metrics: HTTPConnectionPool(host='172.20.0.10', port=9153): Max retries exceeded with url: /metrics (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7fd271df8190>, 'Connection to 172.20.0.10 timed out. (connect timeout=10.0)'))", "traceback": "Traceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 1116, in run\n self.check(instance)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/openmetrics/v2/base.py\", line 67, in check\n raise_from(type(e)(\"There was an error scraping endpoint {}: {}\".format(endpoint, e)), None)\n File \"<string>\", line 3, in raise_from\nrequests.exceptions.ConnectTimeout: There was an error scraping endpoint http://172.20.0.10:9153/metrics: HTTPConnectionPool(host='172.20.0.10', port=9153): Max retries exceeded with url: /metrics (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7fd271df8190>, 'Connection to 172.20.0.10 timed out. (connect timeout=10.0)'))\n"}]
2022-09-26 13:48:39 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:8e74b74102a3722 | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.129.200:9102/metrics
2022-09-26 13:48:39 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:e6012834c5d9bc2e | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.19.241:9127/metrics
2022-09-26 13:48:39 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:ba98e825c73ee1b4 | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.64.136:9102/metrics
Those services belong to kube-system, which is excluded. But I see that these metrics come from the Prometheus (OpenMetrics) checks. Do I have to apply a similar setting in prometheusScrape.additionalConfigs?
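It may also be worth checking (an assumption, not something the logs above confirm) whether those kube-system targets are scheduled from the prometheus.io/scrape annotations on their services rather than from container autodiscovery, since with serviceEndpoints: true the OpenMetrics checks are attached to service endpoints. A quick way to list which kube-system services advertise the annotation:

# Services in kube-system that carry prometheus.io/scrape=true and would be
# picked up by prometheusScrape regardless of containerExclude.
kubectl get services -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.prometheus\.io/scrape}{"\n"}{end}' \
  | grep -w true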

Using sql query in shell script

I have a shell script that uses the output of a SQL query and, based on the value of one column, sends out an alert. However, I don't think it's capturing the value: although the value is not greater than 0, it still sends out an email.
Any idea where I am going wrong? Thanks.
............................................................................
#!/bin/sh
psql -d postgres -U postgres -c "select pid,application_name,pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) sending_lag,pg_wal_lsn_diff(sent_lsn,flush_lsn) receiving_lag,pg_wal_lsn_diff(flush_lsn, replay_lsn) replaying_lag,pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) total_lag from pg_stat_replication;" | while read total_lag;
do
    echo $total_lag
    lag1=$(echo $total_lag)
done
if [[ $lag1 -ge 0 ]]
then
    echo "Current replication lag is $lag1" | mail -s "WARNING!" abcd#mail.com
else
    echo "No issue"
fi
............................................................................
This is the output of the above query:
pid | application_name | sending_lag | receiving_lag | replaying_lag | total_lag
-------+------------------+-------------+---------------+---------------+-----------
27823 | db123 | 0 | 0 | 0 | 0
27824 | db023 | 0 | 0 | 0 | 0
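For reference, a minimal sketch of how this is often restructured (an assumption about the intended behavior, not the original script): piping into while runs the loop in a subshell, so lag1 is empty by the time the if runs, and read also picks up the header and footer lines of the default psql output. Selecting only the total_lag column with -At (tuples only, unaligned) avoids both issues, and comparing with -gt 0 rather than -ge 0 matches the stated intent of alerting only when the lag is greater than 0:

#!/bin/bash
# Sketch: fetch only total_lag, one bare number per replica, no headers.
lags=$(psql -d postgres -U postgres -At -c \
    "select pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) from pg_stat_replication;")

for lag1 in $lags; do
    if [ "$lag1" -gt 0 ]; then
        # abcd#mail.com is the address exactly as written in the question
        echo "Current replication lag is $lag1" | mail -s "WARNING!" abcd#mail.com
    else
        echo "No issue"
    fi
done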

How to preserve new line character while performing psql copy command

I have the following content in my CSV file (with 3 columns):
141413,"\"'/x=/></script></title><x><x/","Mountain View, CA\"'/x=/></script></title><x><x/"
148443,"CLICK LINK BELOW TO ENTER^^^^^^^^^^^^^^","model\
\
xxx lipsum as it is\
\
100 sometimes unknown\
\
travel evening market\
"
When I import the above-mentioned CSV into MySQL using the following command, it treats the backslash (\) as a newline, which is the expected behavior.
LOAD DATA INFILE '1.csv' INTO TABLE users FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n';
MySQL output (screenshot)
But when I try to import it into PostgreSQL using the COPY command, it treats \ as a normal character.
copy users from '1.csv' WITH (FORMAT csv, DELIMITER ',', ENCODING 'utf8', NULL "\N", QUOTE E'\"', ESCAPE '\');
PostgreSQL output (screenshot)
Try stripping these \ before importing the CSV file, e.g. using perl -pe or sed and feeding psql from STDIN:
$ cat 1.csv | perl -pe 's/\\\n/\n/g' | psql testdb -c "COPY users FROM STDIN WITH (FORMAT csv, DELIMITER ',', ENCODING 'utf8', NULL '\N', QUOTE E'\"', ESCAPE '\');"
This is how it looks after the import:
testdb=# select * from users;
id | company | location
--------+-----------------------------------------+-------------------------------------------------
141413 | "'/x=/></script></title><x><x/ | Mountain View, CA"'/x=/></script></title><x><x/
148443 | CLICK LINK BELOW TO ENTER^^^^^^^^^^^^^^ | model +
| | +
| | xxx lipsum as it is +
| | +
| | 100 sometimes unknown +
| | +
| | travel evening market +
| |
(2 rows)
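If sed is preferred over perl, stripping the backslash that sits at the end of each line has the same effect (a sketch, reusing the COPY options from the command above):

# Remove a trailing backslash before the newline, then COPY as before.
sed 's/\\$//' 1.csv | psql testdb -c "COPY users FROM STDIN WITH (FORMAT csv, DELIMITER ',', ENCODING 'utf8', NULL '\N', QUOTE E'\"', ESCAPE '\');"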

PostgreSQL: can't connect with newly created users

Here's the error:
$ psql -h localhost -U kMbjQ6pR9G -d fzvqFILx0d
Password for user kMbjQ6pR9G:
psql: FATAL: password authentication failed for user "kMbjQ6pR9G"
FATAL: password authentication failed for user "kMbjQ6pR9G"
I'm probably missing a configuration step on a fresh PostgreSQL install.
Here are the commands I tried to create a new user with its own database:
sudo -u postgres bash -c "psql -c \"CREATE USER $USER WITH PASSWORD '$PASSWORD';\"" &&
sudo -u postgres bash -c "psql -c \"CREATE DATABASE $DB;\"" &&
sudo -u postgres bash -c "psql -c \"GRANT ALL PRIVILEGES ON DATABASE $DB to $USER;\"" &&
Here's the PostgreSQL configuration:
$ sudo grep -v ^# /etc/postgresql/9.5/main/pg_hba.conf |grep -v ^$
local all postgres md5
local all all md5
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
Here are all the accounts I created to test:
$ sudo -u postgres bash -c 'psql -c "select * from pg_shadow;"'
Password:
usename | usesysid | usecreatedb | usesuper | userepl | usebypassrls | passwd | valuntil | useconfig
------------+----------+-------------+----------+---------+--------------+-------------------------------------+----------+-----------
av6izmbp9l | 16384 | f | f | f | f | md5a49721ef3f5428e269badc03931baf48 | |
rqmejchq7n | 16386 | f | f | f | f | md54edf3a05ca96a435f94152b75495e9dc | |
yyfiknesu8 | 16388 | f | f | f | f | md5b3d4a03913569abbf318bdc490d0f821 | |
qgv2ryqvdw | 16390 | f | f | f | f | md5d0959b4b5e1ed2982e19e4d0af574b11 | |
postgres | 10 | t | t | t | t | md5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | |
pf09joszuj | 16392 | f | f | f | f | md5a920dc31666459e5f0a96e9430d07f02 | |
I think it's something simple, but I'm missing the point.
Thanks for your support,
David.
Did you explicitly specify the database when trying to connect?
psql -h localhost -U myuser -d mydb
This is how I usually set up a user to completely own a given database:
create role myuser with createdb login encrypted password 'mypassword';
create database mydb with owner 'myuser' encoding 'utf8';
pg_hba.conf:
local myuser mydb md5
...
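Note that a change to pg_hba.conf only takes effect after a reload (a small sketch, using the role and database names from the example above):

# Reload pg_hba.conf without restarting the server, then test the new role.
sudo -u postgres psql -c "SELECT pg_reload_conf();"
psql -h localhost -U myuser -d mydb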
The problem was obvious: unquoted usernames and database names are folded to lowercase by PostgreSQL, so the mixed-case names I was passing to psql did not match the roles that were actually created!
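A quick illustration of that folding, using the names from the question (the password is just a placeholder):

# An unquoted mixed-case name is folded, so this actually creates the role kmbjq6pr9g.
sudo -u postgres psql -c "CREATE USER kMbjQ6pR9G WITH PASSWORD 'placeholder';"
# Connecting therefore only works with the lowercase spelling (or by double-quoting
# the identifier at creation time to preserve the case).
psql -h localhost -U kmbjq6pr9g -d fzvqfilx0d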
Here's the script I use to create a user and its own database:
#!/bin/bash
if [ -n "$1" ]; then
    DB="$1"
else
    DB=$(php -r "echo substr(str_shuffle(str_repeat('abcdefghijklmnopqrstuvwxyz', ceil(10/63) )),1,10);")
fi
USER=$(php -r "echo substr(str_shuffle(str_repeat('abcdefghijklmnopqrstuvwxyz', ceil(10/63) )),1,10);")
PASSWORD=$(php -r "echo substr(str_shuffle(str_repeat('0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ', ceil(20/63) )),1,20);")
sudo -u postgres psql -c "CREATE USER $USER WITH PASSWORD '$PASSWORD';"
sudo -u postgres psql -c "CREATE DATABASE $DB;"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE $DB to $USER;"
cat << EndOfMessage
POSTGRESQL_ADDON_DB="$DB"
POSTGRESQL_ADDON_HOST="localhost"
POSTGRESQL_ADDON_PASSWORD="$PASSWORD"
POSTGRESQL_ADDON_PORT="5432"
POSTGRESQL_ADDON_USER="$USER"
EndOfMessage
https://github.com/davidbouche/linux-install/blob/master/pgsql-database-create.sh
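Assuming the script is saved as pgsql-database-create.sh (the filename from the link), a typical run looks like this (a sketch; the generated user name and password will of course differ each time):

chmod +x pgsql-database-create.sh
./pgsql-database-create.sh mydb   # creates the user and database, then prints the POSTGRESQL_ADDON_* settings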
Thanks for your contribution.
David

plugin ":mail:1.0.7" has syntax error on execution

mail {
    host = "smtp.1und1.de"
    port = 465
    username = "pt8100853-1"
    password = "xxxxxxxx"
    props = ["mail.smtp.auth": "true",
             "mail.smtp.socketFactory.port": "465",
             "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory",
             "mail.smtp.socketFactory.fallback": "false"]
}
That is the contents of Config.groovy. My controller contains:
sendMail {
    to "gaby#strotmann.org"
    subject "Hello Fred"
    body 'How are you?'
}
After activating sendMail I get:
Error 2015-02-25 21:19:25,883 [http-bio-8080-exec-8] ERROR errors.GrailsExceptionResolver - MailSendException occurred when processing request: [GET] /Partner/kommunikation/eMail
Failed messages: com.sun.mail.smtp.SMTPSendFailedException: 501 Syntax error in parameters or arguments
;
nested exception is:
com.sun.mail.smtp.SMTPSenderFailedException: 501 Syntax error in parameters or arguments
. Stacktrace follows:
Message: Failed messages: com.sun.mail.smtp.SMTPSendFailedException: 501 Syntax error in parameters or arguments
;
nested exception is:
com.sun.mail.smtp.SMTPSenderFailedException: 501 Syntax error in parameters or arguments
Line | Method
->> 131 | sendMessage in grails.plugin.mail.MailMessageBuilder
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 55 | sendMail in grails.plugin.mail.MailService
| 59 | sendMail . in ''
| 165 | doCall in MailGrailsPlugin$_configureSendMail_closure6
| 37 | eMail . . . in org.strotmann.partner.KommunikationController
| 198 | doFilter in grails.plugin.cache.web.filter.PageFragmentCachingFilter
| 63 | doFilter . in grails.plugin.cache.web.filter.AbstractFilter
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 745 | run in java.lang.Thread
I'm somewhat confused about getting a syntax error, as I copied the original code directly from the plugin documentation.
How can I get around this problem?
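One way to narrow down a 501 reply like this (a suggestion, not something the stack trace confirms) is to talk to the SMTP server by hand and see which envelope command it rejects; the addresses below are placeholders:

# Open a TLS session to the SMTPS endpoint from the config above, then issue the
# envelope commands manually; the server's reply shows which parameter it
# considers malformed (the sender or the recipient address).
openssl s_client -quiet -crlf -connect smtp.1und1.de:465
# type interactively once connected:
#   EHLO localhost
#   AUTH LOGIN                       (followed by the base64-encoded credentials)
#   MAIL FROM:<sender@example.org>   (placeholder)
#   RCPT TO:<recipient@example.org>  (placeholder)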