Reloading Prometheus configuration file [duplicate]

I would like to run Prometheus on port 8080 instead of 9090 (its normal default). To this end I have edited /etc/systemd/system/prometheus.service to contain these lines:
ExecStart=/usr/local/bin/prometheus \
--config.file=/etc/prometheus.yaml --web.enable-admin-api \
--web.listen-address=":8080"
I.e., I am using the option --web.listen-address to specify the non-default port.
However, when I start Prometheus (2.0 beta) with systemctl start prometheus I receive this error message:
parse external URL "": invalid external URL "http://<myhost>:8080\"/"
So how can I configure Prometheus such that I can reach its web UI at http://<myhost>:8080/ (instead of http://<myhost>:9090)?

The quotes were superfluous; systemd passed them through literally, so they ended up in the address Prometheus tried to parse (note the \" in the error above). This line will work:
ExecStart=/usr/local/bin/prometheus \
--config.file=/etc/prometheus.yaml --web.enable-admin-api \
--web.listen-address=:8080
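One easy-to-miss step, as a general systemd fact rather than anything Prometheus-specific: after editing the unit file, systemd must re-read it before the restart picks up the change:
sudo systemctl daemon-reload
sudo systemctl restart prometheus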

I'm using Ubuntu 20.04. It requires:
--web.listen-address=:8080             # defaults to IPv6
--web.listen-address=*:8080            # does not work
--web.listen-address=192.168.1.X:8080  # for IPv4
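To confirm what the listener actually bound to, a quick check (a sketch; assumes ss and curl are available, and that your Prometheus version serves the /-/healthy endpoint):
ss -tlnp | grep 8080                        # shows the bound address and family
curl -s http://192.168.1.X:8080/-/healthy   # should respond once the UI is reachable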

Related

How to use Swagger in Quarkus with Ingress-Nginx Kubernetes

Good afternoon. I'm trying to use Swagger in Quarkus, and locally it works great for me. However, when I deploy it to the production environment, where I'm using Ingress-Nginx as a reverse proxy in a Kubernetes cluster, I run into a problem: it doesn't let me view the Swagger interface:
Postman (local):
Swagger UI (local):
Postman (Kubernetes environment with Ingress-Nginx):
Swagger UI (Kubernetes environment with Ingress-Nginx):
My application.properties:
quarkus.datasource.db-kind=oracle
quarkus.datasource.jdbc.driver=oracle.jdbc.driver.OracleDriver
#quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver
quarkus.datasource.jdbc.url=jdbc:oracle:thin:@xxxxxxxxxxxx:1522/IVR
quarkus.datasource.username=${USERNAME_CONNECTION_BD:xxxxxxxx}
quarkus.datasource.password=${PASSWORD_CONNECTION_BD:xxxxxxxx.}
quarkus.http.port=${PORT:8082}
quarkus.http.ssl-port=${PORT-SSl:8083}
# Send output to a trace.log file under the /tmp directory
quarkus.log.file.path=/tmp/trace.log
quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n
# Configure a named handler that logs to console
quarkus.log.handler.console."STRUCTURED_LOGGING".format=%e%n
# Configure a named handler that logs to file
quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".enable=true
quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".format=%e%n
# Configure the category and link the two named handlers to it
quarkus.log.category."io.quarkus.category".level=INFO
quarkus.log.category."io.quarkus.category".handlers=STRUCTURED_LOGGING,STRUCTURED_LOGGING_FILE
quarkus.ssl.native=true
quarkus.http.ssl.certificate.key-store-file=${UBICATION_CERTIFICATE_SSL:srvdevrma1.jks}
quarkus.http.ssl.certificate.key-store-file-type=${TYPE_CERTIFICATE_SSL:JKS}
quarkus.http.ssl.certificate.key-store-password=${PASSWORD_CERTIFICATE_SSL:xxxxxxx}
quarkus.http.ssl.certificate.key-store-key-alias=${ALIAS_CERTIFICATE_SSL:xxxxxxxxx}
quarkus.native.add-all-charsets=true
quarkus.swagger-ui.path=/api/FindPukCodeBS/swagger-ui
quarkus.smallrye-openapi.path=/api/FindPukCodeBS/swagger
mp.openapi.extensions.smallrye.info.title=FindPukCodeBS
%dev.mp.openapi.extensions.smallrye.info.title=FindPukCodeBS
%test.mp.openapi.extensions.smallrye.info.title=FindPukCodeBS
mp.openapi.extensions.smallrye.info.version=1.0.1
mp.openapi.extensions.smallrye.info.description=Service that looks up the PUK code associated with an ICCID (SIM card)
mp.openapi.extensions.smallrye.info.termsOfService=Your terms here
mp.openapi.extensions.smallrye.info.contact.email=xxxxxxxxxxxxxxxxxxxx.com
mp.openapi.extensions.smallrye.info.contact.name=xxxxxxxxxxxxxxxxxx@telefonica.com
mp.openapi.extensions.smallrye.info.contact.url=http://exampleurl.com/contact
mp.openapi.extensions.smallrye.info.license.name=Apache 2.0
mp.openapi.extensions.smallrye.info.license.url=https://www.apache.org/licenses/LICENSE-2.0.html
What can be done in these cases?
The Swagger-UI is included by default only in dev mode.
To enable it on your application, you must set this parameter:
quarkus.swagger-ui.always-include=true
This parameter is fixed at build time, so you can't change it at deploy time. You must set it in your application.properties before building.
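As a minimal sketch, assuming a standard Maven-based Quarkus project: add the property to src/main/resources/application.properties and rebuild, so the UI is baked into the production artifact.
# application.properties: include Swagger-UI outside dev mode (build-time setting)
quarkus.swagger-ui.always-include=true
./mvnw clean package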
Reference
https://quarkus.io/guides/all-config#quarkus-swagger-ui_quarkus-swagger-ui-swagger-ui

Can't talk to HBase from different kubernetes namespace: java.net.UnknownHostException: hregion-0.hregion

I am using Kubernetes, where I have a Hadoop cluster running in namespace 'platform'.
I have an example application running in namespace 'example'.
The example application needs to talk to HBase. When it does so, we see the following error:
java.net.UnknownHostException: hregion-0.hregion
at java.net.InetAddress.getAllByName0(InetAddress.java:1280)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at java.net.InetAddress.getByName(InetAddress.java:1076)
at org.apache.hadoop.hbase.client.ConnectionUtils.getStubKey(ConnectionUtils.java:233)
at org.apache.hadoop.hbase.client.ConnectionImplementation.getClient(ConnectionImplementation.java:1192)
at org.apache.hadoop.hbase.client.ClientServiceCallable.setStubByServiceName(ClientServiceCallable.java:44)
at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:229)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
at org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1078)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:403)
at org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:445)
at org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:442)
at org.apache.hadoop.hbase.client.RpcRetryingCallable.call(RpcRetryingCallable.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:442)
The command
> nslookup hregion-0.hregion
fails on the client machine, because the hregion service lives in the platform namespace (inside which the same command succeeds).
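The namespace dependence is easy to demonstrate (a sketch, using the fully qualified service name that appears in the /etc/hosts entry below):
# from the 'example' namespace, the short name fails...
nslookup hregion-0.hregion
# ...while the fully qualified name resolves from any namespace
nslookup hregion-0.hregion.platform.svc.cluster.local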
We suspected that the HBase region server had registered itself with ZooKeeper using an incomplete name, and verified this by connecting to the ZooKeeper server:
[zk: localhost:2181(CONNECTED) 8] ls /hbase/rs
[hregion-0.hregion,16020,1560851357442]
The ConnectionUtils.getStubKey method simply uses java.net.InetAddress.getByName(hostname) and it is this method which fails.
Here is some zookeeper debugging info (this from the HBase master):
hbase(main):001:0> zk_dump
HBase is rooted at /hbase
Active master address: hmaster-0.hmaster.platform.svc.cluster.local,16000,1560851357485
Backup master addresses:
Region server holding hbase:meta: hregion-0.hregion,16020,1560851357442
Region servers:
hregion-0.hregion,16020,1560851357442
On the hregion-0 server, we have the following entries in /etc/hosts:
# Kubernetes-managed hosts file.
127.0.0.1 localhost
10.1.14.53 hregion-0.hregion.platform.svc.cluster.local hregion-0
And the /etc/resolv.conf file looks like this:
nameserver 10.96.0.10
search platform.svc.cluster.local svc.cluster.local cluster.local mycompany.com
options ndots:5
How do I fix this? I assume I need to tell HBase to register its nodes in ZooKeeper using their fully qualified domain names - how?
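One possible direction, offered as a sketch rather than a verified fix: HBase has an hbase.regionserver.hostname property that pins the name a region server registers under, so pointing it at the FQDN from /etc/hosts above should make the ZooKeeper entry resolvable from other namespaces:
<!-- hbase-site.xml on the region server (value shown for hregion-0) -->
<property>
  <name>hbase.regionserver.hostname</name>
  <value>hregion-0.hregion.platform.svc.cluster.local</value>
</property>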

Haproxy exporter unable to fetch data

I am using haproxy_exporter with Prometheus, have added Prometheus as a data source in Grafana, and use the HAProxy plugin with that data source, in order to fetch HAProxy stats and show them in Grafana. I am not able to get any output from it.
When I run the command below, I get an 'invalid URL port' error.
./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://user:$(cat pwfile)192.168.1.10:10000/haproxy/stats;csv"
OUTPUT:
INFO[0000] Starting haproxy_exporter (version=0.9.0, branch=master, revision=0cae8ee3e3f3b7c517db2cc68f386672d8b1b6a7) source=haproxy_exporter.go:495
INFO[0000] Build context (go=go1.10.1, user=root@rlinux57, date=20180724-16:08:06) source=haproxy_exporter.go:496
INFO[0000] Listening on :9101 source=haproxy_exporter.go:521
ERRO[0013] Can't scrape HAProxy: Get http://admin:abEDokA("192.168.1.10:10000/haproxy/stats;csv: invalid URL port abEDokA("192.168.1.10:10000" source=haproxy_exporter.go:315
And when I place an @ sign between the password and the IP address, as in ./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv"
it gives the error below:
INFO[0000] Starting haproxy_exporter (version=0.9.0, branch=master, revision=0cae8ee3e3f3b7c517db2cc68f386672d8b1b6a7) source=haproxy_exporter.go:495
INFO[0000] Build context (go=go1.10.1, user=root@rlinux57, date=20180724-16:08:06) source=haproxy_exporter.go:496
FATA[0000] parse http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv: net/url: invalid userinfo source=haproxy_exporter.go:500
And my prometheus settings are:
  - job_name: 'haproxy'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9101']
You need the @ in there, and you might need to get rid of the " in your password. Maybe simply escaping it (\") could work, but the second error message suggests haproxy_exporter correctly receives the URL as http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv but is then unable to parse it.
Yup: according to RFC 1738 (http://www.ietf.org/rfc/rfc1738.txt), " is not a valid character in a URL. You may get around it by using its escape, %22.
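Putting the two fixes together (the @ before the host, and the quote in the password percent-encoded as %22), the invocation would look like this; a sketch, untested against this setup:
./haproxy_exporter --no-haproxy.ssl-verify \
  --haproxy.scrape-uri='http://admin:abEDokA(%22@192.168.1.10:10000/haproxy/stats;csv'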

HAProxy 1.8 - Passing socket connection during HAProxy soft reload

I am using the Kubernetes load-balancer (here the HAProxy configuration is rewritten every 10s and HAProxy is restarted). Since I want to pass the socket connections across the reload, I changed the Dockerfile of HAProxy so that it uses the HAProxy 1.8-dev2 version. The image used is haproxytech/haproxy-ubuntu:1.8-dev2. I also added the following line under the global section of the template.cfg file (the template from which the HAProxy configuration is written):
stats socket /var/run/haproxy/admin.sock mode 660 level admin expose-fd listeners
I also changed the reload command in the haproxy_reload file as follows:
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -x /var/run/haproxy/admin.sock -sf $(cat /var/run/haproxy.pid)
Once I run the Docker image (kubectl create -f rc.yaml --namespace load-balancer), I get the following error:
W1027 07:13:37.922565 5 service_loadbalancer.go:687] Requeuing kube-system/kube-dns because of error: error restarting haproxy -- [WARNING] 299/071337 (21) : We didn't get the expected number of sockets (expecting 1347703880 got 0)
[ALERT] 299/071337 (21) : Failed to get the sockets from the old process!
: exit status 1
FYI:
I commented out the stats socket line in the template.cfg file and ran the Docker image again to verify whether the restart command would detect the socket. The same error occurred. It seems the soft-restart command does not find the stats socket created by HAProxy.
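One check worth making before the reload (a sketch; assumes socat is available inside the container): the old process must actually be answering on the socket passed to -x, with expose-fd listeners in effect, or the file-descriptor handoff will fail.
# confirm the running process answers on the stats socket
echo "show info" | socat stdio /var/run/haproxy/admin.sock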

python-memcache memcached -- I installed both on a CentOS VirtualBox but get/set never seem to work

I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (supposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (supposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is: what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the shared memcache area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obviously named one when I do a ps.
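A quick way to check (a sketch, assuming the stock SysV init script installed by the CentOS memcached package):
service memcached status       # reports whether the daemon is running
ps aux | grep [m]emcached      # or look for the process directly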
I tried to get pylibmc installed on my VM but was unable to find a working installation, so for now I will see if I can get the above working first.
I discovered that if I run straight from the interactive Python console I get a bit more output when I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try, per the example, to use telnet to connect to the port, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/in.telnetd
    log_on_failure  += USERID
    disable         = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran it both ways (iptables started and stopped) but it had no effect, so I am out of ideas. What do I need to do so the port will be allowed, if that is the problem?
Or is there a memcached service that needs to be running in order to open up the port?
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently a listen option. Do this for steps 2 & 3. I'm not certain which file is actually used at runtime.
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
I tried opening a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is (memcached speaks its own text protocol rather than HTTP, which would explain the blank page). I'm not running Apache or anything like that, so perhaps it's not relevant to my cause. Perhaps I would have to supply a ?key=blah or something.
5) http://127.0.0.1:11211
6) Now it should be ready to go. If one runs the test shown with the following, it should work; at least it did for me. Doing help(memcache) will display a simple program; just paste that in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)
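As a final sanity check (a sketch; assumes nc is installed), talk to the daemon directly and then round-trip a key through python-memcached:
echo stats | nc 127.0.0.1 11211    # should print STAT lines instead of connection refused
python -c "import memcache; mc = memcache.Client(['127.0.0.1:11211']); print mc.set('k', 'v'), mc.get('k')"
# a true value (e.g. 1) followed by 'v' indicates set/get now work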