How to use Swagger in Quarkus with Ingress-Nginx on Kubernetes

Good afternoon. I'm trying to use Swagger in Quarkus, and locally it works great for me. However, when I deploy it to the production environment, where I'm using Ingress-Nginx as a reverse proxy in a Kubernetes cluster, I run into a problem: it doesn't let me view the Swagger interface:
Postman Local:
Swagger Local:
Postman Kubernetes Environment with Ingress-Nginx:
Swagger-UI in Kubernetes Environment with Ingress-Nginx:
My application.properties:
quarkus.datasource.db-kind=oracle
quarkus.datasource.jdbc.driver=oracle.jdbc.driver.OracleDriver
#quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver
quarkus.datasource.jdbc.url=jdbc:oracle:thin:@xxxxxxxxxxxx:1522/IVR
quarkus.datasource.username=${USERNAME_CONNECTION_BD:xxxxxxxx}
quarkus.datasource.password=${PASSWORD_CONNECTION_BD:xxxxxxxx.}
quarkus.http.port=${PORT:8082}
quarkus.http.ssl-port=${PORT-SSl:8083}
# Send output to a trace.log file under the /tmp directory
quarkus.log.file.path=/tmp/trace.log
quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n
# Configure a named handler that logs to console
quarkus.log.handler.console."STRUCTURED_LOGGING".format=%e%n
# Configure a named handler that logs to file
quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".enable=true
quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".format=%e%n
# Configure the category and link the two named handlers to it
quarkus.log.category."io.quarkus.category".level=INFO
quarkus.log.category."io.quarkus.category".handlers=STRUCTURED_LOGGING,STRUCTURED_LOGGING_FILE
quarkus.ssl.native=true
quarkus.http.ssl.certificate.key-store-file=${UBICATION_CERTIFICATE_SSL:srvdevrma1.jks}
quarkus.http.ssl.certificate.key-store-file-type=${TYPE_CERTIFICATE_SSL:JKS}
quarkus.http.ssl.certificate.key-store-password=${PASSWORD_CERTIFICATE_SSL:xxxxxxx}
quarkus.http.ssl.certificate.key-store-key-alias=${ALIAS_CERTIFICATE_SSL:xxxxxxxxx}
quarkus.native.add-all-charsets=true
quarkus.swagger-ui.path=/api/FindPukCodeBS/swagger-ui
quarkus.smallrye-openapi.path=/api/FindPukCodeBS/swagger
mp.openapi.extensions.smallrye.info.title=FindPukCodeBS
%dev.mp.openapi.extensions.smallrye.info.title=FindPukCodeBS
%test.mp.openapi.extensions.smallrye.info.title=FindPukCodeBS
mp.openapi.extensions.smallrye.info.version=1.0.1
mp.openapi.extensions.smallrye.info.description=Service that looks up the PUK code associated with an ICCID (SIM card)
mp.openapi.extensions.smallrye.info.termsOfService=Your terms here
mp.openapi.extensions.smallrye.info.contact.email=xxxxxxxxxxxxxxxxxxxx.com
mp.openapi.extensions.smallrye.info.contact.name=xxxxxxxxxxxxxxxxxx@telefonica.com
mp.openapi.extensions.smallrye.info.contact.url=http://exampleurl.com/contact
mp.openapi.extensions.smallrye.info.license.name=Apache 2.0
mp.openapi.extensions.smallrye.info.license.url=https://www.apache.org/licenses/LICENSE-2.0.html
What can be done in these cases?

The Swagger-UI is included by default only in dev mode.
To enable it in your deployed application, you must set this property:
quarkus.swagger-ui.always-include=true
This property is resolved at build time, so you can't change it at deploy time. You must set it in your application.properties before building.
Reference
https://quarkus.io/guides/all-config#quarkus-swagger-ui_quarkus-swagger-ui-swagger-ui
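Putting that together with the custom paths from the question, the relevant application.properties lines would be (a sketch; the paths are copied from the question):
# include Swagger-UI in all builds, not just dev mode (build-time property)
quarkus.swagger-ui.always-include=true
quarkus.swagger-ui.path=/api/FindPukCodeBS/swagger-ui
quarkus.smallrye-openapi.path=/api/FindPukCodeBS/swagger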

Related

Metrics from spring batch are not pushed to prometheus push gateway

I followed the approaches mentioned in this post. Basically, I have my local Prometheus and push gateway set up using Docker, from the Spring Batch examples.
I have the below dependencies added in my build.gradle, which means the PrometheusPushGatewayManager bean is auto-configured and should push metrics to the configured gateway.
implementation("io.micrometer:micrometer-registry-prometheus:1.8.4")
implementation("io.prometheus:simpleclient_pushgateway:0.16.0")
My application.yml looks like below
metrics:
  export:
    prometheus:
      enabled: true
      pushgateway:
        enabled: true
        base-url: http://0.0.0.0:9091
        job: main-job
        push-rate: 5s
      descriptions: true
But when I navigate to /metrics endpoint, the metrics are having values as 0.
example :
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="process-5.csv",status="FAILED"} 0
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="process-6.csv",status="COMPLETED"} 0
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="process-7.csv",status="FAILED"} 0
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="process-2csv",status="FAILED"} 0
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="start-job-job",status="COMPLETED"} 0
I've checked this post, which indicates that we need to configure a registry. But if I'm using the auto-configured PrometheusPushGatewayManager by adding the simpleclient_pushgateway dependency, how do I configure a registry?
Setting a breakpoint and viewing the value of Metrics.globalRegistry.meters[1] shows values like SampleImpl{duration(seconds)=392.074203242, duration(nanos)=3.92074203242E11, startTimeNanos=1098399187818886}. So the metrics are captured, but not pushed properly.
Am I missing some configuration needed to get the metrics pushed properly to the gateway?

Consul agent on kubernetes, on node or pod?

I deployed an AWS EKS cluster via Terraform. I also deployed Consul following HashiCorp's tutorial, and I see the nodes in Consul's UI.
Now I'm wondering: how will all the Consul agents know about the pods I deploy? I deploy something and it's not shown anywhere in Consul.
I can't find any documentation on how to register pods (services) in Consul via the node's Consul agent. Do I need to configure that somewhere? Should I skip the node's agent and register the service straight from the pod? HashiCorp discourages this, since it may increase resource utilization depending on how many pods one deploys on a given node. But then how does the node's agent learn about the services deployed on that node?
Moreover, when I deploy a pod on a node, ssh into the node, and install Consul there, that Consul agent can't find the Consul server (unlike the pod, which can find it).
EDIT:
Bottom line is I can't find WHERE to add the configuration. If I execute ON THE POD:
consul members
It works properly and I get:
Node                          Address            Status  Type    Build   Protocol  DC    Segment
consul-consul-server-0        10.0.103.23:8301   alive   server  1.10.0  2         full  <all>
consul-consul-server-1        10.0.101.151:8301  alive   server  1.10.0  2         full  <all>
consul-consul-server-2        10.0.102.112:8301  alive   server  1.10.0  2         full  <all>
ip-10-0-101-129.ec2.internal  10.0.101.70:8301   alive   client  1.10.0  2         full  <default>
ip-10-0-102-175.ec2.internal  10.0.102.244:8301  alive   client  1.10.0  2         full  <default>
ip-10-0-103-240.ec2.internal  10.0.103.245:8301  alive   client  1.10.0  2         full  <default>
ip-10-0-3-223.ec2.internal    10.0.3.249:8301    alive   client  1.10.0  2         full  <default>
But if I execute:
# consul agent -datacenter=voip-full -config-dir=/etc/consul.d/ -log-file=log-file -advertise=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
I get the following error:
==> Starting Consul agent...
Version: '1.10.1'
Node ID: 'f10070e7-9910-06c7-0e12-6edb6cc4c9b9'
Node name: 'ip-10-0-3-223.ec2.internal'
Datacenter: 'voip-full' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 10.0.3.223 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
2021-08-16T18:23:06.936Z [WARN] agent: skipping file /etc/consul.d/consul.env, extension must be .hcl or .json, or config format must be set
2021-08-16T18:23:06.936Z [WARN] agent: Node name "ip-10-0-3-223.ec2.internal" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2021-08-16T18:23:06.946Z [WARN] agent.auto_config: skipping file /etc/consul.d/consul.env, extension must be .hcl or .json, or config format must be set
2021-08-16T18:23:06.947Z [WARN] agent.auto_config: Node name "ip-10-0-3-223.ec2.internal" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2021-08-16T18:23:06.948Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: ip-10-0-3-223.ec2.internal 10.0.3.223
2021-08-16T18:23:06.948Z [INFO] agent.router: Initializing LAN area manager
2021-08-16T18:23:06.950Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=udp
2021-08-16T18:23:06.950Z [WARN] agent.client.serf.lan: serf: Failed to re-join any previously known node
2021-08-16T18:23:06.950Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=tcp
2021-08-16T18:23:06.951Z [INFO] agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
2021-08-16T18:23:06.951Z [WARN] agent: DEPRECATED Backwards compatibility with pre-1.9 metrics enabled. These metrics will be removed in a future version of Consul. Set `telemetry { disable_compat_1.9 = true }` to disable them.
2021-08-16T18:23:06.953Z [INFO] agent: started state syncer
2021-08-16T18:23:06.953Z [INFO] agent: Consul agent running!
2021-08-16T18:23:06.953Z [WARN] agent.router.manager: No servers available
2021-08-16T18:23:06.954Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
2021-08-16T18:23:34.169Z [WARN] agent.router.manager: No servers available
2021-08-16T18:23:34.169Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
So where do I add the config?
I also tried adding a service in k8s pointing to the pod, but the service doesn't come up in Consul's UI...
What do you guys recommend?
Thanks
Consul knows where these services are located because each service registers with its local Consul client. Operators can register services manually, configuration management tools can register services when they are deployed, or container orchestration platforms can register services automatically via integrations.
If you are planning to use the manual option, you have to register the service with Consul yourself.
Something like:
echo '{
  "service": {
    "name": "web",
    "tags": [
      "rails"
    ],
    "port": 80
  }
}' > ./consul.d/web.json
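After writing the definition into the agent's config directory, the agent needs to pick it up; a reload does that (restarting the agent also works):
consul reload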
You can find a good example at: https://thenewstack.io/implementing-service-discovery-of-microservices-with-consul/
This is also a very good document for detailed configuration of health checks and service discovery: https://cloud.spring.io/spring-cloud-consul/multi/multi_spring-cloud-consul-discovery.html
Official documentation: https://learn.hashicorp.com/tutorials/consul/get-started-service-discovery
BTW, I was finally able to figure out the issue: consul-dns is not deployed by default. I had to deploy it manually, then forward all .consul requests from CoreDNS to consul-dns.
All is working now. Thanks!
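For anyone hitting the same thing, the CoreDNS side of that fix is typically a stanza in the Corefile along these lines (a sketch; the IP is a placeholder for the ClusterIP of the consul-dns service, not a value from this question):
consul:53 {
    errors
    cache 30
    # placeholder: ClusterIP of the consul-dns service
    forward . 10.0.0.10
}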

Spring Boot Admin not resolving URLS for health and manage

Spring Version: 5.0.8.RELEASE
Spring Boot Dependencies Version: 2.0.4.RELEASE
Java Version: 1.8.0_131
Spring Boot Admin reports that a client is down. However, I can see that the client is running by navigating to it in the browser. On the details view for the client in Spring Boot Admin, the message under the health section is "Fetching health failed, Network Error". There are three URLs shown in the header of the details page:
http://localhost:8090/WorkOrderPrinting
http://localhost:8090/WorkOrderPrinting/manage
http://localhost:8090/WorkOrderPrinting/manage/health
Clicking them opens the respective views. Here is the output from the health view:
{"status":"UP","details":{"diskSpace":{"status":"UP","details":{"total":80482930688,"free":77726302208,"threshold":10485760}},"db":{"status":"UP","details":{"tenantRoutingDataSource":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"userSetTenantRoutingDS":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"dataSourceCbCommon":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"dataSourceCbOrg":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}}}}}}
It seems to be telling me that the client application is up and running, so I am not sure why the Spring Boot Admin UI is not reflecting that.
I have a second application running as a client, and it's working as expected. The result of its health URL is the same:
{"status":"UP","details":{"diskSpace":{"status":"UP","details":{"total":80482930688,"free":77726248960,"threshold":10485760}},"db":{"status":"UP","details":{"tenantRoutingDataSource":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"userSetTenantRoutingDS":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"dataSourceCbCommon":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"dataSourceCbOrg":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}}}}}}
The Spring Boot Admin Server log shows this error for Work Order Printing. Again, I'm not sure why the URLs work in the browser while the log shows an error.
2018-09-13 09:19:19.071 DEBUG 5208 --- [ parallel-2] d.c.b.a.server.services.StatusUpdater : Update status for Instance(id=6212ad7c5ab4, version=1, registration=Registration(name=Work Order Printing Development, managementUrl=http://localhost:8090/WorkOrderPrinting/manage, healthUrl=http://localhost:8090/WorkOrderPrinting/manage/health, serviceUrl=http://localhost:8090/WorkOrderPrinting, source=http-api), registered=true, statusInfo=StatusInfo(status=DOWN, details={error=Found, status=302}), statusTimestamp=2018-09-13T13:15:08.169Z, info=Info(values={}), endpoints=Endpoints(endpoints={health=Endpoint(id=health, url=http://localhost:8090/WorkOrderPrinting/manage/health)}), buildVersion=null)
Spring Boot Admin Server Config
server.servlet.context-path=/adminserver
logging.file=/var/log/eti/webui/adminserver.log
logging.level.de.codecentric.boot.admin.server=INFO
Failing Client Config
#Admin Panel config
#
#This is the URL for the admin panel that this application will send its information to
spring.boot.admin.client.url=http://localhost:8080/adminserver
#This is required when deploying to Tomcat because the Admin panel can't seem to determine what the URL will be on its own
spring.boot.admin.client.instance.service-base-url=http://localhost:8090
#This is the name that will be displayed in the admin panel for this application
spring.boot.admin.client.instance.name=Work Order Printing
#
spring.boot.admin.auto-registration=true
#
#Actuator config needed to expose endpoints to admin panel
#
management.endpoints.web.base-path=/manage
management.endpoints.web.exposure.include:*
management.endpoint.health.show-details=always
Working Client Config
#Admin Panel config
#
#This is the URL for the admin panel that this application will send its information to
spring.boot.admin.client.url=http://localhost:8080/adminserver
#This is required when deploying to Tomcat because the Admin panel can't seem to determine what the URL will be on its own
spring.boot.admin.client.instance.service-base-url=http://localhost:8085
#This is the name that will be displayed in the admin panel for this application
spring.boot.admin.client.instance.name=LaunchPad
spring.boot.admin.auto-registration=true
#
#Actuator config needed to expose endpoints to admin panel
#
management.endpoints.web.base-path=/manage
management.endpoints.web.exposure.include:*
management.endpoint.health.show-details=always
So the problem turned out to be a configuration issue on my end. The Admin Server log shows the following:
2018-09-13 09:19:19.071 DEBUG 5208 --- [ parallel-2] d.c.b.a.server.services.StatusUpdater : Update status for Instance(id=6212ad7c5ab4, version=1, registration=Registration(name=Work Order Printing Development, managementUrl=http://localhost:8090/WorkOrderPrinting/manage, healthUrl=http://localhost:8090/WorkOrderPrinting/manage/health, serviceUrl=http://localhost:8090/WorkOrderPrinting, source=http-api), registered=true, statusInfo=StatusInfo(status=DOWN, details={error=Found, status=302}), statusTimestamp=2018-09-13T13:15:08.169Z, info=Info(values={}), endpoints=Endpoints(endpoints={health=Endpoint(id=health, url=http://localhost:8090/WorkOrderPrinting/manage/health)}), buildVersion=null)
It's basically saying that there is a 302 (redirect) happening, so it can't reach the URL. The reason for this is that I forgot to allow access to the URLs in my Spring Security config. I could get to them with the browser because I was logged in; Spring Boot Admin could not, because it was not logged in.
I added a rule to allow access to the /manage/** URLs:
public void configure(WebSecurity web) throws Exception {
    web.ignoring().antMatchers("/css/**", "/fonts/**", "/img/**", "/js/**",
            "/close", "/webjars/**", "/manage/**");
}

Haproxy exporter unable to fetch data

I am using haproxy_exporter with Prometheus; I have added Prometheus as a data source in Grafana, and the HAProxy plugin uses that data source to fetch HAProxy stats and show them in Grafana. But I am not able to get any output from it.
When I run the command below, I get an "invalid URL port" error.
./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://user:$(cat pwfile)192.168.1.10:10000/haproxy/stats;csv"
OUTPUT:
INFO[0000] Starting haproxy_exporter (version=0.9.0, branch=master, revision=0cae8ee3e3f3b7c517db2cc68f386672d8b1b6a7) source=haproxy_exporter.go:495
INFO[0000] Build context (go=go1.10.1, user=root@rlinux57, date=20180724-16:08:06) source=haproxy_exporter.go:496
INFO[0000] Listening on :9101 source=haproxy_exporter.go:521
ERRO[0013] Can't scrape HAProxy: Get http://admin:abEDokA("192.168.1.10:10000/haproxy/stats;csv: invalid URL port abEDokA("192.168.1.10:10000" source=haproxy_exporter.go:315
And when I placed an @ sign between the password and the IP address, such as ./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv"
It gives below error:
INFO[0000] Starting haproxy_exporter (version=0.9.0, branch=master, revision=0cae8ee3e3f3b7c517db2cc68f386672d8b1b6a7) source=haproxy_exporter.go:495
INFO[0000] Build context (go=go1.10.1, user=root@rlinux57, date=20180724-16:08:06) source=haproxy_exporter.go:496
FATA[0000] parse http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv: net/url: invalid userinfo source=haproxy_exporter.go:500
And my Prometheus settings are:
- job_name: 'haproxy'
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  static_configs:
    - targets: ['localhost:9101']
You need the @ in there, and you might need to get rid of the " in your password. Maybe simply escaping it (\") could work, but the second error message suggests haproxy_exporter correctly receives the URL as http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv but is then unable to parse it.
Yup, according to http://www.ietf.org/rfc/rfc1738.txt, " is not a valid character in a URL. You can get around it by using its escape, %22.
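Putting both suggestions together, the invocation would look roughly like this (a sketch reusing the credentials from the question, with the quote percent-encoded as %22; single quotes keep the shell from interpreting the password's characters):
./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri='http://admin:abEDokA%22@192.168.1.10:10000/haproxy/stats;csv'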

Can not integrate TeleStax Restcomm in MetaSwitch Clearwater

I really want to study how Restcomm works in Clearwater as a Telephony Application Server.
I follow the guideline at:
http://telestax.com/wp-content/uploads/2013/12/ClearWater-RestComm-Integration-2013.pdf
But it seems the version of Restcomm in this article is too old (TelScale-Restcomm-JBoss-AS7-7.1.2-GA), and I am using a newer version of Restcomm (Restcomm-JBoss-AS7-7.7.0.900).
I could not follow the guide in this article because of some configuration differences between the two versions.
I set up Clearwater successfully; I can make a SIP call in Clearwater.
When I set up Restcomm (version Restcomm-JBoss-AS7-7.7.0.900),
I changed the local-address of the media server in the file standalone/deployments/restcomm.war/WEB-INF/conf/restcomm.xml as follows:
<media-server-manager>
  ...
  <local-address>192.168.0.117</local-address>
  ...
</media-server-manager>
(192.168.0.117 is my local IP address)
I did not change the references to 127.0.0.1:8080 in the restcomm.xml file to point to 192.168.0.117:8180, because there are no references to 127.0.0.1:8080.
I think that may be a difference between the two versions.
I also did not edit the JAVA_OPTS in the bin/standalone.conf file, because I did not understand that step.
I edited the file mediaserver/deploy/server-beans.xml as follows:
<property name="bindAddress">192.168.0.117</property>
<property name="localBindAddress">127.0.0.1</property>
<property name="externalAddress"><null/></property>
<property name="localNetwork">192.168.0.0</property>
<property name="localSubnet">255.255.255.0</property>
After that, I started the media server:
$ cd ${JBOSS_HOME}/mediaserver/bin
$ ./run.sh
The media server started successfully.
Then, I started the Restcomm JBoss server:
$ cd ${JBOSS_HOME}/bin
$ sudo ./standalone.sh -Djboss.socket.binding.port-offset=100 -b 192.168.0.117
It produced some errors (screenshot not reproduced here).
But the JBoss server still works when I go to http://192.168.0.117:8180.
However, I cannot access the Restcomm management interface.
I also tried making some modifications as in the article:
-Modify the default app standalone/deployments/restcomm.war/demos/hello-play.xml:
<Response>
  <Play>http://192.168.0.117:8180/restcomm/audio/demo-prompt.wav</Play>
</Response>
-Configure the IMS core through the Ellis configuration file:
{
"Restcomm" :
"<InitialFilterCriteria><Priority>1</Priority><TriggerPoint> <ConditionTypeCNF></ConditionTypeCNF><SPT><ConditionNegated>0</ConditionNegated><Group>0</Group><Method>INVITE</Method><Extension></Extension></SPT></TriggerPoint><ApplicationServer><ServerName>sip:192.168.0.117:5180</ServerName><DefaultHandling>0</DefaultHandling></ApplicationServer></InitialFilterCriteria>"
}
-Bind the number to the default app:
curl -X POST http://ACae6e420f425248d6a26948c17a9e2acf:77f8c12cc7b8f8423e5c38b035249166@192.168.0.117:8180/restcomm/2012-04-24/Accounts/ACae6e420f425248d6a26948c17a9e2acf/IncomingPhoneNumbers.json -d "PhoneNumber=4321" -d "VoiceUrl=http://192.168.0.117:8180/restcomm/demos/hello-play.xml"
It returned an error (not shown here).
Those are my problems.
Thank you very much for supporting me.
Best Regards,
Indeed, those steps are way too old and probably won't work on the new version.
I would recommend starting Restcomm with Docker instead and configuring the JVM options and port offset (see http://docs.telestax.com/restcomm-docker-environment-variables/) in the docker run command.
The rest of the description for configuring Clearwater should still be valid.
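As a rough shape of that, the run command would look something like the following (a sketch only: the image name, tag, and any environment variables should be checked against the page linked above, and the published ports are assumptions based on common Restcomm defaults, not values from this question):
docker run -d --name restcomm \
  -p 8080:8080 \
  -p 5080:5080 \
  restcomm/restcomm:latest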