Setting up monitoring locally - cadence-workflow

I am trying to set up monitoring locally, as described in https://cadenceworkflow.io/docs/operation-guide/monitor/#instructions
I am getting errors for http://host.docker.internal:9098/metrics and http://cadence:9090/metrics, as shown in the image below.
Can someone let me know how to resolve this? Thanks
[Image: endpoints state]

9090 is Prometheus itself. Are you configuring a different port? https://github.com/uber/cadence/blob/68fb2e60d1a2bff77c66acf60c954c9d19f9e5f5/docker/docker-compose-es-v7.yml#L14
In any case this one is not important, so you can ignore that error if you like.
9098 is the client SDK. The doc assumes that you have set it up correctly: https://github.com/uber/cadence-java-samples/blob/cdd43b6a65bf537ef6c77262a56cd22308d75e06/src/main/java/com/uber/cadence/samples/hello/HelloMetric.java#L53
https://github.com/uber-common/cadence-samples/blob/beacf223ab727c7fd114236f40806497c6d0aabd/config/development.yaml#L7
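For reference, the Prometheus side of those instructions is just a scrape config; a minimal sketch (the 9098 target assumes your worker's metrics reporter is actually listening there):
# prometheus.yml sketch; adjust targets to your setup
scrape_configs:
  - job_name: 'cadence-client'
    scrape_interval: 5s
    static_configs:
      # assumes the client SDK exposes metrics on host.docker.internal:9098
      - targets: ['host.docker.internal:9098']
If the target stays down, check that the worker process itself serves /metrics on that port before debugging Prometheus.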

Related

Grafana on ECS cluster returns "origin not allowed"

I created an ECS cluster, let's call it tools.
The tools cluster has 2 services:
sso-proxy
grafana - open source
I go through sso-proxy to reach Grafana.
When I try to enter credentials in Grafana I get the following error:
In incognito mode the login works the first time.
These are the ECS logs:
Can someone please help?
Grafana must receive the proper Host header in the request. I guess your "sso-proxy" doesn't pass it through. You didn't provide a reproducible example (why not, when you want help), so this is only a guess.
A lazy and insecure workaround would be to downgrade Grafana to version 8.3.4 or earlier, where the CSRF fix for CVE-2022-21703 is not included.
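If the sso-proxy is nginx-based, the usual fix is to pass the original Host header through; a sketch (the grafana:3000 upstream address is an assumption):
# nginx reverse-proxy sketch; the upstream address is a placeholder
location / {
    proxy_set_header Host $host;     # Grafana's CSRF check compares Origin against Host
    proxy_pass http://grafana:3000;
}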

Usage of the 'local' plugin in CoreDNS

I am new to CoreDNS configuration in Kubernetes and I am exploring the plugins CoreDNS provides. I see a plugin named local which responds to local requests, but I cannot work out a use case where this plugin would actually be useful. Can someone explain with an example how it can be used? Also, in the unbound configuration man page, I see an option called local-zone:
local-zone:
Configure a local zone. The type determines the answer to give if there is no match from local-data. The types are deny, refuse, static, transparent, redirect, nodefault, typetransparent, and are explained below. After that the default settings are listed. Use local-data: to enter data into the local zone. Answers for local zones are authoritative DNS answers. By default the zones are class IN.
nodefault:
Used to turn off default contents for AS112 zones. The other types also turn off default contents for the zone. The 'nodefault' option has no other effect than turning off default contents for the given zone.
Does this local plugin behave like this unbound local-zone option? If not, is there any plugin that acts like local-zone in unbound? I am expecting a CoreDNS plugin that behaves like local-data in unbound, particularly for the nodefault type, e.g. local-zone: nodefault. It would be really helpful if someone could clear this up for me. Thanks in advance!
As the author of the local plugin for CoreDNS explains, localhost.<searchpath> queries were hitting CoreDNS, which is wrong, so he wrote this plugin to intercept localhost.<domain> queries and return the correct response.
From the official CoreDNS github web page:
local will respond with a basic reply to a "local request". Local requests are defined to be names in the following zones: localhost, 0.in-addr.arpa, 127.in-addr.arpa and 255.in-addr.arpa, and any query asking for localhost.<domain>.
With local enabled any query falling under these zones will get a reply. This prevents the query from "escaping" to the internet and putting strain on external infrastructure.
You can check the code of the local plugin here. I don't see anything similar to unbound's local-zone nodefault functionality there.
See all in-tree plugins for CoreDNS here.
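For a concrete picture, a minimal Corefile enabling the plugin could look like this sketch (the forward target is an assumption):
# Corefile sketch: answer local zones locally, forward the rest upstream
.:53 {
    local                       # replies to localhost, 127.in-addr.arpa, etc.
    forward . /etc/resolv.conf  # assumption: your usual upstream resolver
    log
}
With this, a query for localhost.example.com is answered by CoreDNS itself instead of escaping to the upstream resolver.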

Configuring SSL in ZooKeeper

Can somebody help me sort out the SSL connection to ZooKeeper? My question is how to configure
CLIENT_JVMFLAGS in the zkCli.cmd file on Windows.
Ref: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide
Set CLIENT_JVMFLAGS in the terminal by exporting it exactly as described on that page, before starting zkCli. (Don't get confused by zkCli, which is the "zk command line", not "zkclient".)
Client VM args:
-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
-Dzookeeper.client.secure=true
-Dzookeeper.ssl.keyStore.location=C:/OpenSSL/bin/1KeyStore.jks
-Dzookeeper.ssl.keyStore.password=yourpass
-Dzookeeper.ssl.trustStore.location=C:/OpenSSL/bin/1truststore.jks
-Dzookeeper.ssl.trustStore.password=yourpass
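Since the question is about Windows, the equivalent of export there is set, run in the same cmd session before starting the client. A sketch (the secure port 2281 is an assumption, and on older ZooKeeper versions zkCli.cmd may not pick up CLIENT_JVMFLAGS at all, in which case you can append these properties to the java invocation inside zkCli.cmd):
rem Windows cmd sketch; run before zkCli.cmd (2281 is an assumed secureClientPort)
set CLIENT_JVMFLAGS=-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.ssl.keyStore.location=C:/OpenSSL/bin/1KeyStore.jks -Dzookeeper.ssl.keyStore.password=yourpass -Dzookeeper.ssl.trustStore.location=C:/OpenSSL/bin/1truststore.jks -Dzookeeper.ssl.trustStore.password=yourpass
zkCli.cmd -server localhost:2281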
For more, follow these steps to set up the connection and everything else you need:
https://issues.apache.org/jira/browse/ZOOKEEPER-2125
Add this to zkEnv.sh:
export SERVER_JVMFLAGS="
-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
-Dzookeeper.ssl.keyStore.location=/root/zookeeper/ssl/testKeyStore.jks
-Dzookeeper.ssl.keyStore.password=testpass
-Dzookeeper.ssl.trustStore.location=/root/zookeeper/ssl/testTrustStore.jks
-Dzookeeper.ssl.trustStore.password=testpass"
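On top of the JVM flags, the server needs a secure client port in zoo.cfg; a minimal sketch (2281 is just a conventional choice, not mandated):
# zoo.cfg sketch; the port number is an assumption
secureClientPort=2281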
You might also need to increase the heap size.
Follow this page:
ZooKeeper SSL User Guide

Mulesoft - Uh-oh spaghettios! There's nothing here

This error is driving me nuts...
Situation:
I am trying to create a REST API and use an API gateway proxy to access it. The proxy URL is HTTPS.
The deployment goes through fine. No errors reported in the logs. Worker assigned.
However, when I try to access it through the browser I get "Uh-oh spaghettios! There's nothing here.".
I have tried all the usual things, like making the HTTPS port dynamic using ${https.port} and using 0.0.0.0 instead of localhost in the http-listener config, but that does not help. Does this have something to do with the proxy version?
Any help or pointers will be great!
Make sure you follow Step 2 from the link below:
Getting Started with Connectors
All,
Got the resolution. The problem was with the certificate chain: the keystore did not contain the intermediate certificates. Once they were added to the keystore, the connectivity worked fine.
If only Mulesoft provided correct errors or detailed logging, I would have saved a lot of time on this.
Thanks for your inputs.
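For anyone hitting the same problem, importing an intermediate CA into the keystore with keytool looks roughly like this (alias, file names, and the store password are placeholders):
# keytool sketch; alias, file, and password are placeholders
keytool -importcert -trustcacerts -alias intermediate-ca -file intermediate.crt -keystore keystore.jks -storepass changeit
Repeat for each certificate in the chain, then redeploy with the updated keystore.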

ElasticSearch with Play 2 configuration

I am trying to use the ElasticSearch module (https://github.com/cleverage/play2-elasticsearch) with my Play 2 application. In the readme, it says I should add the following to my application.conf:
## define local mode or not
elasticsearch.local=false
## list clients
elasticsearch.client="192.168.0.46:9300"
# ex : elasticsearch.client="192.168.0.46:9300,192.168.0.47:9300"
What is local mode? What is my client URL supposed to be? I cannot find any information on what these options should be. With my current options, I get a NoNodeAvailableException.
Some people suggest:
elasticsearch.local=false
elasticsearch.client=mynode1:9200,mynode2:9200
But what are mynode1 and mynode2? It doesn't work with my application. Can anyone help? Thanks
What is local mode?
If elasticsearch.local=true, an Elasticsearch node is started inside your application (embedded).
What is my client URL supposed to be?
It's your host:port, but the port is the TCP transport port defined on your Elasticsearch node.
By default, the transport port starts at 9300 ( http://www.elasticsearch.org/guide/reference/modules/transport.html )
I cannot find any information on what these options should be. With my current options, I get a NoNodeAvailableException.
I think you have a problem with the port number.
mynode1 and mynode2 are Elasticsearch nodes.
Do you have any Elasticsearch node running?
On which IP address?
Can you try to connect to these nodes using curl, for example:
curl localhost:9200
Or
curl YOURIPADDRESS:9200
If one of these succeeds, then configure your Play app using YOURIPADDRESS:9300 as Nicolas Boire wrote above.
If neither succeeds, check that you have installed Elasticsearch and started it.
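Putting it together, the application.conf would then look like this sketch (YOURIPADDRESS is a placeholder for whichever node answered curl):
elasticsearch.local=false
# transport port 9300, not the HTTP port 9200 that curl used
elasticsearch.client="YOURIPADDRESS:9300"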
HTH
I've just had the same problem; be sure that you respect the version requirements written in the table: https://github.com/cleverage/play2-elasticsearch
At the beginning, I had set up the latest version of the plugin, 0.8.1, but my Elasticsearch version was 1.0.2.
After starting ES version 0.9.13 instead, it worked.