When I try to create a MUC room (as described in the official docs) using Postman with the following call:
POST /api/create_room
{
"name": "testRoom",
"service": "conference.xmpp.localhost",
"host": "xmpp.localhost"
}
The server returns 0, but when I call it a second time it returns 1.
The token I am using has all the scopes, and calls to other methods succeed.
The mod_muc_admin module is enabled.
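For reference, here is how the same call can be made outside Postman, passing the OAuth token as a Bearer header. This is only a minimal sketch: the base URL (the HTTPS listener on port 5443 from the configuration below) and the token value are assumptions.
import requests
# Minimal sketch: create_room via mod_http_api, authenticating with an OAuth
# token that carries the ejabberd:admin scope. URL and token are placeholders.
url = "https://xmpp.localhost:5443/api/create_room"
headers = {"Authorization": "Bearer <oauth-token>"}
payload = {
    "name": "testRoom",
    "service": "conference.xmpp.localhost",
    "host": "xmpp.localhost",
}
resp = requests.post(url, json=payload, headers=headers, verify=False)  # self-signed cert
print(resp.status_code, resp.text)  # body is 0 on success, 1 otherwise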
EDIT 25 Jul 2020
The configuration I'm using is as follows:
###
### ejabberd configuration file
###
### The parameters used in this configuration file are explained at
###
### https://docs.ejabberd.im/admin/configuration
###
### The configuration file is written in YAML.
### *******************************************************
### ******* !!! WARNING !!! *******
### ******* YAML IS INDENTATION SENSITIVE *******
### ******* MAKE SURE YOU INDENT SECTIONS CORRECTLY *******
### *******************************************************
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
###
hosts:
- localhost
- xmpp.localhost
- conference.xmpp.localhost
loglevel: 5
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
log_rate_limit: 100
certfiles:
- /home/ejabberd/conf/server.pem
ca_file: "/home/ejabberd/conf/cacert.pem"
listen:
-
port: 5222
ip: "::"
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: true
-
port: 5269
ip: "::"
module: ejabberd_s2s_in
max_stanza_size: 524288
-
port: 5443
ip: "::"
module: ejabberd_http
tls: true
request_handlers:
"/admin": ejabberd_web_admin
"/api": mod_http_api
"/bosh": mod_bosh
"/captcha": ejabberd_captcha
"/upload": mod_http_upload
"/ws": ejabberd_http_ws
"/oauth": ejabberd_oauth
-
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
"/admin": ejabberd_web_admin
-
port: 1883
ip: "::"
module: mod_mqtt
backlog: 1000
s2s_use_starttls: optional
acl:
local:
user_regexp: ""
loopback:
ip:
- 127.0.0.0/8
- ::1/128
- ::FFFF:127.0.0.1/128
admin:
user:
- "admin#localhost"
- "admin#xmpp.localhost"
access_rules:
local:
allow: local
c2s:
deny: blocked
allow: all
announce:
allow: admin
configure:
allow: admin
muc_create:
allow: local
pubsub_createnode:
allow: local
trusted_network:
allow: loopback
api_permissions:
"console commands":
from:
- ejabberd_ctl
who: all
what: "*"
"admin access":
who:
access:
allow:
acl: loopback
acl: admin
oauth:
scope: "ejabberd:admin"
access:
allow:
acl: admin
what:
- "*"
- "!stop"
- "!start"
"public commands":
who:
ip: 127.0.0.1/8
what:
- status
- connected_users_number
shaper:
normal: 1000
fast: 50000
shaper_rules:
max_user_sessions: 10
max_user_offline_messages:
5000: admin
100: all
c2s_shaper:
none: admin
normal: all
s2s_shaper: fast
max_fsm_queue: 10000
acme:
contact: "mailto:example-admin#example.com"
ca_url: "https://acme-v01.api.letsencrypt.org"
sql_type: pgsql
sql_server: "postgres"
sql_database: "ejabberd"
sql_username: "ejabberd"
sql_password: "*************"
auth_method: sql
auth_password_format: scram
default_db: sql
commands_admin_access: configure
commands:
- add_commands:
- user
oauth_expire: 3600
oauth_access: all
modules:
mod_adhoc: {}
mod_admin_extra: {}
mod_announce:
access: announce
mod_avatar: {}
mod_blocking: {}
mod_bosh: {}
mod_caps: {}
mod_carboncopy: {}
mod_client_state: {}
mod_configure: {}
mod_disco: {}
mod_fail2ban: {}
mod_http_api: {}
mod_http_upload:
put_url: https://@HOST@:5443/upload
mod_last: {}
mod_mam:
compress_xml: true
db_type: sql
assume_mam_usage: true
default: always
mod_mqtt: {}
mod_muc:
access:
- allow
access_admin:
- allow: admin
access_create: muc_create
access_persistent: muc_create
access_mam:
- allow
default_room_options:
allow_subscription: true # enable MucSub
allow_private_messages: false
allow_user_invites: true
mam: true
persistent: true
mod_muc_admin: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
mod_privacy: {}
mod_private: {}
mod_proxy65:
access: local
max_connections: 5
mod_pubsub:
access_createnode: pubsub_createnode
plugins:
- flat
- pep
force_node_config:
## Avoid buggy clients to make their bookmarks public
storage:bookmarks:
access_model: whitelist
mod_push: {}
mod_push_keepalive: {}
mod_register:
## Only accept registration requests from the "trusted"
## network (see access_rules section above).
## Think twice before enabling registration from any
## address. See the Jabber SPAM Manifesto for details:
## https://github.com/ge0rg/jabber-spam-fighting-manifesto
ip_access: trusted_network
mod_roster:
versioning: true
mod_sip: {}
mod_s2s_dialback: {}
mod_shared_roster: {}
mod_stream_mgmt:
resend_on_timeout: if_offline
mod_vcard: {}
mod_vcard_xupdate: {}
mod_version:
show_os: false
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
That is the expected behavior. According to the docs:
Result:
res :: integer : Status code (0 on success, 1 otherwise)
So 0 stands for success and 1 stands for failure, not the other way around: the first call creates the room and returns 0, and the second call returns 1 because the room already exists.
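To see both return values in one run, here is a minimal sketch along the lines of the script further down in this thread (endpoint and credentials are assumptions): the first call returns 0 and the second returns 1, because the room already exists by then.
import requests
from requests.auth import HTTPBasicAuth
# Minimal sketch: call create_room twice against mod_http_api; the endpoint
# and the admin password are placeholders.
url = "https://xmpp.localhost:5443/api/create_room"
data = {"name": "testRoom", "service": "conference.xmpp.localhost", "host": "xmpp.localhost"}
auth = HTTPBasicAuth("admin@xmpp.localhost", "password")
for attempt in (1, 2):
    r = requests.post(url, json=data, auth=auth, verify=False)
    print(attempt, r.text)  # expected: "1 0" then "2 1"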
Related
I have a 4-node K8s cluster set up via kubeadm on local VMs. I am using the following:
Kubernetes 1.24
Helm 3.10.0
kube-prometheus-stack Helm chart 41.7.4 (app version 0.60.1)
When I go into either Prometheus or Alertmanager, there are many alerts that are always firing. Another thing to note: Alertmanager's "cluster status" is reported as "disabled". I'm not sure what bearing, if any, that has on this. I have not added any alerts of my own - everything was presumably deployed by the Helm chart.
I do not understand what these alerts are firing for beyond what I can glean from their names. It does not seem good that they are firing: either there is something seriously wrong with the cluster, or something is misconfigured in the alerting that ships with the Helm chart. I'm leaning toward the latter, but I'll admit I really don't know.
Here is a listing of the firing alerts, along with label info:
etcdMembersDown
alertname=etcdMembersDown, job=kube-etcd, namespace=kube-system, pod=etcd-gagnon-m1, service=prometheus-stack-kube-prom-kube-etcd, severity=critical
etcdInsufficientMembers
alertname=etcdInsufficientMembers, endpoint=http-metrics, job=kube-etcd, namespace=kube-system, pod=etcd-gagnon-m1, service=prometheus-stack-kube-prom-kube-etcd, severity=critical
TargetDown
alertname=TargetDown, job=kube-scheduler, namespace=kube-system, service=prometheus-stack-kube-prom-kube-scheduler, severity=warning
alertname=TargetDown, job=kube-etcd, namespace=kube-system, service=prometheus-stack-kube-prom-kube-etcd, severity=warning
alertname=TargetDown, job=kube-proxy, namespace=kube-system, service=prometheus-stack-kube-prom-kube-proxy, severity=warning
alertname=TargetDown, job=kube-controller-manager, namespace=kube-system, service=prometheus-stack-kube-prom-kube-controller-manager, severity=warning
KubePodNotReady
alertname=KubePodNotReady, namespace=monitoring, pod=prometheus-stack-grafana-759774797c-r44sb, severity=warning
KubeDeploymentReplicasMismatch
alertname=KubeDeploymentReplicasMismatch, container=kube-state-metrics, deployment=prometheus-stack-grafana, endpoint=http, instance=192.168.42.19:8080, job=kube-state-metrics, namespace=monitoring, pod=prometheus-stack-kube-state-metrics-848f74474d-gp6pw, service=prometheus-stack-kube-state-metrics, severity=warning
KubeControllerManagerDown
alertname=KubeControllerManagerDown, severity=critical
KubeProxyDown
alertname=KubeProxyDown, severity=critical
KubeSchedulerDown
alertname=KubeSchedulerDown, severity=critical
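Since most of these are down/target alerts, it can help to ask Prometheus directly which scrape targets it currently considers down. A minimal sketch (it assumes Prometheus is reachable at PROM_URL, e.g. through the prometheus.<hidden> ingress below or a kubectl port-forward to port 9090):
import requests
# Minimal sketch: list scrape targets with up == 0 via the Prometheus HTTP API.
PROM_URL = "http://localhost:9090"  # placeholder
resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up == 0"})
for result in resp.json()["data"]["result"]:
    labels = result["metric"]
    print(labels.get("job"), labels.get("instance"))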
Here is my values.yaml:
defaultRules:
create: true
rules:
alertmanager: true
etcd: true
configReloaders: true
general: true
k8s: true
kubeApiserverAvailability: true
kubeApiserverBurnrate: true
kubeApiserverHistogram: true
kubeApiserverSlos: true
kubeControllerManager: true
kubelet: true
kubeProxy: true
kubePrometheusGeneral: true
kubePrometheusNodeRecording: true
kubernetesApps: true
kubernetesResources: true
kubernetesStorage: true
kubernetesSystem: true
kubeSchedulerAlerting: true
kubeSchedulerRecording: true
kubeStateMetrics: true
network: true
node: true
nodeExporterAlerting: true
nodeExporterRecording: true
prometheus: true
prometheusOperator: true
prometheus:
enabled: true
ingress:
enabled: true
ingressClassName: nginx
hosts:
- prometheus.<hidden>
paths:
- /
pathType: ImplementationSpecific
grafana:
enabled: true
ingress:
enabled: true
ingressClassName: nginx
hosts:
- grafana.<hidden>
path: /
persistence:
enabled: true
size: 10Gi
alertmanager:
enabled: true
ingress:
enabled: true
ingressClassName: nginx
hosts:
- alerts.<hidden>
paths:
- /
pathType: ImplementationSpecific
config:
global:
slack_api_url: '<hidden>'
route:
receiver: "slack-default"
group_by:
- alertname
- cluster
- service
group_wait: 30s
group_interval: 5m # 5m
repeat_interval: 2h # 4h
routes:
- receiver: "slack-warn-critical"
matchers:
- severity =~ "warning|critical"
continue: true
receivers:
- name: "null"
- name: "slack-default"
slack_configs:
- send_resolved: true # false
channel: "#alerts-test"
- name: "slack-warn-critical"
slack_configs:
- send_resolved: true # false
channel: "#alerts-test"
kubeControllerManager:
service:
enabled: true
ports:
http: 10257
targetPorts:
http: 10257
serviceMonitor:
https: true
insecureSkipVerify: "true"
kubeEtcd:
serviceMonitor:
scheme: https
servername: <do I need it - don't know what this should be>
cafile: <do I need it - don't know what this should be>
certFile: <do I need it - don't know what this should be>
keyFile: <do I need it - don't know what this should be>
kubeProxy:
serviceMonitor:
https: true
kubeScheduler:
service:
enabled: true
ports:
http: 10259
targetPorts:
http: 10259
serviceMonitor:
https: true
insecureSkipVerify: "true"
Is there something wrong with this configuration? Are there any Kubernetes objects that might be missing or misconfigured? It seems very odd that one could install this Helm chart and see this many "failures". Is there perhaps a major problem with my cluster? I would think that if there were really something wrong with etcd, the kube-scheduler, or kube-proxy, I would see problems everywhere, but I don't.
If there is any other information I can pull from the cluster or related artifacts that might help, let me know and I will include it.
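For completeness, the firing alerts can also be dumped straight from the Alertmanager v2 API instead of reading them off the UI. A minimal sketch (assumes Alertmanager is reachable at AM_URL, e.g. via the alerts.<hidden> ingress or a port-forward to 9093):
import requests
# Minimal sketch: print alertname and severity of all currently active alerts.
AM_URL = "http://localhost:9093"  # placeholder
alerts = requests.get(f"{AM_URL}/api/v2/alerts", params={"active": "true"}).json()
for alert in alerts:
    print(alert["labels"].get("alertname"), alert["labels"].get("severity"))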
I'm trying to create a room in ejabberd with the REST API.
I'm using the /create_room API to create the room, as described in the documentation.
When I hit the API, I receive the response 0, which means success (as mentioned in the documentation).
When I check the list of rooms in Pidgin or in the admin interface, I don't see any new rooms.
When I try to delete the room with the destroy_room API, it says the room is not available.
Data I send with create_room API:
{
"name": "room1",
"service": "conference.localhost",
"host": "localhost"
}
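As a quick check right after calling create_room, listing the rooms on the MUC service shows whether the room is still there. A minimal sketch (it assumes the localhost-only /api listener on port 5281 from the ejabberd.yml below, and placeholder admin credentials):
import requests
from requests.auth import HTTPBasicAuth
# Minimal sketch: list online MUC rooms immediately after create_room.
r = requests.post(
    "http://127.0.0.1:5281/api/muc_online_rooms",
    json={"service": "conference.localhost"},
    auth=HTTPBasicAuth("admin@localhost", "password"),  # placeholder password
)
print(r.json())  # an empty list means the room is already gone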
My ejabberd.yml file:
###
### ejabberd configuration file
###
### The parameters used in this configuration file are explained at
###
### https://docs.ejabberd.im/admin/configuration
###
### The configuration file is written in YAML.
### *******************************************************
### ******* !!! WARNING !!! *******
### ******* YAML IS INDENTATION SENSITIVE *******
### ******* MAKE SURE YOU INDENT SECTIONS CORRECTLY *******
### *******************************************************
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
###
hosts:
- localhost
loglevel: 4
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
log_rate_limit: 100
certfiles:
- /home/ejabberd/conf/server.pem
ca_file: "/home/ejabberd/conf/cacert.pem"
## When using let's encrypt to generate certificates
##certfiles:
## - /etc/letsencrypt/live/localhost/fullchain.pem
## - /etc/letsencrypt/live/localhost/privkey.pem
##
##ca_file: "/etc/letsencrypt/live/localhost/fullchain.pem"
listen:
-
port: 5222
ip: "::"
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: true
-
port: 5269
ip: "::"
module: ejabberd_s2s_in
max_stanza_size: 524288
-
port: 5443
ip: "::"
module: ejabberd_http
tls: false
request_handlers:
"/admin": ejabberd_web_admin
"/api": mod_http_api
"/bosh": mod_bosh
"/captcha": ejabberd_captcha
"/upload": mod_http_upload
"/ws": ejabberd_http_ws
"/oauth": ejabberd_oauth
-
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
"/admin": ejabberd_web_admin
-
port: 5281
module: ejabberd_http
ip: 127.0.0.1
request_handlers:
/api: mod_http_api
-
port: 1883
ip: "::"
module: mod_mqtt
backlog: 1000
##
## https://docs.ejabberd.im/admin/configuration/#stun-and-turn
## ejabberd_stun: Handles STUN Binding requests
##
##-
## port: 3478
## ip: "0.0.0.0"
## transport: udp
## module: ejabberd_stun
## use_turn: true
## turn_ip: "{{ IP }}"
## auth_type: user
## auth_realm: "example.com"
##-
## port: 3478
## ip: "0.0.0.0"
## module: ejabberd_stun
## use_turn: true
## turn_ip: "{{ IP }}"
## auth_type: user
## auth_realm: "example.com"
##-
## port: 5349
## ip: "0.0.0.0"
## module: ejabberd_stun
## certfile: "/home/ejabberd/conf/server.pem"
## tls: true
## use_turn: true
## turn_ip: "{{ IP }}"
## auth_type: user
## auth_realm: "example.com"
##
## https://docs.ejabberd.im/admin/configuration/#sip
## To handle SIP (VOIP) requests:
##
##-
## port: 5060
## ip: "0.0.0.0"
## transport: udp
## module: ejabberd_sip
##-
## port: 5060
## ip: "0.0.0.0"
## module: ejabberd_sip
##-
## port: 5061
## ip: "0.0.0.0"
## module: ejabberd_sip
## tls: true
s2s_use_starttls: optional
acl:
local:
user_regexp: ""
loopback:
ip:
- 127.0.0.0/8
- ::1/128
- ::FFFF:127.0.0.1/128
admin:
user:
- "admin#localhost"
apicommands:
user:
- "admin#localhost"
access_rules:
local:
allow: local
c2s:
deny: blocked
allow: all
announce:
allow: admin
configure:
allow: admin
muc_create:
allow: local
pubsub_createnode:
allow: local
trusted_network:
allow: loopback
api_permissions:
"API used from localhost allows all calls":
who:
ip: 127.0.0.1/8
what:
- "*"
- "!stop"
- "!start"
"console commands":
from:
- ejabberd_ctl
who: all
what: "*"
"admin access":
who:
access:
allow:
acl: loopback
acl: admin
oauth:
scope: "ejabberd:admin"
access:
allow:
acl: loopback
acl: admin
what:
- "*"
- "!stop"
- "!start"
"public commands":
who:
ip: 127.0.0.1/8
what:
- status
- connected_users_number
"some playing":
from:
- ejabberd_ctl
- mod_http_api
who:
acl: apicommands
what: "*"
shaper:
normal: 1000
fast: 50000
shaper_rules:
max_user_sessions: 10
max_user_offline_messages:
5000: admin
100: all
c2s_shaper:
none: admin
normal: all
s2s_shaper: fast
max_fsm_queue: 10000
acme:
contact: "mailto:example-admin#example.com"
ca_url: "https://acme-staging-v02.api.letsencrypt.org/directory"
modules:
mod_adhoc: {}
mod_admin_extra: {}
mod_announce:
access: announce
mod_avatar: {}
mod_blocking: {}
mod_bosh: {}
mod_caps: {}
mod_carboncopy: {}
mod_client_state: {}
mod_configure: {}
mod_disco: {}
mod_fail2ban: {}
mod_http_api: {}
mod_http_upload:
put_url: https://@HOST@:5443/upload
mod_last: {}
mod_mam:
## Mnesia is limited to 2GB, better to use an SQL backend
## For small servers SQLite is a good fit and is very easy
## to configure. Uncomment this when you have SQL configured:
## db_type: sql
assume_mam_usage: true
default: never
mod_mqtt: {}
mod_muc:
access:
- allow
access_admin:
- allow: admin
access_create: muc_create
access_persistent: muc_create
access_mam:
- allow
default_room_options:
allow_subscription: true # enable MucSub
mam: false
mod_muc_admin: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
mod_privacy: {}
mod_private: {}
mod_proxy65:
access: local
max_connections: 5
mod_pubsub:
access_createnode: pubsub_createnode
plugins:
- flat
- pep
force_node_config:
## Avoid buggy clients to make their bookmarks public
storage:bookmarks:
access_model: whitelist
mod_push: {}
mod_push_keepalive: {}
mod_register:
## Only accept registration requests from the "trusted"
## network (see access_rules section above).
## Think twice before enabling registration from any
## address. See the Jabber SPAM Manifesto for details:
## https://github.com/ge0rg/jabber-spam-fighting-manifesto
ip_access: trusted_network
mod_roster:
versioning: true
mod_sip: {}
mod_s2s_dialback: {}
mod_shared_roster: {}
mod_stream_mgmt:
resend_on_timeout: if_offline
mod_vcard: {}
mod_vcard_xupdate: {}
mod_version:
show_os: false
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
I have a configuration similar to yours; I didn't bother to replicate it exactly, because the problem, I imagine, is that you create the room with the default options, which means a temporary room: if it has no occupants, it is destroyed a few seconds later.
I use this script:
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
import requests
from requests.auth import HTTPBasicAuth
url = "http://localhost:5280/api/create_room"
data = {
"name": "room1",
"service": "conference.localhost",
"host": "localhost"
}
res = requests.post(url, json=data, auth=HTTPBasicAuth("admin@localhost", "asd"))
print(res)
Create the room and check that it exists:
$ python3 create-room.py
<Response [200]>
$ ejabberdctl muc_online_rooms conference.localhost
room1@conference.localhost
The ejabberd log file shows the API query, and 30 seconds later the room is destroyed due to inactivity:
2022-07-18 11:53:05.327862+02:00 [info] (<0.1317.0>) Accepted connection [::1]:43856 -> [::1]:5280
2022-07-18 11:53:05.328232+02:00 [info] API call create_room [{<<"name">>,<<"room1">>},
{<<"service">>,<<"conference.localhost">>},
{<<"host">>,<<"localhost">>}] from ::1:43856
2022-07-18 11:53:35.329246+02:00 [info] Destroyed MUC room room1@conference.localhost because it's temporary and empty
2022-07-18 11:53:35.330000+02:00 [info] Stopping MUC room room1@conference.localhost
If you check the list of online rooms now, there are none:
$ ejabberdctl muc_online_rooms conference.localhost
$
There are three solutions:
A) Since a newly created room is temporary by default, have some occupant join it before it gets destroyed (30 seconds).
B) Configure mod_muc in ejabberd.yml so that newly created rooms are persistent by default:
modules:
mod_muc:
default_room_options:
persistent: true
C) When creating a room, configure it to be persistent:
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
import requests
from requests.auth import HTTPBasicAuth
url = "http://localhost:5280/api/create_room_with_opts"
data = {
"name": "room1",
"service": "conference.localhost",
"host": "localhost",
"options": [{
"name": "persistent",
"value": "true",
}]
}
res = requests.post(url, json=data, auth=HTTPBasicAuth("admin@localhost", "asd"))
print(res)
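Whichever option you choose, you can verify it the same way as above. A minimal sketch with the same assumptions as the earlier scripts (port 5280, admin@localhost): wait past the 30-second cleanup and the persistent room should still be listed.
import time
import requests
from requests.auth import HTTPBasicAuth
# Minimal sketch: confirm the room survives with no occupants once persistent.
auth = HTTPBasicAuth("admin@localhost", "asd")
time.sleep(60)  # well past the 30-second cleanup of temporary rooms
r = requests.post(
    "http://localhost:5280/api/muc_online_rooms",
    json={"service": "conference.localhost"},
    auth=auth,
)
print(r.json())  # expected to still contain "room1@conference.localhost"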
I have a k8s cluster serving a gRPC service behind an Envoy proxy: all gRPC and web requests hit Envoy and are forwarded to the backend. The Envoy service is exposed through an NLB, and the NLB has an ACM certificate attached. Without the certificate on the NLB, requests reach the backend correctly and I get responses, but I need to serve on nlb-url:443; as soon as I attach the ACM cert to the NLB, I get no response at all. Why?
Or do I need another ingress in front of it to handle SSL and routing?
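One thing worth checking (a diagnostic sketch, not part of the original setup; the hostname is a placeholder) is what the NLB's TLS listener actually negotiates: gRPC needs HTTP/2, so if the TLS listener does not offer "h2" via ALPN, gRPC calls will fail even though plain web requests still work.
import socket
import ssl
# Minimal sketch: report the TLS version and ALPN protocol negotiated by the NLB.
host = "nlb-url.example.com"  # placeholder for the NLB DNS name
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])
with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # gRPC requires "h2"; "http/1.1" or None means HTTP/2 was not negotiated.
        print(tls.version(), tls.selected_alpn_protocol())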
envoy-svc.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:12345676789:certificate/ss304s07-3ss2-4s73-8744-bs2sss123460
service.beta.kubernetes.io/aws-load-balancer-type: nlb
creationTimestamp: "2021-06-11T02:50:24Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
name: envoy-service
namespace: default
spec:
externalTrafficPolicy: Cluster
ports:
- nodePort: 31156
port: 443
protocol: TCP
targetPort: 80
selector:
name: envoy
sessionAffinity: None
type: LoadBalancer
envoy-conf
admin:
access_log_path: /dev/stdout
address:
socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 80
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http
http2_protocol_options: {}
access_log:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
#
# You can also configure this extension with the qualified
# name envoy.access_loggers.http_grpc
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
- name: envoy.access_loggers.file
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
"#type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
# Console output
path: /dev/stdout
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*"
routes:
- match:
prefix: /
grpc:
route:
cluster: greeter_service
cors:
allow_origin_string_match:
- prefix: "*"
allow_methods: GET, PUT, DELETE, POST, OPTIONS
# custom-header-1 is just an example. the grpc-web
# repository was missing grpc-status-details-bin header
# which used in a richer error model.
# https://grpc.io/docs/guides/error/#richer-error-model
allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type
expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
max_age: "1728000"
http_filters:
- name: envoy.filters.http.grpc_web
# This line is optional, but adds clarity to the configuration.
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
- name: envoy.filters.http.cors
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
- name: envoy.filters.http.grpc_json_transcoder
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
proto_descriptor: "/etc/envoy-sync/sync.pb"
ignore_unknown_query_parameters: true
services:
- "com.tk.system.sync.Synchronizer"
print_options:
add_whitespace: true
always_print_primitive_fields: true
always_print_enums_as_ints: true
preserve_proto_field_names: true
- name: envoy.filters.http.router
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
clusters:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
- name: greeter_service
type: LOGICAL_DNS
connect_timeout: 0.25s
lb_policy: round_robin
load_assignment:
cluster_name: greeter_service
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: micro-deployment
port_value: 8081
http2_protocol_options: {} # Force HTTP/2
I'm trying to communicate from Envoy to Envoy using gRPC on Kubernetes (Amazon EKS).
I have Envoy running as a sidecar and I am using grpcurl to validate the request.
The request is delivered to the application container and there are no errors, but the console returns the following result:
server closed the stream without sending trailers
I don't know what is causing this problem; what could be the reason for this result?
I was able to confirm that the response came back fine when I hit the service directly, before putting Envoy in front of it.
This is my Envoy config:
admin:
access_log_path: /tmp/admin_access.log
address:
socket_address:
protocol: TCP
address: 127.0.0.1
port_value: 10000
static_resources:
listeners:
- name: listener_secure_grpc
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 8443
traffic_direction: INBOUND
filter_chains:
- filters:
- name: envoy.http_connection_manager
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: service_grpc
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: cluster_grpc
max_stream_duration:
grpc_timeout_header_max: 30s
tracing: {}
http_filters:
- name: envoy.filters.http.health_check
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck"
pass_through_mode: false
headers:
- name: ":path"
exact_match: "/healthz"
- name: envoy.filters.http.router
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext"
common_tls_context:
tls_certificates:
- certificate_chain:
filename: /etc/ssl/grpc/tls.crt
private_key:
filename: /etc/ssl/grpc/tls.key
- name: listener_stats
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 10001
traffic_direction: INBOUND
filter_chains:
- filters:
- name: envoy.http_connection_manager
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
codec_type: AUTO
stat_prefix: ingress_http
route_config:
virtual_hosts:
- name: backend
domains:
- "*"
routes:
- match:
prefix: /stats
route:
cluster: cluster_admin
http_filters:
- name: envoy.filters.http.router
- name: listener_healthcheck
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 10010
traffic_direction: INBOUND
filter_chains:
- filters:
- name: envoy.http_connection_manager
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
codec_type: AUTO
stat_prefix: ingress_http
route_config: {}
http_filters:
- name: envoy.filters.http.health_check
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck"
pass_through_mode: false
headers:
- name: ":path"
exact_match: "/healthz"
- name: envoy.filters.http.router
clusters:
- name: cluster_grpc
connect_timeout: 1s
type: STATIC
http2_protocol_options: {}
upstream_connection_options:
tcp_keepalive: {}
load_assignment:
cluster_name: cluster_grpc
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 1443
- name: cluster_admin
connect_timeout: 1s
type: STATIC
load_assignment:
cluster_name: cluster_grpc
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 10000
P.S. 2021.03.19
Here's what else I found out:
When I request through the ingress host, I get the failure above, but when I request the service directly, I get a normal response!
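Since listener_secure_grpc also carries the health_check filter, one way to separate TLS/routing problems from gRPC problems is to hit /healthz on that listener over plain HTTPS first. A minimal sketch (the pod address is a placeholder, and certificate verification is disabled because the sidecar serves its own cert):
import requests
# Minimal sketch: probe the health_check filter on the TLS listener (port 8443).
# A 200 here means TLS and HTTP routing work, which narrows the missing-trailers
# error down to the gRPC upstream itself.
POD_ADDR = "10.0.0.10"  # placeholder pod or service address
r = requests.get(f"https://{POD_ADDR}:8443/healthz", verify=False, timeout=5)
print(r.status_code)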
Start_orderer.sh file:
# Edit the *values.yaml file to be used with the helm chart and deploy the orderer through it
consensus_type=etcdraft
# Change the variables below to adjust the persistent volume configuration
persistence_status=true
persistent_volume_size=2Gi
while getopts "i:o:O:d:" c
do
case $c in
i) network_id=$OPTARG ;;
o) number=$OPTARG ;;
O) org_name=$OPTARG ;;
d) domain=$OPTARG ;;
esac
done
network_path=/etc/zeeve/fabric/${network_id}
source status.sh
cp ../yaml-files/orderer.yaml $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
sed -i "s/persistence_status/$persistence_status/; s/persistent_volume_size/$persistent_volume_size/; s/consensus_type/$consensus_type/; s/number/$number/g; s/org_name/${org_name}/; s/domain/$domain/; " $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
helm install orderer-${number}${org_name} --namespace blockchain-${org_name} -f $network_path/yaml-files/orderer-${number}${org_name}_values.yaml `pwd`/../helm-charts/hlf-ord
cmd_success $? orderer-${number}${org_name}
# Update the state of the deployed component, used for pod-level operations like start, stop, restart etc.
update_statusfile helm orderer_${number}${org_name} orderer-${number}${org_name}
update_statusfile persistence orderer_${number}${org_name} $persistence_status
Configtx.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
Organizations:
- &Orginvestor
Name: investor
ID: investorMSP
MSPDir: ./crypto-config/investor/msp
AnchorPeers:
- Host: peer1.investor.demointainabs.emulya.com
Port: 443
- &Orgtrustee
Name: trustee
ID: trusteeMSP
MSPDir: ./crypto-config/trustee/msp
AnchorPeers:
- Host: peer1.trustee.demointainabs.emulya.com
Port: 443
- &Orgwhlender
Name: whlender
ID: whlenderMSP
MSPDir: ./crypto-config/whlender/msp
AnchorPeers:
- Host: peer1.whlender.demointainabs.emulya.com
Port: 443
- &Orgservicer
Name: servicer
ID: servicerMSP
MSPDir: ./crypto-config/servicer/msp
AnchorPeers:
- Host: peer1.servicer.demointainabs.emulya.com
Port: 443
- &Orgissuer
Name: issuer
ID: issuerMSP
MSPDir: ./crypto-config/issuer/msp
AnchorPeers:
- Host: peer1.issuer.demointainabs.emulya.com
Port: 443
- &Orgoriginator
Name: originator
ID: originatorMSP
MSPDir: ./crypto-config/originator/msp
AnchorPeers:
- Host: peer1.originator.demointainabs.emulya.com
Port: 443
- &Orginvestor
Name: investor
ID: investorMSP
MSPDir: ./crypto-config/investor/msp
AnchorPeers:
- Host: peer1.investor.intainabs.emulya.com
Port: 443
- &Orgtrustee
Name: trustee
ID: trusteeMSP
MSPDir: ./crypto-config/trustee/msp
AnchorPeers:
- Host: peer1.trustee.intainabs.emulya.com
Port: 443
- &Orgwhlender
Name: whlender
ID: whlenderMSP
MSPDir: ./crypto-config/whlender/msp
AnchorPeers:
- Host: peer1.whlender.intainabs.emulya.com
Port: 443
- &Orgservicer
Name: servicer
ID: servicerMSP
MSPDir: ./crypto-config/servicer/msp
AnchorPeers:
- Host: peer1.servicer.intainabs.emulya.com
Port: 443
- &Orgissuer
Name: issuer
ID: issuerMSP
MSPDir: ./crypto-config/issuer/msp
AnchorPeers:
- Host: peer1.issuer.intainabs.emulya.com
Port: 443
- &Orgoriginator
Name: originator
ID: originatorMSP
MSPDir: ./crypto-config/originator/msp
AnchorPeers:
- Host: peer1.originator.intainabs.emulya.com
Port: 443
Orderer: &OrdererDefaults
OrdererType: etcdraft
Addresses:
- orderer1.originator.demointainabs.emulya.com:443
- orderer2.trustee.demointainabs.emulya.com:443
- orderer2.issuer.demointainabs.emulya.com:443
- orderer1.trustee.demointainabs.emulya.com:443
- orderer1.issuer.demointainabs.emulya.com:443
- orderer1.originator.intainabs.emulya.com:443
- orderer2.trustee.intainabs.emulya.com:443
- orderer2.issuer.intainabs.emulya.com:443
- orderer1.trustee.intainabs.emulya.com:443
- orderer1.issuer.intainabs.emulya.com:443
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 99 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- kafka-hlf.blockchain-kz.svc.cluster.local:9092
EtcdRaft:
Consenters:
- Host: orderer1.originator.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
- Host: orderer2.trustee.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
- Host: orderer2.issuer.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
- Host: orderer1.trustee.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
- Host: orderer1.issuer.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
- Host: orderer1.originator.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
- Host: orderer2.trustee.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
- Host: orderer2.issuer.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
- Host: orderer1.trustee.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
- Host: orderer1.issuer.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
Organizations:
Application: &ApplicationDefaults
Organizations:
Profiles:
BaseGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *Orgoriginator
- *Orgtrustee
- *Orgissuer
- *Orgoriginator
- *Orgtrustee
- *Orgissuer
Consortiums:
MyConsortium:
Organizations:
- *Orginvestor
- *Orgtrustee
- *Orgwhlender
- *Orgservicer
- *Orgissuer
- *Orgoriginator
- *Orginvestor
- *Orgtrustee
- *Orgwhlender
- *Orgservicer
- *Orgissuer
- *Orgoriginator
BaseChannel:
Consortium: MyConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Orgoriginator
- *Orgissuer
- *Orgservicer
- *Orgwhlender
- *Orgtrustee
- *Orginvestor
- *Orgoriginator
- *Orgissuer
- *Orgservicer
- *Orgwhlender
- *Orgtrustee
- *Orginvestor
I am currently setting up a Hyperledger Fabric network in Kubernetes. My network includes 6 organizations and 5 orderer nodes. The orderers use Raft consensus. I have done the following:
Setup ca and tlsca servers
Setup ingress controller
Generated crypto-materials for peers, orderer
Generated channel artifacts
Started peers and orderers
The next step is to create the channel on the orderer for each org and join the peers in each org to the channel. I am unable to create the channel. When requesting channel creation, I get the following error:
SERVICE UNAVAILABLE - No raft leader.
How can I fix this issue?
Can anyone please guide me on this. Thanks in advance.
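For what it's worth, Raft leader election only works when the orderers can reach one another on the consenter addresses from configtx.yaml, so a first sanity check is whether each address answers a TLS handshake. This is only a sketch: it does not validate that the presented certificates match the ClientTLSCert/ServerTLSCert entries.
import socket
import ssl
# Minimal sketch: confirm each consenter address is reachable over TLS on 443.
orderers = [
    "orderer1.originator.demointainabs.emulya.com",
    "orderer2.trustee.demointainabs.emulya.com",
    "orderer2.issuer.demointainabs.emulya.com",
    "orderer1.trustee.demointainabs.emulya.com",
    "orderer1.issuer.demointainabs.emulya.com",
]
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # only testing reachability, not the cert chain
for host in orderers:
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(host, "OK", tls.version())
    except OSError as exc:
        print(host, "FAILED", exc)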