Teslamate SSL_ERROR_RX_RECORD_TOO_LONG - docker-compose

I'd like to add some security to my TeslaMate setup and access it from the web. I created a domain name for it and forwarded ports 3000 and 4000 to my Synology NAS, where I'm running TeslaMate in Docker. For now I'm using the Synology's local IP address to simplify the connection, so currently VIRTUAL_HOST is set to the Synology's IP address.
I'm trying to use Traefik as a proxy, but when I go to https://192.168.xxx.xxx:4000/ (the Synology IP address) in my browser, I get this error:
SSL_ERROR_RX_RECORD_TOO_LONG
Here is my docker-compose file:
version: "3"

services:
  teslamate:
    image: teslamate/teslamate:latest
    restart: unless-stopped
    depends_on:
      - db
    environment:
      - ENCRYPTION_KEY=${TM_ENCRYPTION_KEY}
      - DATABASE_USER=${TM_DB_USER}
      - DATABASE_PASS=${TM_DB_PASS}
      - DATABASE_NAME=${TM_DB_NAME}
      - DATABASE_HOST=db
      - MQTT_HOST=mosquitto
      - VIRTUAL_HOST=${FQDN_TM}
      - CHECK_ORIGIN=true
      # if you're going to access the UI from another machine replace
      # "localhost" with the hostname / IP address of the docker host.
      - TZ=${TM_TZ} # (optional) replace to use local time in debug logs. See "Configuration".
    labels:
      - 'traefik.enable=true'
      - 'traefik.port=4000'
      - "traefik.http.middlewares.redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.auth.basicauth.usersfile=/auth/.htpasswd"
      - "traefik.http.routers.teslamate-insecure.rule=Host(`${FQDN_TM}`)"
      - "traefik.http.routers.teslamate-insecure.middlewares=redirect"
      - "traefik.http.routers.teslamate.rule=Host(`${FQDN_TM}`)"
      - "traefik.http.routers.teslamate.middlewares=auth"
      - "traefik.http.routers.teslamate.entrypoints=websecure"
      - "traefik.http.routers.teslamate.tls.certresolver=tmhttpchallenge"
    ports:
      - 4000:4000
    cap_drop:
      - all

  db:
    image: postgres:14
    #restart: unless-stopped
    environment:
      - POSTGRES_USER=${TM_DB_USER}
      - POSTGRES_PASSWORD=${TM_DB_PASS}
      - POSTGRES_DB=${TM_DB_NAME}
    volumes:
      - teslamate-db:/var/lib/postgresql/data

  grafana:
    image: teslamate/grafana:latest
    #restart: unless-stopped
    environment:
      - DATABASE_USER=${TM_DB_USER}
      - DATABASE_PASS=${TM_DB_PASS}
      - DATABASE_NAME=${TM_DB_NAME}
      - DATABASE_HOST=db
      - GRAFANA_PASSWD=${GRAFANA_PW}
      - GF_SECURITY_ADMIN_USER=${GRAFANA_USER}
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PW}
      - GF_AUTH_BASIC_ENABLED=true
      - GF_AUTH_ANONYMOUS_ENABLED=false
      - GF_SERVER_DOMAIN=${FQDN_TM}
      - GF_SERVER_ROOT_URL=https://${FQDN_GRAFANA}
      - GF_SERVER_SERVE_FROM_SUB_PATH=true
    ports:
      - 3000:3000
    volumes:
      - teslamate-grafana-data:/var/lib/grafana
    labels:
      - 'traefik.enable=true'
      - 'traefik.port=3000'
      - "traefik.http.middlewares.redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.grafana-insecure.rule=Host(`${FQDN_GRAFANA}`)"
      - "traefik.http.routers.grafana-insecure.middlewares=redirect"
      - "traefik.http.routers.grafana.rule=Host(`${FQDN_GRAFANA}`)"
      - "traefik.http.routers.grafana.entrypoints=websecure"
      - "traefik.http.routers.grafana.tls.certresolver=tmhttpchallenge"

  mosquitto:
    image: eclipse-mosquitto:1.6
    #restart: unless-stopped
    command: mosquitto -c /mosquitto-no-auth.conf
    ports:
      - 1883:1883
      - 9001:9001
    volumes:
      - mosquitto-conf:/mosquitto/config
      - mosquitto-data:/mosquitto/data

  proxy:
    image: traefik:v2.7
    #restart: unless-stopped
    command:
      - "--global.sendAnonymousUsage=false"
      - "--providers.docker"
      - "--providers.docker.exposedByDefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.tmhttpchallenge.acme.httpchallenge=true"
      - "--certificatesresolvers.tmhttpchallenge.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.tmhttpchallenge.acme.email=${LETSENCRYPT_EMAIL}"
      - "--certificatesresolvers.tmhttpchallenge.acme.storage=/etc/acme/acme.json"
    #ports:
      - 80:80
      - 443:443
    volumes:
      - ./.htpasswd:/auth/.htpasswd
      - ./acme/:/etc/acme/
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  teslamate-db:
  teslamate-grafana-data:
  mosquitto-conf:
  mosquitto-data:
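As for the error itself: SSL_ERROR_RX_RECORD_TOO_LONG generally means the browser sent a TLS handshake to a port that answered with plain HTTP. Port 4000 here is TeslaMate's own HTTP listener (published by ports: 4000:4000), so https://192.168.xxx.xxx:4000/ will always fail; TLS is only terminated by Traefik on 443, and only for requests whose Host matches ${FQDN_TM}. A second observation, not a confirmed fix: traefik.port is a Traefik v1 label, while this stack runs traefik:v2.7. In v2 the container port is declared on a service, roughly like this (label names follow the Traefik v2 convention; the router/service names reuse the ones already in this file):

```yaml
labels:
  - "traefik.enable=true"
  # v2 replacement for the old traefik.port label
  - "traefik.http.services.teslamate.loadbalancer.server.port=4000"
  - "traefik.http.routers.teslamate.rule=Host(`${FQDN_TM}`)"
  - "traefik.http.routers.teslamate.entrypoints=websecure"
  - "traefik.http.routers.teslamate.tls.certresolver=tmhttpchallenge"
  - "traefik.http.routers.teslamate.service=teslamate"
```

Note also that the teslamate-insecure router declares no entrypoints, so the HTTP-to-HTTPS redirect may never fire.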

Related

How to get Dapr Service to Service Invocation to work when running under docker-compose?

I am receiving the following error when trying to call a service using the Dapr SDK:
System.Net.Http.HttpRequestException: Connection refused (127.0.0.1:3500)
 ---> System.Net.Sockets.SocketException (111): Connection refused
Here is the docker-compose configuration of the service I am trying to call:
quest-service:
  image: ${DOCKER_REGISTRY-gamification}/quest-service:${TAG:-latest}
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80
    - SeqServerUrl=http://seq
  build:
    context: .
    dockerfile: Services/LW.Gamification.QuestService/Dockerfile
  ports:
    - "5110:80"
    - "50010:50001"

quest-service-dapr:
  image: "daprio/daprd:latest"
  command: ["./daprd",
    "-app-id", "Quest-Service",
    "-app-port", "80",
    "-components-path", "/Components",
    "-config", "/Configuration/config.yaml"
  ]
  volumes:
    - "./Dapr/Components/:/Components"
    - "./Dapr/Configuration/:/Configuration"
  depends_on:
    - quest-service
  network_mode: "service:quest-service"
And the settings for the caller:
player-service:
  image: ${DOCKER_REGISTRY-gamification}/player-service:${TAG:-latest}
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80
    - SeqServerUrl=http://seq
  build:
    context: .
    dockerfile: Services/LW.Gamificaiton.PlayerService/Dockerfile
  ports:
    - "5109:80"
    - "50009:50001"

player-service-dapr:
  image: "daprio/daprd:latest"
  command: ["./daprd",
    "-app-id", "Player-Service",
    "-app-port", "80",
    "-components-path", "/Components",
    "-config", "/Configuration/config.yaml"
  ]
  volumes:
    - "./Dapr/Components/:/Components"
    - "./Dapr/Configuration/:/Configuration"
  depends_on:
    - player-service
  network_mode: "service:player-service"
And here is the code that is failing to work:
// demo service to service call
var httpClient = DaprClient.CreateInvokeHttpClient("Quest-Service");
var requestUri = $"api/v1/Quest";
var result = await httpClient.GetFromJsonAsync<IEnumerable<string>>(requestUri);
Note: messaging is working fine. :-)
I am new to Dapr, so I must be doing something silly, maybe something to do with ports... I just don't know!
Following this question: Dapr Client Docker Compose Issue
I managed to get this partly working with the following docker-compose config:
services:
  placement:
    image: "daprio/dapr"
    command: ["./placement", "-port", "50000", "-log-level", "debug"]
    ports:
      - "50000:50000"

  quest-service:
    image: ${DOCKER_REGISTRY-gamification}/quest-service:${TAG:-latest}
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://0.0.0.0:80
      - SeqServerUrl=http://seq
      - DAPR_GRPC_PORT=50010
    build:
      context: .
      dockerfile: Services/LW.Gamification.QuestService/Dockerfile
    ports:
      - "5110:80"
      - "50010:50010"
    depends_on:
      - placement
      - rabbitmq
      - redis
      - seq
      - zipkin

  quest-service-dapr:
    image: "daprio/daprd:latest"
    command: ["./daprd",
      "-app-id", "Quest-Service",
      "-app-port", "80",
      "-placement-host-address", "placement:50000",
      "-dapr-grpc-port", "50010",
      "-components-path", "/Components",
      "-config", "/Configuration/config.yaml"
    ]
    volumes:
      - "./Dapr/Components/:/Components"
      - "./Dapr/Configuration/:/Configuration"
    depends_on:
      - quest-service
    network_mode: "service:quest-service"

  generatetraffic:
    image: ${DOCKER_REGISTRY-gamification}/generatetraffic:${TAG:-latest}
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://0.0.0.0:80
      - SeqServerUrl=http://seq
      - DAPR_GRPC_PORT=50017
    build:
      context: .
      dockerfile: Services/LW.Gamification.GenerateTraffic/Dockerfile
    ports:
      - "5117:80"
      - "50017:50017"
    depends_on:
      - placement
      - rabbitmq
      - redis
      - seq
      - zipkin

  generatetraffic-dapr:
    image: "daprio/daprd:latest"
    command: ["./daprd",
      "-app-id", "Generate-Traffic",
      "-app-port", "80",
      "-placement-host-address", "placement:50000",
      "-dapr-grpc-port", "50017",
      "-components-path", "/Components",
      "-config", "/Configuration/config.yaml"
    ]
    volumes:
      - "./Dapr/Components/:/Components"
      - "./Dapr/Configuration/:/Configuration"
    depends_on:
      - generatetraffic
    network_mode: "service:generatetraffic"
However, some of the documented APIs still don't work:
var httpClient = DaprClient.CreateInvokeHttpClient("Quest-Service");
var requestUri = $"api/v1/Quest";
var result = await httpClient.GetAsync(requestUri);
This still fails.
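A hedged observation on the original (non-placement) setup: Connection refused (127.0.0.1:3500) means nothing was listening on the sidecar's default HTTP port inside the shared network namespace. Since both sidecars use network_mode: "service:…", the SDK's default of 127.0.0.1:3500 only works if daprd actually serves HTTP there. A sketch that pins the ports explicitly on both sides (the -dapr-http-port flag and the DAPR_HTTP_PORT/DAPR_GRPC_PORT variables are standard Dapr settings; the values are illustrative):

```yaml
quest-service:
  environment:
    # tell the Dapr SDK in the app where its sidecar listens
    - DAPR_HTTP_PORT=3500
    - DAPR_GRPC_PORT=50010

quest-service-dapr:
  command: ["./daprd",
    "-app-id", "Quest-Service",
    "-app-port", "80",
    "-dapr-http-port", "3500",
    "-dapr-grpc-port", "50010",
    "-placement-host-address", "placement:50000",
    "-components-path", "/Components",
    "-config", "/Configuration/config.yaml"
  ]
  network_mode: "service:quest-service"
```

Also note that CreateInvokeHttpClient("Quest-Service") must match the sidecar's app-id exactly, including case.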

fabric 2.3.2: network.sh cannot create all the containers

[Image: containers stuck in the "creating" state]
I wrote a docker-compose.yaml to deploy 4 peers for org1 and 1 peer for org2, and it worked. But when I wrote a docker-compose-100peer.yaml file to start 100 peers for org1 and 1 peer for org2, it always gets stuck in the situation shown in the picture. It never starts more than 30 peers for org1, even if I wait a whole afternoon and night. The original yaml file (docker-compose-test-net.yaml) doesn't limit memory or CPU resources.
peer5.org1.example.com:
  container_name: peer5.org1.example.com
  image: hyperledger/fabric-peer:latest
  labels:
    service: hyperledger-fabric
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric_test
    - FABRIC_LOGGING_SPEC=INFO
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_PROFILE_ENABLED=false
    - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - CORE_PEER_ID=peer5.org1.example.com
    - CORE_PEER_ADDRESS=peer5.org1.example.com:6010
    - CORE_PEER_LISTENADDRESS=0.0.0.0:6010
    - CORE_PEER_CHAINCODEADDRESS=peer5.org1.example.com:6011
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:6011
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer5.org1.example.com:6010
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer5.org1.example.com:6010
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:16010
  volumes:
    - ${DOCKER_SOCK}/:/host/var/run/docker.sock
    - ../organizations/peerOrganizations/org1.example.com/peers/peer5.org1.example.com/msp:/etc/hyperledger/fabric/msp
    - ../organizations/peerOrganizations/org1.example.com/peers/peer5.org1.example.com/tls:/etc/hyperledger/fabric/tls
    - peer5.org1.example.com:/var/hyperledger/production
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: peer node start
  ports:
    - 6010:6010
    - 16010:16010
  networks:
    - test

peer6.org1.example.com:
  container_name: peer6.org1.example.com
  image: hyperledger/fabric-peer:latest
  labels:
    service: hyperledger-fabric
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric_test
    - FABRIC_LOGGING_SPEC=INFO
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_PROFILE_ENABLED=false
    - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - CORE_PEER_ID=peer6.org1.example.com
    - CORE_PEER_ADDRESS=peer6.org1.example.com:6012
    - CORE_PEER_LISTENADDRESS=0.0.0.0:6012
    - CORE_PEER_CHAINCODEADDRESS=peer6.org1.example.com:6013
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:6013
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer6.org1.example.com:6012
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer6.org1.example.com:6012
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:16012
  volumes:
    - ${DOCKER_SOCK}/:/host/var/run/docker.sock
    - ../organizations/peerOrganizations/org1.example.com/peers/peer6.org1.example.com/msp:/etc/hyperledger/fabric/msp
    - ../organizations/peerOrganizations/org1.example.com/peers/peer6.org1.example.com/tls:/etc/hyperledger/fabric/tls
    - peer6.org1.example.com:/var/hyperledger/production
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: peer node start
  ports:
    - 6012:6012
    - 16012:16012
  networks:
    - test

......

cli:
  container_name: cli
  image: hyperledger/fabric-tools:latest
  labels:
    service: hyperledger-fabric
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - FABRIC_LOGGING_SPEC=INFO
    #- FABRIC_LOGGING_SPEC=DEBUG
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: /bin/bash
  volumes:
    - ../organizations:/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations
    - ../scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
  depends_on:
    - peer0.org1.example.com
    - peer0.org2.example.com
    - peer1.org1.example.com
    - peer2.org1.example.com
    - peer3.org1.example.com
    - peer4.org1.example.com
    - peer5.org1.example.com
    - peer6.org1.example.com
    - peer7.org1.example.com
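One hedged avenue, not a confirmed diagnosis: Compose brings services up in parallel, and 100 fabric-peer containers can saturate the host's memory and CPU well before they all reach "running". If resources are the bottleneck, per-service caps at least make the failure mode visible; the keys below are Compose file v2.x syntax and the values are purely illustrative:

```yaml
peer5.org1.example.com:
  # illustrative caps - tune to the host; with 100 peers even
  # 512 MB each already adds up to ~50 GB of RAM
  mem_limit: 512m
  cpus: 0.5
```

Separately, docker-compose (v1) honors the COMPOSE_PARALLEL_LIMIT environment variable, which caps how many containers it starts at once and can keep a large stack from overwhelming the Docker daemon.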

mailcow + jwilder reverse proxy

I'm trying to set up my own mail server; Mailcow was recommended.
DNS provider:
Cloudflare, with
CNAME mail.example.com => example.com, proxied
Because it is proxied, I cannot use the normal ports mentioned in the docs, so I have to set up some forwarding...
Router:
FritzBox with port forwarding:
2052 => 25
2053 => 465
8080 => 587
2082 => 143
2083 => 993
2086 => 110
2087 => 995
8880 => 4190
Docker:
I use jwilder's reverse proxy and its LE companion, which work well with everything else I have hosted so far.
${DOCKERDIR}/docker-compose-js.yml
version: '3'

services:
  proxy:
    build: ./reverse_proxy
    container_name: proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ${DOCKERDIR}/reverse_proxy/certs:/etc/nginx/certs:ro
      - ${DOCKERDIR}/reverse_proxy/vhost.d:/etc/nginx/vhost.d
      - ${DOCKERDIR}/reverse_proxy/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      - PUID=33
      - PGID=33
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: ""
    networks:
      - proxy-tier
    depends_on:
      - le

  le:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: le
    volumes:
      - ${DOCKERDIR}/reverse_proxy/certs:/etc/nginx/certs:rw
      - ${DOCKERDIR}/reverse_proxy/vhost.d:/etc/nginx/vhost.d
      - ${DOCKERDIR}/reverse_proxy/html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - PUID=33
      - PGID=33
      - DEFAULT_EMAIL=*****
      - NGINX_PROXY_CONTAINER=proxy
    networks:
      - proxy-tier

networks:
  proxy-tier:
Then there is a (slightly) modified docker-compose file for mailcow; only the changes are shown:
${DOCKERDIR}/mailcow/docker-compose.yml
nginx-mailcow:
  ...
  # ports:
  #   - "${HTTPS_BIND:-0.0.0.0}:${HTTPS_PORT:-443}:${HTTPS_PORT:-443}"
  #   - "${HTTP_BIND:-0.0.0.0}:${HTTP_PORT:-80}:${HTTP_PORT:-80}"
  ...
There seems to be no way around removing those ports from the original docker-compose.yml, even though editing that file is not recommended.
All other changes went into:
${DOCKERDIR}/mailcow/docker-compose-override.yml
version: '2.1'

services:
  nginx-mailcow:
    networks:
      proxy-tier:
    environment:
      - VIRTUAL_HOST=${MAILCOW_HOSTNAME},${ADDITIONAL_SAN}
      - VIRTUAL_PORT=8080
      - VIRTUAL_PROTO=http
      - LETSENCRYPT_HOST=${MAILCOW_HOSTNAME},${ADDITIONAL_SAN}
    volumes:
      - ${DOCKERDIR}/reverse_proxy/certs/${MAILCOW_HOSTNAME}:/etc/ssl/mail/
      - ${DOCKERDIR}/reverse_proxy/certs/dhparam.pem:/etc/ssl/mail/dhparams.pem:ro
    ports:

  dovecot-mailcow:
    volumes:
      - ${DOCKERDIR}/reverse_proxy/certs/${MAILCOW_HOSTNAME}:/etc/ssl/mail/
      - ${DOCKERDIR}/reverse_proxy/certs/dhparam.pem:/etc/ssl/mail/dhparams.pem:ro

  postfix-mailcow:
    volumes:
      - ${DOCKERDIR}/reverse_proxy/certs/${MAILCOW_HOSTNAME}:/etc/ssl/mail/
      - ${DOCKERDIR}/reverse_proxy/certs/dhparam.pem:/etc/ssl/mail/dhparams.pem:ro

networks:
  proxy-tier:
And finally, the changes to mailcow.conf:
${DOCKERDIR}/mailcow/mailcow.conf
MAILCOW_HOSTNAME=mail.example.com
HTTP_PORT=8080
#HTTP_BIND=0.0.0.0
HTTP_BIND=proxy
HTTPS_PORT=8443
#HTTPS_BIND=0.0.0.0
HTTPS_BIND=proxy
SKIP_LETS_ENCRYPT=y
When I try to connect to mail.example.com I get Error 526: Invalid SSL certificate.
Could someone please show me where my config is wrong and how to change it so I get mailcow working?
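One detail stands out in the config above (an observation, not a verified fix): HTTP_BIND/HTTPS_BIND expect a bind address, and "proxy" is a container name rather than an IP, while the ports are commented out entirely, so nothing on the proxy network can actually reach mailcow's nginx. A hedged sketch of an override that instead exposes the port only inside the proxy network (the keys mirror the override file already shown; the expose value assumes HTTP_PORT=8080 from mailcow.conf):

```yaml
version: '2.1'
services:
  nginx-mailcow:
    networks:
      proxy-tier:
    # make port 8080 reachable by the jwilder proxy
    # without publishing it on the host
    expose:
      - "8080"
    environment:
      - VIRTUAL_HOST=${MAILCOW_HOSTNAME},${ADDITIONAL_SAN}
      - VIRTUAL_PORT=8080
      - VIRTUAL_PROTO=http
networks:
  proxy-tier:
```

Error 526 itself is raised on the Cloudflare side: with the proxy enabled and SSL mode "Full (strict)", Cloudflare rejects an origin certificate it cannot validate, so the origin certificate for mail.example.com must be valid (or the SSL mode relaxed) regardless of the Docker wiring.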

Where to include core.yaml in Hyperledger Fabric?

I am working on Hyperledger Fabric and trying to retrieve historical transaction records from the network. I found that core.yaml has a config option to enable the ledger history database, but I can't find where to include core.yaml in the application source repository.
I found a few clues suggesting adding it to docker-compose.yaml as
CORE_VM_ENDPOINT=core.yaml
So, is that the correct way to add core.yaml to the docker-compose.yaml file?
docker-compose.yaml
version: '2'

services:
  ca.org1.example.com:
    image: ${FABRIC_DOCKER_REGISTRY}${FABRIC_CA_FIXTURE_IMAGE}:${ARCH}${ARCH_SEP}${FABRIC_CA_FIXTURE_TAG}
    hostname: ca.org1.example.com
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.org1.example.com
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/0427fbe1849b3e146f05201e1d8c5e570337faaaa19ed37deda69bb7c88c71ef_sk
      - FABRIC_CA_SERVER_CFG_AFFILIATIONS_ALLOWREMOVE=true
      - FABRIC_CA_SERVER_CFG_IDENTITIES_ALLOWREMOVE=true
      - FABRIC_CA_SERVER_PORT=7054
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/0427fbe1849b3e146f05201e1d8c5e570337faaaa19ed37deda69bb7c88c71ef_sk
    ports:
      - 7054:7054
    expose:
      - 7054
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/0427fbe1849b3e146f05201e1d8c5e570337faaaa19ed37deda69bb7c88c71ef_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  couchdb.peer0.org1.example.com:
    image: ${FABRIC_DOCKER_REGISTRY}${FABRIC_COUCHDB_FIXTURE_IMAGE}:${ARCH}${ARCH_SEP}${FABRIC_COUCHDB_FIXTURE_TAG}
    hostname: couchdb.peer0.org1.example.com
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=adminpw
    ports:
      - 5984:5984
    expose:
      - 5984

  peer0.org1.example.com:
    image: ${FABRIC_DOCKER_REGISTRY}${FABRIC_PEER_FIXTURE_IMAGE}:${ARCH}${ARCH_SEP}${FABRIC_PEER_FIXTURE_TAG}
    hostname: peer0.org1.example.com
    environment:
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_KEY_FILE=/var/hyperledger/tls/server.key
      - CORE_PEER_TLS_CERT_FILE=/var/hyperledger/tls/server.crt
      - CORE_PEER_TLS_ROOTCERT_FILE=/var/hyperledger/tls/ca.crt
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_ADDRESSAUTODETECT=true
      - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp
      - CORE_LOGGING_PEER=info
      - CORE_LOGGING_CAUTHDSL=warning
      - CORE_LOGGING_GOSSIP=warning
      - CORE_LOGGING_LEDGER=info
      - CORE_LOGGING_MSP=warning
      - CORE_LOGGING_POLICIES=warning
      - CORE_LOGGING_GRPC=error
      - CORE_CHAINCODE_LOGGING_SHIM=info
      - CORE_CHAINCODE_LOGGING_LEVEL=info
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_CHAINCODE_BUILDER
      - CORE_CHAINCODE_GOLANG_RUNTIME
      - CORE_CHAINCODE_EXECUTETIMEOUT=120s
      - CORE_PEER_NETWORKID=multiorgledger
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=adminpw
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_VM_DOCKER_ATTACHSTDOUT=true
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb.peer0.org1.example.com:5984
      - CORE_PEER_TLS_SERVERHOSTOVERRIDE=peer0.org1.example.com
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:8051
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/var/hyperledger/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/var/hyperledger/tls
      - core.yaml:/var/hyperledger/config
    ports:
      - 7051:7051
      - 7053:7051

  couchdb.peer1.org1.example.com:
    image: ${FABRIC_DOCKER_REGISTRY}${FABRIC_COUCHDB_FIXTURE_IMAGE}:${ARCH}${ARCH_SEP}${FABRIC_COUCHDB_FIXTURE_TAG}
    hostname: couchdb.peer1.org1.example.com
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=adminpw
    ports:
      - 6984:5984

  peer1.org1.example.com:
    image: ${FABRIC_DOCKER_REGISTRY}${FABRIC_PEER_FIXTURE_IMAGE}:${ARCH}${ARCH_SEP}${FABRIC_PEER_FIXTURE_TAG}
    hostname: peer1.org1.example.com
    environment:
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_KEY_FILE=/var/hyperledger/tls/server.key
      - CORE_PEER_TLS_CERT_FILE=/var/hyperledger/tls/server.crt
      - CORE_PEER_TLS_ROOTCERT_FILE=/var/hyperledger/tls/ca.crt
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_ADDRESSAUTODETECT=true
      - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp
      - CORE_LOGGING_PEER=info
      - CORE_LOGGING_CAUTHDSL=warning
      - CORE_LOGGING_GOSSIP=warning
      - CORE_LOGGING_LEDGER=info
      - CORE_LOGGING_MSP=warning
      - CORE_LOGGING_POLICIES=warning
      - CORE_LOGGING_GRPC=error
      - CORE_CHAINCODE_LOGGING_SHIM=info
      - CORE_CHAINCODE_LOGGING_LEVEL=info
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_CHAINCODE_BUILDER
      - CORE_CHAINCODE_GOLANG_RUNTIME
      - CORE_CHAINCODE_EXECUTETIMEOUT=120s
      - CORE_PEER_NETWORKID=multiorgledger
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=adminpw
      - CORE_PEER_ID=peer1.org1.example.com
      - CORE_VM_DOCKER_ATTACHSTDOUT=true
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:7051
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb.peer1.org1.example.com:5984
      - CORE_PEER_TLS_SERVERHOSTOVERRIDE=peer1.org1.example.com
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/var/hyperledger/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/var/hyperledger/tls
    ports:
      - 8051:7051
      - 8053:7051
Please suggest a solution.
No, it is not. CORE_VM_ENDPOINT points the peer at the Docker daemon so that chaincode containers are started on the same bridge network as the peers; it has nothing to do with core.yaml.
Instead, you can mount the folder containing the core.yaml file into the peer container. If you are using fabric-samples, go to the path below:
fabric-samples/first-network/base/
Add the line below to the volumes section of peer-base.yaml if you want history enabled on all peers, or add it to the volumes of selected peers in docker-compose-base.yaml:
- "Path/to/the/folder/where/core.yaml/exists":/var/hyperledger/config
If any doubts remain, do ask.
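To make the mount concrete, a hedged sketch of what the peer-base.yaml service could look like after the change (FABRIC_CFG_PATH is the standard Fabric variable telling the peer where to look for core.yaml; ./config/ is a placeholder host path):

```yaml
peer-base:
  environment:
    # point the peer at the mounted config directory
    - FABRIC_CFG_PATH=/var/hyperledger/config
  volumes:
    # host folder containing the edited core.yaml
    - ./config/:/var/hyperledger/config
```

Without FABRIC_CFG_PATH the peer keeps reading the core.yaml baked into the image, so mounting the folder alone may not be enough.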

Single Sign on Keyrock-Grafana doesn't work

I'm trying to use Keyrock to offer single sign-on across different platforms; specifically, I want to offer that service in Grafana. I've made the configuration changes on the Grafana side, and my docker-compose looks like this:
version: "3.1"

services:
  grafana:
    image: grafana/grafana:5.1.0
    ports:
      - 3000:3000
    networks:
      default:
        ipv4_address: 172.18.1.4
    environment:
      - GF_AUTH_GENERIC_OAUTH_CLIENT_ID=90be8de5-69dc-4b9a-9cc3-962cca534410
      - GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=9e98964b-5043-4086-9657-51f1d8c11fe0
      - GF_AUTH_GENERIC_OAUTH_ENABLED=true
      - GF_AUTH_GENERIC_OAUTH_AUTH_URL=http://172.18.1.5:3005/oauth2/authorize
      - GF_AUTH_GENERIC_OAUTH_TOKEN_URL=http://172.18.1.5:3005/oauth2/token
      - GF_AUTH_GENERIC_OAUTH_API_URL=http://172.18.1.5:3005/v1/users
      - GF_AUTH_GENERIC_OAUTH_ALLOW_SIGN_UP=true
      - GF_SERVER_DOMAIN=172.18.1.4
      - GF_SERVER_ROOT_URL=http://172.18.1.4:3000

  keyrock:
    image: fiware/idm:7.5.1
    container_name: fiware-keyrock
    hostname: keyrock
    networks:
      default:
        ipv4_address: 172.18.1.5
    depends_on:
      - mysql-db
    ports:
      - "3005:3005"
      - "3443:3443"
    environment:
      - DEBUG=idm:*
      - DATABASE_HOST=mysql-db
      - IDM_DB_PASS_FILE=/run/secrets/my_secret_data
      - IDM_DB_USER=root
      - IDM_HOST=http://localhost:3005
      - IDM_PORT=3005
      - IDM_HTTPS_ENABLED=false
      - IDM_HTTPS_PORT=3443
      - IDM_ADMIN_USER=admin
      - IDM_ADMIN_EMAIL=admin@test.com
      - IDM_ADMIN_PASS=test
    secrets:
      - my_secret_data
    healthcheck:
      test: curl --fail -s http://localhost:3005/version || exit 1

  mysql-db:
    restart: always
    image: mysql:5.7
    hostname: mysql-db
    container_name: db-mysql
    expose:
      - "3306"
    ports:
      - "3306:3306"
    networks:
      default:
        ipv4_address: 172.18.1.6
    environment:
      - "MYSQL_ROOT_PASSWORD_FILE=/run/secrets/my_secret_data"
      - "MYSQL_ROOT_HOST=172.18.1.5"
    volumes:
      - mysql-db-sso:/var/lib/mysql
      - ./mysql-data:/docker-entrypoint-initdb.d/:ro
    secrets:
      - my_secret_data

networks:
  default:
    ipam:
      config:
        - subnet: 172.18.1.0/24

volumes:
  mysql-db-sso:

secrets:
  my_secret_data:
    file: ./secrets.txt
I have the Grafana application registered in Keyrock with http://172.18.1.4:3000/login as the callback. When I try to sign in to Grafana through OAuth, it redirects me to the Keyrock page to sign in, but after entering the credentials it returns "invalid client_id", even though it is the same client_id Keyrock returns when I fetch the application information.
Am I missing some configuration, or should it be done another way?
Here is the working configuration for Keyrock 7.5.1 and Grafana 6.0.0
Grafana:
[auth.generic_oauth]
enabled = true
allow_sign_up = true
client_id = ${CLIENT_ID}
client_secret = ${CLIENT_SECRET}
scopes = permanent
auth_url = ${KEYROCK_URL}/oauth2/authorize
token_url = ${KEYROCK_URL}/oauth2/token
api_url = ${KEYROCK_URL}/user
App in Keyrock:
url - ${GRAFANA_ROOT_URL}
callback_url - ${GRAFANA_ROOT_URL}/login/generic_oauth
Token types - Permanent
So you need to fix the env variable
GF_AUTH_GENERIC_OAUTH_API_URL
to
http://172.18.1.5:3005/user
change the callback URL from
http://172.18.1.4:3000/login
to
http://172.18.1.4:3000/login/generic_oauth
and add the OAuth2 scopes.
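Applied to the compose file from the question, those three fixes could look like this (a sketch; the client id/secret and addresses are the ones already used above, and GF_AUTH_GENERIC_OAUTH_SCOPES is Grafana's env-variable form of the scopes setting):

```yaml
grafana:
  environment:
    - GF_AUTH_GENERIC_OAUTH_ENABLED=true
    - GF_AUTH_GENERIC_OAUTH_ALLOW_SIGN_UP=true
    - GF_AUTH_GENERIC_OAUTH_CLIENT_ID=90be8de5-69dc-4b9a-9cc3-962cca534410
    - GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=9e98964b-5043-4086-9657-51f1d8c11fe0
    # Keyrock issues permanent tokens; request that scope
    - GF_AUTH_GENERIC_OAUTH_SCOPES=permanent
    - GF_AUTH_GENERIC_OAUTH_AUTH_URL=http://172.18.1.5:3005/oauth2/authorize
    - GF_AUTH_GENERIC_OAUTH_TOKEN_URL=http://172.18.1.5:3005/oauth2/token
    # /user, not /v1/users
    - GF_AUTH_GENERIC_OAUTH_API_URL=http://172.18.1.5:3005/user
```

In Keyrock, the registered callback must then be http://172.18.1.4:3000/login/generic_oauth.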