I am trying to create an application channel. I launch CLI containers for the peers with this docker-compose file, for example:
version: '2'

networks:
  fabric-ca:

services:
  cli-org1:
    container_name: cli-org1
    image: hyperledger/fabric-tools:2.4
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=DEBUG
      - CORE_PEER_ID=cli-org1
      - CORE_PEER_ADDRESS=peer1-org1:7051
      - CORE_PEER_LOCALMSPID=org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_ROOTCERT_FILE=/tmp/hyperledger/org1/peer1/tls-msp/tlscacerts/tls-0-0-0-0-7052.pem
      - CORE_PEER_MSPCONFIGPATH=/tmp/hyperledger/org1/peer1/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/org1
    command: sh
    volumes:
      - ./peer1:/tmp/hyperledger/org1/peer1
      - ./peer1/assets/chaincode:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode
      - ./admin:/tmp/hyperledger/org1/admin
    networks:
      - fabric-ca
Then I exec into the container and run:
export CORE_PEER_MSPCONFIGPATH=/tmp/hyperledger/org1/admin/msp
peer channel create -c mychannel \
  -f /tmp/hyperledger/org1/peer1/assets/channel.tx \
  -o orderer1-org0:7050 \
  --outputBlock /tmp/hyperledger/org1/peer1/assets/mychannel.block \
  --tls true \
  --cafile /tmp/hyperledger/org1/peer1/tls-msp/tlscacerts/tls-0-0-0-0-7052.pem
I receive this error:
InitCmd -> Cannot run peer because error when setting up MSP of type bccsp from directory /tmp/hyperledger/org1/admin/msp: administrators must be declared when no admin ou classification is set
I didn't set any admin OU classification. Please help me solve this problem.
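For context: this error appears to mean that the admin MSP has neither an admincerts folder nor NodeOU classification enabled, so Fabric has no way to recognize an administrator. A minimal sketch of a config.yaml that would enable NodeOUs, placed in the MSP directory (/tmp/hyperledger/org1/admin/msp here); the cacerts/ca-cert.pem file name is an assumption and must match the certificate actually present under msp/cacerts:

NodeOUs:
  Enable: true
  ClientOUIdentifier:
    Certificate: cacerts/ca-cert.pem
    OrganizationalUnitIdentifier: client
  PeerOUIdentifier:
    Certificate: cacerts/ca-cert.pem
    OrganizationalUnitIdentifier: peer
  AdminOUIdentifier:
    Certificate: cacerts/ca-cert.pem
    OrganizationalUnitIdentifier: admin
  OrdererOUIdentifier:
    Certificate: cacerts/ca-cert.pem
    OrganizationalUnitIdentifier: orderer

The alternative, without NodeOUs, would be an admincerts directory inside the MSP containing the admin's certificate.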
I am trying to connect Parse Server, which runs as a Docker container, to GraphQL Mesh via the .meshrc.yaml file, following the GraphQL Mesh documentation. But I am not sure how to send the Master Key and Application ID in order to make the proper connection.
What I've tried already is:
schemaHeaders
operationHeaders
Neither of them worked. When I run the graphql_mesh service from my docker-compose I get the following error:
Failed to generate the schema Error: Failed to fetch introspection from http://localhost:1337/graphql: Error: connect ECONNREFUSED 127.0.0.1:1337 2023-02-10T13:38:52.347296144Z at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
Below are my Docker files and my .meshrc.yaml.
docker-compose.yml file:
version: '3.9'

services:
  database:
    image: mongo:6.0.4
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    volumes:
      - ${HOME}/_DOCKER_DATA_/database:/data/db
    networks:
      - my_network

  server:
    restart: always
    image: parseplatform/parse-server:5.4.1
    ports:
      - 1337:1337
    environment:
      - PARSE_SERVER_APPLICATION_ID=COOK_APP
      - PARSE_SERVER_APPLICATION_NAME=COOK_NAME
      - PARSE_SERVER_MASTER_KEY=MASTER_KEY_1
      - PARSE_SERVER_DATABASE_URI=mongodb://admin:admin@mongo/parse_server?authSource=admin
      - PARSE_SERVER_URL=http://10.0.2.2:1337/parse
      - PARSE_SERVER_MOUNT_GRAPHQL=true
    links:
      - database:mongo
    volumes:
      - ${HOME}/_DOCKER_DATA_/server:/data/server
    networks:
      - my_network

  dashboard:
    image: parseplatform/parse-dashboard:5.0.0
    ports:
      - "4040:4040"
    depends_on:
      - server
    environment:
      - PARSE_DASHBOARD_APP_ID=COOK_APP
      - PARSE_DASHBOARD_MASTER_KEY=MASTER_KEY_1
      - PARSE_DASHBOARD_USER_ID=admin
      - PARSE_DASHBOARD_USER_PASSWORD=admin
      - PARSE_DASHBOARD_ALLOW_INSECURE_HTTP=true
      - PARSE_DASHBOARD_SERVER_URL=http://localhost:1337/parse
      - PARSE_DASHBOARD_GRAPHQL_SERVER_URL=http://localhost:1337/graphql
    volumes:
      - ${HOME}/_DOCKER_DATA_/dashboard:/data/dashboard
    networks:
      - my_network

  graphql_mesh:
    build:
      context: .
      dockerfile: Dockerfile_graphql_mesh
    volumes:
      - ./work/.meshrc.yaml:/work/.meshrc.yaml
    ports:
      - "4000:4000"
    stdin_open: true
    tty: true
    networks:
      - my_network

networks:
  my_network:
    driver: bridge
The Dockerfile_graphql_mesh:
FROM node:19.6.0-alpine3.17

COPY work /work
WORKDIR /work

RUN yarn add @graphql-mesh/cli
RUN yarn add graphql
RUN yarn add @graphql-mesh/graphql
RUN yarn add @graphql-mesh/runtime
RUN yarn add @envelop/auth0

CMD yarn run mesh dev
.meshrc.yaml:
sources:
  - name: ParseServer_3
    handler:
      graphql:
        endpoint: http://localhost:1337/graphql
        schemaHeaders:
          X-Parse-Application-Id: 'COOK_APP'
          X-Parse-Master-Key: 'MASTER_KEY_1'

serve:
  playground: true
I am trying to get the proper connection and the generation of schema.graphql via GraphQL Mesh.
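A note on the ECONNREFUSED error above: inside the graphql_mesh container, localhost refers to that container itself, not to the Parse Server container. A likely fix (a sketch, assuming both services stay on my_network as in the compose file above) is to address Parse Server by its compose service name in .meshrc.yaml:

sources:
  - name: ParseServer_3
    handler:
      graphql:
        # 'server' is the compose service name; containers on the same
        # network resolve it via Docker's embedded DNS
        endpoint: http://server:1337/graphql
        schemaHeaders:
          X-Parse-Application-Id: 'COOK_APP'
          X-Parse-Master-Key: 'MASTER_KEY_1'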
I am trying to run Keycloak 18 with Postgres 10.21.
Here is my docker-compose file:
version: "3.5"
services:
keycloaksvc:
image: quay.io/keycloak/keycloak:18.0
user: '1000:1000'
container_name: "testkc"
environment:
- DB_VENDOR=postgres
- DB_ADDR=postgressvc
- DB_DATABASE=keycloak
- DB_PORT=5432
- DB_SCHEMA=public
- DB_USER=KcUser
- DB_PASSWORD=KcPass
- KC_HOSTNAME=localhost
- ROOT_LOGLEVEL=DEBUG
- PROXY_ADDRESS_FORWARDING=true
- REDIRECT_SOCKET=proxy-https
- KEYCLOAK_LOGLEVEL=DEBUG
- KEYCLOAK_ADMIN=admin
- KEYCLOAK_ADMIN_PASSWORD=testing
volumes:
- ./ssldir:/etc/x509/https
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"
- "/etc/passwd:/etc/passwd:ro"
- ./kcthemes:/opt/keycloak/themes
entrypoint: /opt/keycloak/bin/kc.sh start --auto-build --hostname-strict-https=false --http-relative-path=/auth --features=token-exchange --https-certificate-file=/etc/x509/https/tls.crt --https-certificate-key-file=/etc/x509/https/tls.key
network_mode: "host"
depends_on:
- postgressvc
postgressvc:
image: postgres:10.21-alpine
user: '1000:1000'
container_name: "kc_postgres"
environment:
- POSTGRES_DB=keycloak
- POSTGRES_USER=KcUser
- POSTGRES_PASSWORD=KcPass
volumes:
- ./pgdta:/var/lib/postgresql/data
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"
- "/etc/passwd:/etc/passwd:ro"
network_mode: "host"
It runs fine and I can get to the admin console at https://localhost:8443/auth/admin.
I can also add new realms and users. However, I do not see any data in Postgres, and if I make a change in the docker-compose file and restart, all the realms and users are lost.
The exact same Postgres setup works fine with image: jboss/keycloak:16.1.1.
What setup am I missing for Keycloak 18?
I was also facing the same issue with Keycloak v19.0.0: it was storing data in memory.
But with the configuration below it is able to store data in Postgres:
keycloak:
  container_name: keycloak
  environment:
    KC_DB: postgres
    KC_DB_URL: jdbc:postgresql://localhost:5432/keycloak
    KC_DB_USERNAME: postgres
    KC_DB_PASSWORD: user
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: admin
    KC_HOSTNAME_STRICT: "false"
    KC_PROXY: edge
  ports:
    - 8080:8080
  image: quay.io/keycloak/keycloak:19.0.0
  network_mode: host
  depends_on:
    - postgres
  command: start-dev --auto-build
Keycloak from version 17 has major changes (it is based on Quarkus) and the configuration has changed as well. So don't use a config that works with Keycloak 16; check the current Keycloak docs instead, e.g. https://www.keycloak.org/server/containers
You will find that the DB env variables are now:
KC_DB_URL, KC_DB_USERNAME, KC_DB_PASSWORD, ...
Other env variables have changed too, so it is not only about the DB env variables.
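Applied to the compose file from the question, a sketch of what the translated environment block might look like (untested; the jdbc URL assumes network_mode: host, so Postgres is reachable on localhost:5432):

environment:
  - KC_DB=postgres
  # replaces DB_VENDOR/DB_ADDR/DB_DATABASE/DB_PORT from the old image
  - KC_DB_URL=jdbc:postgresql://localhost:5432/keycloak
  - KC_DB_USERNAME=KcUser
  - KC_DB_PASSWORD=KcPass
  - KC_HOSTNAME=localhost
  - KEYCLOAK_ADMIN=admin
  - KEYCLOAK_ADMIN_PASSWORD=testing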
Docker-compose is not running and I don't know why. There's this error:
chown: changing ownership of '/var/lib/postgresql/data': Operation not permitted
At the suggestion of a member on the Docker community Slack channel I installed Docker via Homebrew, but that hasn't solved the problem. Another Stack Overflow post suggested exec'ing into the container and changing the permissions, but that doesn't make sense to me - /var/lib/postgresql/data is created on startup.
Here is the docker-compose file:
version: "3.9"
services:
db:
restart: always
image: postgres
volumes:
- ./docker-entrypoint-initdb.d/init.sql:/docker-entrypoint-initdb.d/init.sql
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_NAME=dev-postgres
- POSTGRES_USER=pixel
- POSTGRES_DATABASE=lightchan
- POSTGRES_PASSWORD=stardust
web:
build: .
restart: always
volumes:
- .:/code
command: sh -c "./waitfor.sh db:5432 -- python3 manage.py runserver"
ports:
- "8001:8001"
environment:
- POSTGRES_NAME=dev-postgres
- POSTGRES_USER=pixel
- POSTGRES_DATABASE=lightchan
- POSTGRES_PASSWORD=stardust
- POSTGRES_HOST=db
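One common workaround (a sketch, assuming the chown failure comes from the host-side ownership of the ./data/db bind mount) is to let Docker manage the data directory as a named volume instead:

services:
  db:
    restart: always
    image: postgres
    volumes:
      - ./docker-entrypoint-initdb.d/init.sql:/docker-entrypoint-initdb.d/init.sql
      # named volume instead of the ./data/db bind mount
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=pixel
      - POSTGRES_PASSWORD=stardust

volumes:
  pgdata: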
I am trying to set up a Dockerized Airflow instance, but whatever I do (so far) it keeps trying to access some sqlite3 database, and I do not know where that instruction comes from. I point to the Postgres instance everywhere (deemed) possible through AIRFLOW__CORE__SQL_ALCHEMY_CONN, and even AIRFLOW_CONN_METADATA_DB.
A typical error message when starting up is:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: job
Full docker-compose.yml:
version: '3'

x-airflow-common:
  &airflow-common
  image: apache/airflow:2.0.0
  environment:
    - AIRFLOW__CORE__EXECUTOR=LocalExecutor
    - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW_CONN_METADATA_DB=postgres+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW__CORE__FERNET_KEY=FB0o_zt4e3Ziq3LdUUO7F2Z95cvFFx16hU8jTeR1ASM=
    - AIRFLOW__CORE__LOAD_EXAMPLES=True
    - AIRFLOW__CORE__LOGGING_LEVEL=INFO
  volumes:
    - /home/x/docker/airflow/dags:/opt/airflow/dags
    - /home/x/docker/airflow/airflow-data/logs:/opt/airflow/logs
    - /home/x/docker/airflow/airflow-data/plugins:/opt/airflow/plugins
    - /home/x/docker/airflow/airflow-data/airflow.cfg:/opt/airflow/airflow.cfg
  depends_on:
    - db

services:
  db:
    image: postgres:12
    #image: postgres:12.1-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=airflow
      - POSTGRES_PORT=9501
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - 9501:9501
    command: -p 9501

  airflow-init:
    <<: *airflow-common
    container_name: airflow_init
    entrypoint: /bin/bash
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    command:
      - -c
      - airflow users list || ( airflow db init &&
        airflow users create
          --role Admin
          --username airflow
          --password airflow
          --email airflow@airflow.com
          --firstname airflow
          --lastname airflow )
    restart: on-failure

  airflow-webserver:
    <<: *airflow-common
    command: airflow webserver
    ports:
      - 9500:8080
    container_name: airflow_webserver
    environment:
      - AIRFLOW_USERNAME=airflow
      - AIRFLOW_PASSWORD=airflow
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always

  airflow-scheduler:
    <<: *airflow-common
    command: airflow scheduler
    container_name: airflow_scheduler
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
Solved by following this docker-compose.yaml file:
https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml
And instead of trying to tweak the ports of Postgres (and Redis), I used the "expose" option, which avoids conflicts with other containers on the same host.
So not:

environment:
  POSTGRES_PORT: 9501
ports:
  - 9501:9501

But run the database internally on its default port and do not publish it to the host:

expose:
  - 5432
I'm still not sure what the problem was with using the higher ports. It may be some default fallback to sqlite when the configured DB cannot be reached for some reason.
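One plausible explanation (an assumption, not confirmed in this thread): YAML merge keys do a shallow merge, so any service that declares its own environment: block replaces the entire list inherited from x-airflow-common, silently dropping AIRFLOW__CORE__SQL_ALCHEMY_CONN and leaving Airflow on its sqlite default. A sketch of the pattern from the original file:

airflow-webserver:
  <<: *airflow-common             # shallow merge: only top-level keys are inherited
  environment:                    # this block REPLACES the anchor's environment list,
    - AIRFLOW_USERNAME=airflow    # so AIRFLOW__CORE__SQL_ALCHEMY_CONN is gone here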
I've got the following docker-compose file and it serves up the application on port 80 fine.
version: '3'

services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod

  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend

  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  static:
  postgres_data:
Once in there I can log into the admin and add an extra user, which gets saved to the database: I can reload the page and the user is still there. Once I stop the backend docker container, however, that user is gone. Given that Postgres is running in a different container and I'm not bringing it down, I'm unsure how stopping and restarting the backend container causes the data to become unavailable.
Thanks in advance.
EDIT:
I'm bringing up the containers with the following command:
docker-compose -f docker-compose.prod.yml up -d
I'm bringing the container down by just stopping it in Docker Desktop.
I'm running Django 3 for the backend, and I've also tried adding a superuser in the terminal while the container is running:
# python manage.py createsuperuser
Username (leave blank to use 'root'): mikey
Email address:
Password:
Password (again):
This password is too common.
Bypass password validation and create user anyway? [y/N]: y
Superuser created successfully.
This works, and the user appears while the container is running. However, once again, when I shut the container down via Docker Desktop and then restart it, the user that was just created is gone.
FURTHER EDIT:
settings.py uses dotenv ("from dotenv import load_dotenv"):
DATABASES = {
    "default": {
        "ENGINE": os.getenv("SQL_ENGINE"),
        "NAME": os.getenv("SQL_DATABASE"),
        "USER": os.getenv("SQL_USER"),
        "PASSWORD": os.getenv("SQL_PASSWORD"),
        "HOST": os.getenv("SQL_HOST"),
        "PORT": os.getenv("SQL_PORT"),
    }
}
with the .env.prod file having the following values:
DEBUG=0
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=db
SQL_PORT=5432
SOLUTION:
Read the comments to see the diagnosis by other legends, but the updated docker-compose file looks like this. Note the "depends_on" block.
version: '3'

services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
    depends_on:
      - db

  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend

  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - "5432"

volumes:
  static:
  postgres_data:
FINAL EDIT:
I added the following code to my entrypoint.sh file to ensure Postgres is ready to accept connections before the backend starts:
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
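For reference, an alternative to the nc loop (a sketch; it assumes a Compose version that supports depends_on conditions, i.e. the 2.x file format or the newer Compose Spec) is a healthcheck on db that the backend waits for:

db:
  image: postgres
  environment:
    - POSTGRES_DB=postgres
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
  healthcheck:
    # pg_isready ships with the postgres image
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 5s
    retries: 5

backend:
  build: ./Django-Backend
  depends_on:
    db:
      condition: service_healthy   # wait for a passing healthcheck, not just container start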