I have a working docker-based setup - peer(s), orderers and Explorer (db & app) - which I am aiming to deploy on GCP Kubernetes (GKE).
For the peer(s) and orderers I have used the docker images and created Kubernetes YAML files (StatefulSet, Service, NodePort and Ingress) to deploy on Kubernetes.
For Explorer I have the docker-compose file below, which depends on my local connection-profile and crypto files.
I am struggling to deploy Explorer on Kubernetes and am looking for advice on the approach.
I have tried converting the docker-compose file with Kompose, but ran into issues translating the networks and healthcheck tags.
I have tried building a single docker image (a Dockerfile with multiple FROM stages) from hyperledger/explorer-db:latest and hyperledger/explorer:latest, but again specifying the network becomes an issue.
Any suggestions or examples of how Explorer can be deployed in the cluster?
Thanks
Explorer Docker Compose
version: '2.1'

volumes:
  pgdata:
  walletstore:

networks:
  mynetwork.com:
    external:
      name: my-netywork

services:
  explorerdb.mynetwork.com:
    image: hyperledger/explorer-db:latest
    container_name: explorerdb.mynetwork.com
    hostname: explorerdb.mynetwork.com
    environment:
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWORD=password
    healthcheck:
      test: "pg_isready -h localhost -p 5432 -q -U postgres"
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork.com

  explorer.mynetwork.com:
    image: hyperledger/explorer:latest
    container_name: explorer.mynetwork.com
    hostname: explorer.mynetwork.com
    environment:
      - DATABASE_HOST=explorerdb.mynetwork.com
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWD=password
      - LOG_LEVEL_APP=info
      - LOG_LEVEL_DB=info
      - LOG_LEVEL_CONSOLE=debug
      - LOG_CONSOLE_STDOUT=true
      - DISCOVERY_AS_LOCALHOST=false
    volumes:
      - ./config.json:/opt/explorer/app/platform/fabric/config.json
      - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
      - ../config/crypto-config:/tmp/crypto
      - walletstore:/opt/explorer/wallet
    ports:
      - 8080:8080
    depends_on:
      explorerdb.mynetwork.com:
        condition: service_healthy
    networks:
      - mynetwork.com
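For reference, the three compose features that trip up Kompose all have reasonably direct Kubernetes equivalents: the external Docker network becomes a ClusterIP Service plus cluster DNS, the healthcheck becomes a readinessProbe, and depends_on with condition: service_healthy becomes an initContainer that waits for the database. A minimal sketch, assuming the database pods carry the label app: explorerdb and sit behind a Service of the same name:

apiVersion: v1
kind: Service
metadata:
  name: explorerdb               # stands in for the compose hostname / network alias
spec:
  selector:
    app: explorerdb
  ports:
    - port: 5432
---
# inside the explorer-db pod spec, the compose healthcheck becomes:
readinessProbe:
  exec:
    command: ["pg_isready", "-h", "localhost", "-p", "5432", "-U", "postgres"]
  periodSeconds: 30
  timeoutSeconds: 10
  failureThreshold: 5
---
# inside the explorer pod spec, depends_on/service_healthy becomes:
initContainers:
  - name: wait-for-db
    image: postgres:14           # any image shipping pg_isready; tag assumed
    command: ["sh", "-c", "until pg_isready -h explorerdb -p 5432; do sleep 2; done"]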
Explorer Dockerfile - multiple FROMs
# Updated to Fabric 2.x
#1. Docker file for setting up the Orderer
# FROM hyperledger/fabric-orderer:1.4.2
FROM hyperledger/explorer-db:latest
ENV DATABASE_DATABASE=fabricexplorer
ENV DATABASE_USERNAME=hppoc
ENV DATABASE_PASSWORD=password
FROM hyperledger/explorer:latest
COPY ./config/explorer/. /opt/explorer/
COPY ./config/crypto-config/. /tmp/crypto
ENV DATABASE_HOST=explorerdb.xxx.com
ENV DATABASE_DATABASE=fabricexplorer
ENV DATABASE_USERNAME=hppoc
ENV DATABASE_PASSWD=password
ENV LOG_LEVEL_APP=info
ENV LOG_LEVEL_DB=info
ENV LOG_LEVEL_CONSOLE=debug
ENV LOG_CONSOLE_STDOUT=true
ENV DISCOVERY_AS_LOCALHOST=false
# ENV EXPLORER_APP_ROOT=${EXPLORER_APP_ROOT:-dist}
# ENV ${EXPLORER_APP_ROOT}/main.js name - hyperledger-explorer
ENTRYPOINT ["tail", "-f", "/dev/null"]
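Note that a Dockerfile with multiple FROM lines is a multi-stage build: each FROM starts a fresh stage and only the last stage ends up in the final image, so the explorer-db stage above is silently discarded, and no network setting will make both processes run from one image. On Kubernetes the usual shape is to keep the database and the app as separate workloads wired together by a Service, with the local bind mounts replaced by ConfigMaps. A minimal sketch of the Explorer side (all object names assumed; crypto material is better placed in a Secret):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: explorer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: explorer
  template:
    metadata:
      labels:
        app: explorer
    spec:
      containers:
        - name: explorer
          image: hyperledger/explorer:latest
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_HOST
              value: explorerdb          # the Service name replaces the compose hostname
            - name: DATABASE_DATABASE
              value: fabricexplorer
            - name: DATABASE_USERNAME
              value: hppoc
            - name: DATABASE_PASSWD
              value: password
            - name: DISCOVERY_AS_LOCALHOST
              value: "false"
          volumeMounts:
            - name: explorer-config
              mountPath: /opt/explorer/app/platform/fabric/config.json
              subPath: config.json
            - name: connection-profile
              mountPath: /opt/explorer/app/platform/fabric/connection-profile
      volumes:
        - name: explorer-config
          configMap:
            name: explorer-config        # kubectl create configmap explorer-config --from-file=config.json
        - name: connection-profile
          configMap:
            name: explorer-connection-profile

The database side is the same idea with hyperledger/explorer-db:latest, a PersistentVolumeClaim in place of the pgdata volume, and the readinessProbe sketched earlier.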
There are two groups of required steps for this setup. The one I tested is:
1. Create a K8s cluster
2. Connect your cluster with the cloud shell
3. Clone this repository
git clone https://github.com/acloudfan/HLF-K8s-Cloud.git
4. Set up the storage class
cd HLF-K8s-Cloud/gcp
kubectl apply -f .
This will set up the storage class.
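For reference, a GCE storage class manifest generally looks like this (a sketch; the actual name and parameters come from the repo's manifests):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fabric-sc              # assumed name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard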
5. Launch the Acme Orderer
cd ..
kubectl apply -f ./k8s-acme-orderer.yaml
Check the logs for 'acme-orderer-0' to ensure there is no error.
6. Launch the Acme Peer
kubectl apply -f ./k8s-acme-peer.yaml
Check the logs for 'acme-peer-0' to ensure there is no error.
7. Set up the channel & join the acme peer to it.
kubectl exec -it acme-peer-0 /bin/bash
./submit-channel-create.sh
./join-channel.sh
Ensure that the peer has joined the channel:
peer channel list
exit
8. Launch the budget Peer and join it to the channel
kubectl apply -f ./k8s-budget-peer.yaml
Wait for the container to launch & check the logs for errors.
kubectl exec -it budget-peer-0 /bin/bash
./fetch-channel-block.sh
./join-channel.sh
Ensure that the peer has joined the channel:
peer channel list
exit
** At this point your K8s Fabric Network is up **
Validate the network
1. Install & instantiate the test chaincode
kubectl exec -it acme-peer-0 /bin/bash
./cc-test.sh install
./cc-test.sh instantiate
2. Invoke | query the chaincode to see the changes in values of a/b
./cc-test.sh query
./cc-test.sh invoke
3. Check the values inside the Budget peer
kubectl exec -it budget-peer-0 /bin/bash
./cc-test.sh install
./cc-test.sh query
The query should return the same values as you see in acme-peer. Execute invoke/query in both peers to validate.
Plus, you can visit the following threads for option 2 and more references on the proper steps to set up your environment: Production Network with GKE, HLF-K8s-Cloud, Hyperledger Fabric blockchain deployment on Google Kubernetes Engine, and hyperledger/fabric-peer.
Related
In my Kubernetes cluster I am running GitLab EE 15.8.0 with a GitLab Runner. This runner is configured with the Kubernetes executor, and I have mounted /var/run/docker.sock into the runner via the ConfigMap. When running a pipeline which brings up a docker-compose-test.yml, I can see that all pods that exist in Kubernetes start to crash and get restarted. After that the pipeline is still in the Running state, but no runner is working on it. The last command the runner executed in the pipeline was: docker-compose -f docker-compose-test.yml up -d.
I expected the pipeline to just bring up the docker containers and run the Laravel tests using the database container and the application container, but instead it messes up the Nginx-Ingress resource.
I am running GitLab-ee:15.8.0 with the gitlab-runner version 15.8.2
Here is the gitlab-ci.yml:
image: docker:20.10.16

services:
  - docker:20.10.16-dind

variables:
  DOCKER_COMPOSE_CMD: "docker-compose -f docker-compose-test.yml"

stages:
  - test
  - build

test:
  stage: test
  script:
    - docker-compose --version
    - $DOCKER_COMPOSE_CMD down --volumes --remove-orphans
    - $DOCKER_COMPOSE_CMD up -d
    - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test ./scripts/wait-for.sh database-test:54321 -t 60 -- echo "Database connection established"
    - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test php artisan passport:keys
    - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test php artisan migrate
    - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test sh -c "vendor/bin/phpunit ./tests $PARAMETERS --coverage-text --colors=never --stderr"
    - $DOCKER_COMPOSE_CMD down --volumes --remove-orphans
  # only:
  #   - tags

build:
  stage: build
  script:
    - export IMAGE_TAG=$(echo "$CI_COMMIT_TAG" | awk -F '/' '{print $NF}')
    - docker build -t laravel-api:"$IMAGE_TAG" .
    - docker login -u "$CONTAINER_REGISTRY_USERNAME" -p "$CONTAINER_REGISTRY_PASSWORD" "$CONTAINER_REGISTRY_URL"
    - docker push laravel-api:"$IMAGE_TAG"
  only:
    - tags
And this is the docker-compose-test.yml that seems to mess things up:
version: "3.7"
services:
laravel-api-test:
build:
args:
user: laravel
uid: 1000
context: .
dockerfile: docker/development/Dockerfile
working_dir: /var/www/
volumes:
- ./:/var/www
ports:
- ${APP_PORT}:9000
networks:
- application
database-test:
image: postgres:15.1-alpine
ports:
- 54321:5432
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_USER: ${DB_USERNAME}
networks:
- application
networks:
application:
driver: bridge
The last thing that is probably relevant is the gitlab-runner config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-config
  namespace: gitlab-runner
data:
  config.toml: |-
    concurrent = 4
    [[runners]]
      name = "Runner_1"
      url = "https://gitlab.project.com/ci"
      token = "my-token"
      executor = "kubernetes"
      [runners.kubernetes]
        namespace = "gitlab-runner"
        privileged = true
        poll_timeout = 600
        cpu_request = "1"
        service_cpu_request = "200m"
        [[runners.kubernetes.volumes.host_path]]
          name = "docker"
          mount_path = "/var/run/docker.sock"
          host_path = "/var/run/docker.sock"
Finally, this is the output from the pipeline after it crashed:
Running with gitlab-runner 15.8.2 (4d1ca121)
on Runner_1 eNNz4y9k, system ID: r_y3jEhmF8fN58
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image docker:20.10.16 ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-ennz4y9k-project-117-concurrent-0f24cx to be running, status is Pending
Running on runner-ennz4y9k-project-117-concurrent-0f24cx via gitlab-runner-56cd6f4bb5-zrbd9...
Getting source from Git repository
00:01
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/Clients/opus-volvere/laravel-api/.git/
Created fresh repository.
Checking out 3890412c as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ docker-compose --version
Docker Compose version v2.6.0
$ $DOCKER_COMPOSE_CMD down --volumes --remove-orphans
Container laravel-api-database-test-1 Stopping
Container laravel-api-laravel-api-test-1 Stopping
Container laravel-api-database-test-1 Stopping
Container laravel-api-laravel-api-test-1 Stopping
Container laravel-api-database-test-1 Stopped
Container laravel-api-database-test-1 Removing
Container laravel-api-laravel-api-test-1 Stopped
Container laravel-api-laravel-api-test-1 Removing
Container laravel-api-laravel-api-test-1 Removed
Container laravel-api-database-test-1 Removed
Network laravel-api_application Removing
Network laravel-api_application Removed
$ $DOCKER_COMPOSE_CMD up -d
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 827B done
#1 DONE 0.1s
#2 [internal] load .dockerignore
#2 transferring context: 88B done
#2 DONE 0.1s
I am not really sure where to look (log files or anything else), so any help with debugging this issue is really appreciated...
As far as I can see, the problem only starts when I try to launch the docker compose stack. I already built the image in the pipeline and that worked like it should, but it starts to go wrong when I actually try to run the containers. Maybe that helps? This is just a really annoying problem that isn't my real expertise or anything, so I am reading, learning and trying a lot :(
I followed this tutorial on how to add a GitLab runner to Kubernetes. Maybe it has something to do with the fact that it tries to create a new pod for the pipeline, because the tutorial I sent says:
The second is a ServiceAccount, Role, and RoleBinding to give the Runner the privileges to add new Pods to the Namespace.
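For context, the objects that quote refers to look roughly like this (a sketch; the exact rule list varies by tutorial):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/attach", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-runner
subjects:
  - kind: ServiceAccount
    name: gitlab-runner
    namespace: gitlab-runner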
Again, I am not familiar with all this stuff, so for me it's also a shot in the dark, but I really want this fixed so I can continue working on this project.
What could cause this GitLab pipeline to crash my entire kubernetes?
Never expose the host container runtime to workloads inside the cluster.
This can lead to the situation where the GitLab runner "cleans up" and removes the containers that operate your cluster components.
In addition, you get tied to a specific container runtime, which should be an implementation detail of your cluster.
As an alternative you can, for example, use docker-in-docker (dind) for the GitLab runner.
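A minimal sketch of the job rewired for dind instead of the host socket (assuming TLS is disabled for simplicity; with TLS the daemon listens on 2376 instead):

# .gitlab-ci.yml
image: docker:20.10.16
services:
  - docker:20.10.16-dind
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""       # keep the dind daemon on plain 2375
test:
  stage: test
  script:
    - docker-compose -f docker-compose-test.yml up -d

The [[runners.kubernetes.volumes.host_path]] block mounting /var/run/docker.sock can then be dropped from the runner ConfigMap; privileged = true stays, since dind needs it.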
I'm trying to deploy a Docker Compose application to Amazon ECS.
I have created this docker compose file:
services:
  consul-server:
    container_name: consul-server
    hostname: consul-server
    image: consul:1.12.2
    command: agent -server -ui -node=server1 -bootstrap-expect=1 -client=0.0.0.0
    ports:
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - xp_network

  infra-service:
    image: infra-service-docker-img:latest
    build: .
    container_name: infra-service
    hostname: infra-service
    ports:
      - 5011:5011
    networks:
      - xp_network

networks:
  xp_network:
    name: xp-vpc
I create an ECS context to target Amazon ECS using the following command:
docker context create ecs ecscontext
I have AWS credentials set up in the local environment for authenticating with the ECS platform (I ran aws configure and added the keys), and then I use an existing AWS profile. After creating the new context I checked with docker context ls and ensured that I was using my context.
Run --> docker compose up
When I do docker-compose up -d I can see Container infra-service Started and Container consul-server Started.
When I check the state of the services I cannot see any difference in the PORTS. There is also no connection to AWS and no resources created there :S
basically I did:
$ aws configure
--> keys
--> region
$ docker compose build
$ docker context create ecs ecscontext
--> An existing AWS profile
$ docker context use ecscontext
$ docker compose up
$ docker compose ps
Please, can anyone tell me what I'm doing wrong? Do you think it's something related to the credentials setup?
I'd like to run some integration tests against a real database, but I fail to start the additional container (for the db), because I need to mount a config file from my repo before it starts up.
This is how I use the database on my local computer (docker-compose):
gremlin-server:
  image: tinkerpop/gremlin-server:3.5
  container_name: 'gremlin-server'
  entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
  networks:
    - graphdb_net
  ports:
    - 8182:8182
  volumes:
    - ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
    - ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
I guess I cannot use a service container, as the code is not available at the time the service container is started, and therefore it won't pick up my configuration.
That's why I tried to run a container within my job using --network host (see below). The container seems to be running fine, but I'm still not able to curl it.
- name: Start DB for tests
  run: |
    docker run -d \
      --network host \
      -v ${{ github.workspace }}/dev/conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml \
      -v ${{ github.workspace }}/dev/conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties \
      tinkerpop/gremlin-server:3.5
- name: Test connection
  run: |
    curl "localhost:8182/gremlin?gremlin=g.V().valueMap()"
According to the documentation about the job context the id of the container network should be available ({{job.container.network}}) but is empty if you don’t use any job-level service or container.
Any ideas what I could try next?
This is what I ended up with: I'm now using docker-compose to run the integration tests (on my local computer as well as on GitHub Actions), and I'm just mounting the entire directory/repo into the test container. Pulling node:14-slim delays the build by a few seconds, but I guess it's still the best option:
version: "3.2"
services:
gremlin-server:
image: tinkerpop/gremlin-server:3.5
container_name: 'gremlin-server'
entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
networks:
- graphdb_net
ports:
- 8182:8182
volumes:
- ./data/:/opt/gremlin-server/data/
- ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
- ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
- ./conf/initData.groovy:/opt/gremlin-server/scripts/initData.groovy
test:
image: node:14-slim
working_dir: /app
depends_on:
- gremlin-server
networks:
- graphdb_net
volumes:
- ../:/app
environment:
- NEPTUNE_CONNECTION_STRING=ws://gremlin-server:8182
command:
yarn test
networks:
graphdb_net:
driver: bridge
and I'm running them like this in my workflow:
- name: Spin up test environment
  run: |
    docker compose -f dev/docker-compose.yaml pull
    docker compose -f dev/docker-compose.yaml build
- name: Run tests
  run: |
    docker compose -f dev/docker-compose.yaml run test
It's based on @DannyB's suggestion and his answer here, so all props go to him.
I am trying to deploy a Flask app/service, built into a docker container, with GitLab CI. I am able to get everything working via docker-compose, except when I try to run tests against the postgres database I get the error below:
Is the server running on host "events_db" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
Presumably this is because the containers can't see each other. I've tried many different methods; below is my latest. I have attempted to have docker-compose spin up both containers (just like it does locally), run the postgres db as a GitLab service, run from a python image instead of a docker image, and use a docker.prod.yml where I remove the volumes and variables.
Nothing is working. I've checked just about every link that shows up on Google when you search for 'gitlab ci docker flask postgres', and I believe I am massively misunderstanding the implementation.
I do have the GitLab runner up and running.
.gitlab-ci.yml
image: docker:latest

services:
  - docker:dind
  - postgres:latest

stages:
  - test

variables:
  POSTGRES_DB: events_test
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  DATABASE_URL: postgres://postgres@postgres:5432/events_test
  FLASK_ENV: development
  APP_SETTINGS: app.config.TestingConfig
  DOCKER_COMPOSE_VERSION: 1.23.2

before_script:
  #- rm /usr/local/bin/docker-compose
  - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
  - pip install docker-compose
  #- mv docker-compose /usr/local/bin
  - docker-compose up -d --build

test:
  stage: test
  #coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
  script:
    - docker-compose exec -T events python manage.py test
  after_script:
    - docker-compose down
docker-compose.yml
version: '3.3'

services:
  events:
    build:
      context: ./services/events
      dockerfile: Dockerfile
    volumes:
      - './services/events:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=app.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@events_db:5432/events_dev # new
      - DATABASE_TEST_URL=postgres://postgres:postgres@events_db:5432/events_test # new

  events_db:
    build:
      context: ./services/events/app/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
What is the executor type of your Gitlab Runner?
If you're using the Kubernetes executor, add this variable:
DOCKER_HOST: tcp://localhost:2375/
For non-Kubernetes executors, we use tcp://docker:2375/
DOCKER_HOST: tcp://docker:2375/
Also, the Gitlab Runner should be in "privileged" mode.
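For the docker:dind service shown in the question, that means a variables block along these lines (a sketch; newer docker:dind images enable TLS by default, so you may also need DOCKER_TLS_CERTDIR set to an empty string to keep the daemon on 2375):

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""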
More info:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#help-and-feedback
Hope that helps!
I have the following docker-compose.yaml for a private docker registry to be run on minikube:
version: '3'

services:
  registry:
    restart: always
    image: registry:2
    command: ["/bin/sh", "-ec", "sleep 1000"]
    ports:
      - 443:443
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /certs/domain.key
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
    volumes:
      - /home/usr/registry/data:/var/lib/registry
      - /home/usr/registry/certs:/certs
      - /home/usr/registry/auth:/auth
When I do kompose up, the registry should be up and running. But doing docker login localhost:443 only gives me a connection refused error. If I run
docker run -d --restart=always --name registry -v `pwd`/auth:/auth -v `pwd`/certs:/certs -v `pwd`/certs:/certs -e REGISTRY_AUTH=htpasswd -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key -p 443:443 registry:2
everything works fine and I can log into my private registry.
The reason this is important is that I have a webapp image in the private registry that should be pulled from it by Kubernetes (minikube). However, I always get a CrashLoopBackOff error, which I attribute to the fact that the registry cannot be run from Kubernetes and consequently cannot be accessed by it. What am I getting wrong?
The solution is to set up a registry in minikube and port-forward to it from localhost so the image gets pushed onto the minikube registry.
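A sketch of that approach using the built-in addon (the image name webapp is assumed; the addon's registry Service lives in kube-system and listens on port 80):

minikube addons enable registry
kubectl port-forward --namespace kube-system service/registry 5000:80
docker tag webapp localhost:5000/webapp
docker push localhost:5000/webapp

Pods can then reference the image as localhost:5000/webapp, since the addon's registry-proxy also exposes the registry on port 5000 of each node.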