I was adding custom networks for my containers in my docker-compose file, but I came across an error when running docker-compose up.
I just added:
networks:
- mumble
and that caused docker-compose up to produce this error:
[+] Running 1/2
⠿ Container murmur Started > 1.6s
⠿ Container botamusique Starting > 1.4s
Error response from daemon: failed to add interface veth60b7c6b to sandbox: error setting interface "veth60b7c6b" IP to 172.18.0.3/16: cannot program address 172.18.0.3/16 in sandbox interface because it conflicts with existing route {Ifindex: 38 Dst: 172.18.0.0/16 Src: 172.18.0.1 Gw: Flags: [] Table: 254}
This is pretty simple, but the error message doesn't make it obvious what's wrong.
This error happens if you have both
network_mode: host
and
networks:
- somenetwork
at once on the same service in your .yaml file; the two naturally conflict with each other.
Just remove network_mode: host and keep networks:, and it'll work fine.
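For illustration, a minimal sketch of the broken layout (the service, image, and network names are placeholders modeled on the question):

services:
  botamusique:
    image: azlux/botamusique   # placeholder image
    network_mode: host         # <-- remove this line
    networks:                  # <-- keep this block
      - mumble

networks:
  mumble: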
Related
I'm getting an error when the configuration file is set.
My host is Ubuntu 22.04.
Inside the Docker container the user is rabbitmq; running id -u rabbitmq shows the UID is 999.
I changed the file's owner using chown 999 advanced.config.
But the same error still persists.
Failed to load advanced configuration file "/etc/rabbitmq/advanced.config": unknown POSIX error
Error during startup: {error,failed_to_read_advanced_configuration_file}
version: "3.2"
services:
rabbitmq2:
image: rabbitmq:3-management
hostname: rabbitmq2
container_name: 'rabbitmq2'
ports:
- "5672:5672"
- "15672:15672"
- "5552:5552"
volumes:
- ./advanced/rabbitmq2/advanced.config:/etc/rabbitmq/advanced.config
# or using:
# - type: bind
# source: $PWD/advanced/rabbitmq2/advanced.config
# target: /etc/rabbitmq/advanced.config
environment:
- RABBITMQ_ADVANCED_CONFIG_FILE=/etc/rabbitmq/advanced.config
If I put the file in another place, or use another file name, the container runs, but RabbitMQ doesn't load the configuration file.
I changed the content of the file and it still didn't work (RabbitMQ can't load the file). I tried a blank file, and some actual configurations, for example:
[
%% 4 replicas by default, only makes sense for nine node clusters
{rabbit, [{quorum_cluster_size, 4},
{quorum_commands_soft_limit, 512}]}
]
Be sure the format is correct:
[
%% 4 replicas by default, only makes sense for nine node clusters
{rabbit, [{quorum_cluster_size, 4},
{quorum_commands_soft_limit, 512}]}
].
Note the trailing period: advanced.config is an Erlang terms file, and every term must be terminated with a period or it will fail to parse.
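As a quick sanity check (a sketch, assuming the rabbitmq2 container from the compose file above is running), the Erlang runtime shipped in the image can parse the file; file:consult/1 only succeeds when every term is properly terminated:

docker exec rabbitmq2 erl -noshell -eval \
  '{ok, Terms} = file:consult("/etc/rabbitmq/advanced.config"), io:format("parsed: ~p~n", [Terms]), halt().'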
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The config server is reachable at localhost:8888, but when I deploy my applications on SCDF the following error occurs:
Fetching config from server at : http://localhost:8888
2021-07-30 14:58:53.535 INFO 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2021-07-30 14:58:53.535 WARN 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Could not locate PropertySource ([ConfigServerConfigDataResource#3de88f64 uris = array<String>['http://localhost:8888'], optional = true, profiles = list['default']]): I/O error on GET request for "http://localhost:8888/backend-service/default": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
The application(s) deploy successfully on SCDF apart from the config server connection. The only property I specify in SCDF is the docker network. I'm using spring.config.import and am not using any bootstraps. This all works correctly when deployed locally but the microservices can't connect to the config server when deployed on SCDF.
Spring Boot Version: 2.5.1
app properties
spring.application.name=backend-service
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.config.import=optional:configserver:http://localhost:8888
config server properties
spring.cloud.config.server.git.uri=...
management.endpoints.web.exposure.include=*
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.cloud.bus.id=my-config-server
spring.cloud.stream.rabbit.bindings.springCloudBus.consumer.declareExchange=false
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.cloud.bus.enabled=true
spring.cloud.bus.refresh.enabled=true
spring.cloud.bus.env.enabled=true
server.port=8888
docker-compose.yml
version: '3.1'
services:
h2:
...
rabbitmq-container:
image: rabbitmq:3.7.14-management
hostname: dataflow-rabbitmq
expose:
- '5672'
ports:
- "5672:5672"
- "15672:15672"
networks:
- scdfnet
dataflow-server:
...
networks:
- scdfnet
app-import:
...
networks:
- scdfnet
skipper-server:
...
networks:
- scdfnet
configserver-container:
image: ...
ports:
- "8888:8888"
expose:
- '8888'
environment:
- spring_rabbitmq_host=rabbitmq-container
- spring_rabbitmq_port=5672
- spring_rabbitmq_username=guest
- spring_rabbitmq_password=guest
depends_on:
- rabbitmq-container
networks:
- scdfnet
networks:
scdfnet:
external:
name: scdfnet
volumes:
h2-data:
For anyone else having this problem, I have found two ways of solving it. The problem is that once the Spring Boot application is containerized, localhost in the properties file resolves to the application container's own virtual network, not to your local machine.
There are numerous Stack Overflow answers for this same error, but they all center on corrections to bootstrap properties. However, bootstrap context initialization has been deprecated since Spring Boot 2.4.
The first solution is to use your IPv4 address instead of localhost.
spring.config.import=configserver:http://<insert IPv4 address>:8888
For Example:
spring.config.import=configserver:http://10.6.39.148:8888
A much better solution than hardwiring addresses is to reference the config server container running in Docker Compose:
spring.config.import=optional:configserver:http://configserver-container:8888
Make sure that all of the Docker Compose services are running on the same network (scdfnet in my case). Note that this address will only work when running under Docker Compose, so if you are building the Maven project in Eclipse, you may need to remove or disable your tests to build successfully. That might be unnecessary; it could just be that there is some property I failed to copy to my local application.properties file which is causing the context tests to fail. According to the documentation, the optional label should allow the config client to start even if contact cannot be established with the config server.
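If you prefer not to hard-code the URL in application.properties, the same setting can also be injected per container as an environment variable, since Spring Boot's relaxed binding maps SPRING_CONFIG_IMPORT to spring.config.import (the service name and image below are hypothetical):

backend-service:
  image: my-backend-image   # placeholder client image
  environment:
    - SPRING_CONFIG_IMPORT=optional:configserver:http://configserver-container:8888
  networks:
    - scdfnet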
I am trying to set up an LXC container (Debian) as a Kubernetes node.
I have gotten far enough that the only thing in the way is the kubeadm init script...
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.4.44-2-pve/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/5.4.44-2-pve\n", err: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
After some research I figured out that I probably need to add the following: linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
But adding this to /etc/pve/lxc/107.conf doesn't do anything.
Does anybody have a clue how to add the Linux kernel modules?
To allow modprobe to load any module inside a privileged Proxmox LXC container, you need to add these options to the container config:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: proc:rw sys:rw
lxc.mount.entry: /lib/modules lib/modules none bind 0 0
Before that, you must first create the /lib/modules folder inside the container.
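For example, from the Proxmox host (107 being the container ID from the question):

pct exec 107 -- mkdir -p /lib/modules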
I'm not sure which guide you are following, but assuming you have the required kernel modules on the host, this would do it (note this is LXD's lxc command, not Proxmox's pct):
lxc config set my-container linux.kernel_modules overlay
You can follow this guide from K3s too. Basically:
lxc config edit k3s-lxc
and
config:
linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
raw.lxc: lxc.mount.auto=proc:rw sys:rw
security.privileged: "true"
security.nesting: "true"
✌️
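After saving the edit, restart the container so the privileged/nesting settings take effect:

lxc restart k3s-lxc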
To fix ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file, run from the host:
pct set $VMID --mp0 /usr/lib/modules/$(uname -r),mp=/lib/modules/$(uname -r),ro=1,backup=0
To fix [ERROR SystemVerification]: failed to parse kernel config, run from the host:
pct push $VMID /boot/config-$(uname -r) /boot/config-$(uname -r)
Where $VMID is your container id.
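To verify both fixes, note that an LXC container shares the host kernel, so $(uname -r) expands to the same release on either side; a quick check from the host that the files are now visible inside the container:

pct exec $VMID -- ls /lib/modules/$(uname -r)
pct exec $VMID -- ls /boot/config-$(uname -r)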
I'm running my Docker container with
docker run -p 8888:80 -p 5045:5045 -p 5672:5672 kalle:anka
without any problem. But when I try to achieve the same thing with docker-compose, I get this error:
ERROR: for myapp.api_1 Cannot start service myapp.api: failed to
create endpoint myapp.api_1 on network nat: hnsCall failed in Win32:
The process cannot access the file because it is being used by another
process. (0x20)
The error above seems to have something to do with the port mappings. (When I remove those, I get past that problem.)
... I'll keep this question short; please ask if you need any details.
Relevant part of my docker-compose.yaml:
version: '3.4'

services:
  myapp.api:
    image: ${DOCKER_REGISTRY-}myappapi
    ports:
      - "8888:80"
      - "5045:5045"
      - "5672:5672"
I am attempting to port the Hyperledger Fabric Getting Started guide to Kubernetes, but am struggling to get the peer1's to deploy. If I enable CORE_PEER_GOSSIP_BOOTSTRAP, I receive the error "Received AliveMessage from a peer with the same PKI-ID as myself".
How can I debug a peer reportedly having the same PKI-ID as another?
Using this as a starting point:
https://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
I am able to create:
orderer and cli pods in the default namespace
peer0's, one in each of the org1 and org2 namespaces
peer1's, but only if I disable (comment out) CORE_PEER_GOSSIP_BOOTSTRAP
If I enable CORE_PEER_GOSSIP_BOOTSTRAP for the peer1's, I receive the following warning and error:
[gossip/gossip#10.0.0.10:7051] NewGossipService -> WARN 01c External endpoint is empty, peer will not be accessible outside of its organization
...
[gossip/discovery#10.0.0.10:7051] handleAliveMessage -> ERRO 02a Bad configuration detected: Received AliveMessage from a peer with the same PKI-ID as myself: tag:EMPTY alive_msg:<membership:<pki_id:"[[REDACTED]]" > timestamp:<inc_number:1495468533769417608 seq_num:416 > >
In order to better map the orderer and peers to DNS names, I'm using Kubernetes namespaces and this configuration:
OrdererOrgs:
- Name: Orderer
Domain: default.svc.cluster.local
Specs:
- Hostname: orderer
PeerOrgs:
- Name: Org1
Domain: org1.svc.cluster.local
Template:
Count: 2
Users:
Count: 2
- Name: Org2
Domain: org2.svc.cluster.local
Template:
Count: 2
Users:
Count: 2
In order to expose the peer0's to the other peers in the org and to expose the orderer, I have ClusterIP services for the peer0's (selecting only the peer0's) and orderer. It's inelegant but I'm trying to get it to work before I get it working more beautifully.
I am able to resolve orderer.default.svc.cluster.local, peer0.org1.svc.cluster.local, and peer0.org2.svc.cluster.local using nslookup from within a pod deployed to the default namespace on the cluster.
Absent a curl-like tool for gRPC, I am able to open sockets against these endpoints on 7051 and 7053.
First, make sure you are using the right certificates.
Second, verify that your environment/configuration for gossip is set correctly; for peer1 it would look like:
environment:
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:8051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
- CORE_PEER_GOSSIP_ENDPOINT=peer1.org1.example.com:8051
OR in core.yaml
peer:
gossip:
bootstrap: peer0.org1.example.com:7051
externalEndpoint: peer1.org1.example.com:8051
endpoint: peer1.org1.example.com:8051
Edited: Also make sure that you have properly set up your CA.
Hope this helps; it worked for me, and I was successfully able to connect the peers.
If the peers are started from the same node, it's possible that you are mounting the same crypto material (the path to the mspconfig directory) for both peers. If that is the case, separate the directory structures for the two peers, keep their respective certificates in them, update the respective msp paths in the docker-compose file, and try running again.
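As an illustration (a sketch; the paths follow the standard cryptogen layout and the service names are placeholders), each peer should mount only its own MSP directory, so that the certificates, and hence the PKI-IDs, differ:

services:
  peer0-org1:
    volumes:
      # peer0 mounts only its own MSP material
      - ./crypto-config/peerOrganizations/org1.svc.cluster.local/peers/peer0.org1.svc.cluster.local/msp:/etc/hyperledger/fabric/msp
  peer1-org1:
    volumes:
      # peer1 mounts a separate directory with its own certificates
      - ./crypto-config/peerOrganizations/org1.svc.cluster.local/peers/peer1.org1.svc.cluster.local/msp:/etc/hyperledger/fabric/msp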