I recently started a project involving Orion Context Broker and tried to start it on Windows using Docker.
https://hub.docker.com/r/fiware/orion/
The first method didn't work; the error I got while using the original code from the tutorial is:
ERROR: yaml.parser.ParserError: while parsing a block mapping
  in ".\docker-compose.yml", line 1, column 1
expected <block end>, but found '<block mapping start>'
  in ".\docker-compose.yml", line 5, column 2
Then I moved on to the second method: I started MongoDB with default parameters, got it listening for connections, and used method 2A from the Docker site.
sudo docker run -d --name orion1 -p 1026:1026 fiware/orion
It seems to have started, since it returned no errors on startup. However, if I run:
curl localhost:1026/version
I receive no response whatsoever; the request just hangs, and in the MongoDB console I don't see any new connection. The address of the Docker container is correct and the firewall is off. It seems Orion hasn't connected to the database, even though it's running. If I try to start the Context Broker again, it tells me it's already running, so I stop it, remove orion1, and can start it again. When I connect to the running MongoDB from another console it shows a new connection, but when the Context Broker connects there isn't one.
When I checked the CB logs, I got:
time=Tuesday 24 Oct 21:37:32 2017.378Z | lvl=ERROR | corr=N/A
trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion |
op=mongoConnectionPool.cpp[152]:mongoConnect |
msg=Database Startup Error (cannot connect to mongo - doing 100 retries with a 1000 microsecond interval)
Regarding the docker-compose.yml failure: copy-paste is sometimes tricky... I recommend downloading the file directly from the GitHub repository. The following should work:
wget https://raw.githubusercontent.com/telefonicaid/fiware-orion/master/docker/docker-compose.yml
Regarding the Orion docker failing to connect to the database, have a look at section 2B in the Docker documentation:
sudo docker run -d --name orion1 --link mongodb:mongodb -p 1026:1026 fiware/orion -dbhost mongodb
It seems you are missing the --link mongodb:mongodb parameter (which of course requires a MongoDB container named mongodb to be running first).
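For completeness, a minimal sketch of the full sequence (the plain mongo image here is my assumption; use whatever MongoDB image or version you need):
# 1. start a MongoDB container named "mongodb" first
sudo docker run -d --name mongodb mongo
# 2. start Orion linked to it, pointing -dbhost at that container name
sudo docker run -d --name orion1 --link mongodb:mongodb -p 1026:1026 fiware/orion -dbhost mongodb
# 3. verify the broker responds
curl localhost:1026/version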
Related
I'm monitoring a PostgreSQL DB process with telegraf and the procstat input plugin, but it doesn't seem to be working as expected on all machines (master and slaves).
The input plugin configuration looks like this:
[[inputs.procstat]]
  systemd_unit = "postgresql-10.service"
On one machine the process is running, yet telegraf says running=0i.
I can connect to the DB without problem using psql!
On another machine, telegraf says running=1i.
The only difference I can see between the two machines is the absence of a MainPID.
It seems that telegraf uses the command systemctl show postgresql-10.service | grep MainPID to find the PID of the process, and gets 0 on the machine where it says running=0i:
systemctl show postgresql-10.service | grep MainPID
GuessMainPID=yes
MainPID=0
ExecMainPID=0
The databases are installed with an Ansible playbook, so there's no difference in the PostgreSQL version or the systemd unit file configuration.
I found this issue but couldn't figure out exactly what to do to fix it.
What is the issue, and how do I fix it?
Thanks in advance.
Update:
systemctl list-unit-files | grep enabled | grep postgres
postgres_exporter.service enabled
systemctl list-unit-files | grep postgres
postgres_exporter.service enabled
postgresql-10.service disabled
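One possible workaround sketch (the pattern value below is an assumption; adjust it to match your postmaster's process name): since the unit reports MainPID=0, let procstat find the processes by pgrep-style pattern instead of by systemd unit:
[[inputs.procstat]]
  # match the postmaster by name rather than asking systemd for MainPID
  pattern = "postgres"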
I'm trying to run my Spring Boot project as a dockerised container alongside a MongoDB container, both within the same Docker network.
I get the following error:
INFO 1 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
I've already set up the Docker network successfully, and created and run the mongodb container:
docker run --name=mongotestv1 --rm --network=testnetwork mongotestv1
My understanding is that this pulls the image off the Docker registry and creates a container that now runs in the Docker network testnetwork.
This seems to work: docker ps shows the container is indeed running.
So next I started my Spring Boot app itself in the same Docker network:
docker run --name=notedemo --rm --network=testnetwork -p 8089:8089 -e MONGO_URL=mongodb://mongotestv1:27017/dev notedemo
This ran fine (aside from the connection errors that popped up alongside the normal Spring app runtime logs).
docker ps now shows both mongotestv1 and notedemo running.
So I thought that since the app is unable to connect to localhost:27017, maybe it's due to my other locally brew-installed MongoDB Community server, even though that's already stopped?
So I ran ps -ef | grep mongod | grep -v grep | wc -l | tr -d ' ', ps -ef | grep mongodb-community | grep -v grep | wc -l | tr -d ' ', and brew services list. All showed that the Community version is already stopped, with zero matching processes.
Running ps -ef | grep mongo | grep -v grep | wc -l | tr -d ' ' showed one process, which I assume is the mongo container.
I've also disabled my antivirus to ensure the firewall wasn't blocking connections.
So localhost:27017 is definitely not in use; why this error?
This is my application.properties:
spring.data.mongodb.uri=mongodb://localhost:27017/dev
server.port=8089
Really stuck here. Appreciate any help!
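One detail that may matter here (an observation of mine, not verified against this project): inside the notedemo container, localhost refers to the app container itself, and the MONGO_URL variable passed with -e is never read unless application.properties references it. A sketch using a Spring property placeholder with a fallback:
# assumption: the env var name matches the -e MONGO_URL flag used above
spring.data.mongodb.uri=${MONGO_URL:mongodb://localhost:27017/dev}
server.port=8089
With that placeholder, the mongodb://mongotestv1:27017/dev value takes effect, and the hostname mongotestv1 resolves through Docker's DNS on testnetwork.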
Well, as the title suggests, this is more of an issue record. I was trying to follow the instructions in this README file for the Keycloak docker server images, but encountered a few blockers.
After pulling the image, the command below to start a standalone instance failed:
docker run jboss/keycloak
The error stack trace:
-b 0.0.0.0
=========================================================================
Using PostgreSQL database
=========================================================================
...
04:45:06,084 INFO [io.smallrye.metrics] (MSC service thread 1-5) Converted [2] config entries and added [4] replacements
04:45:06,096 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 33) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "datasources"),
("data-source" => "KeycloakDS")
]) - failure description: "WFLYCTL0113: '' is an invalid value for parameter user-name. Values must have a minimum length of 1 characters"
...
Caused by: java.lang.RuntimeException: Failed to connect to database
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:382)
...
Caused by: javax.naming.NameNotFoundException: datasources/KeycloakDS -- service jboss.naming.context.java.jboss.datasources.KeycloakDS
at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:106)
...
I was wondering how it uses a PostgreSQL database, and assumed it might spin up its own instance. But the error looks like it has a problem connecting to the database.
Changing to the embedded H2 DB made it work.
docker run -e DB_VENDOR="h2" --name docker-keycloak-h2 jboss/keycloak
The docker-entrypoint.sh file shows that it uses the logic below to determine which DB to use:
if (getent hosts postgres &>/dev/null); then
  export DB_VENDOR="postgres"
...
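To reproduce that vendor check on your own host (a sketch; the result depends entirely on your resolver):
# mirrors the entrypoint's detection logic
if getent hosts postgres &>/dev/null; then
  echo "would select postgres"
else
  echo "would fall through to the next check"
fi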
Further down the flow, this change-database.cli file indicates that it actually expects a running PostgreSQL instance:
connection-url=jdbc:postgresql://${env.DB_ADDR:postgres}:${env.DB_PORT:5432}/${env.DB_DATABASE:keycloak}${env.JDBC_PARAMS:}
So I began wondering how PostgreSQL was chosen as the default in the first place. Executing the commands below in a running Keycloak docker container revealed something interesting.
[root@71961b81189c bin]# getent hosts postgres
69.172.201.153 postgres.mbox.com
[root@71961b81189c bin]# echo $?
0
I'm not sure what this postgres.mbox.com is, but it's clearly not a PostgreSQL server that getent should be resolving. Nor am I sure whether this is a recent Linux issue. The hosts entry in the Name Service Switch configuration file /etc/nsswitch.conf looks like this inside the container:
hosts: files dns myhostname
It is the dns data source that resolved postgres to postgres.mbox.com.
This is why the DB vendor determination logic failed, which eventually caused the container to fail to start. The instructions in the README file do not work as of the day this post is published.
Below are the working commands to start a Keycloak server in docker properly with PostgreSQL as the database.
docker network create keycloak-network
docker run -d --name postgres --net keycloak-network -e POSTGRES_DB=keycloak -e POSTGRES_USER=keycloak -e POSTGRES_PASSWORD=password postgres
docker run --name docker-keycloak-postgres --net keycloak-network -e DB_USER=keycloak -e DB_PASSWORD=password jboss/keycloak
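If you also want to reach Keycloak from the host, a port mapping can be added to the second command (the -p 8080:8080 below is my addition, not part of the README commands):
docker run --name docker-keycloak-postgres --net keycloak-network -p 8080:8080 -e DB_USER=keycloak -e DB_PASSWORD=password jboss/keycloak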
I ran into the same issue. As it turned out, the key to the solution was the missing DB_USER=keycloak parameter.
The application tried to authenticate against the database using the username ''. This was indicated by the first error message:
WFLYCTL0113: '' is an invalid value for parameter user-name
Possibly the 4.x and 5.0.0 versions set the default username to "keycloak", which was no longer the case in 6.0.0.
After adding the parameter DB_USER=keycloak to the list of environment variables, Keycloak started up without any problems.
The problem no longer occurs now. I am voting to close the question.
I've also made an interesting observation on this issue, even in version 7.0.0. As the author mentions, postgres is selected if the host can resolve it.
$ getent hosts postgres
92.242.140.21
What I've noticed is that if I ping anything bizarre, even foobar, it resolves to that same IP address. Example:
$ ping foobar
PING foobar (92.242.140.21): 56 data bytes
It seems that my ISP redirects every unresolved name to a common endpoint. I solved the problem by using -e DB_VENDOR=h2 to select the H2 DB, and then had no issues. Alternatively, you can always spin up your own PostgreSQL instance, or point to a legitimate endpoint (not something fake provided by your ISP for DNS error handling).
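A quick way to check for this kind of resolver wildcarding (same idea as the ping test above; the bogus hostname is arbitrary):
getent hosts postgres
getent hosts surely-this-does-not-exist
# if both print the same IP, the resolver is redirecting unknown names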
I'm trying to implement centralised logging for a number of micro-service docker containers.
To achieve this, I'm attempting to use the recommended syslog logging driver approach to deliver logs to Loggly.
https://www.loggly.com/docs/docker-logging-driver/
I've done the following...
On the remote docker-machine...
$ curl -O https://www.loggly.com/install/configure-linux.sh
$ sudo bash configure-linux.sh -a SUBDOMAIN -u USERNAME
It verified that everything worked correctly, and I can see that the host events are now going through to the loggly console.
I then configured the services in docker-compose, like so...
nginx_proxy:
  build: nginx_proxy
  logging:
    driver: "syslog"
    options:
      tag: "{{.ImageName}}/{{.Name}}/{{.ID}}"
I then rebuilt and re-launched the containers, with...
$ docker-compose up --build -d
However I'm not getting any logs from the containers going to loggly.
I can verify that the syslog driver update has taken effect by doing...
$ docker-compose logs nginx_proxy
This reports...
nginx_proxy_1 | WARNING: no logs are available with the 'syslog' log driver
Which is what I would expect to see, as this log driver doesn't work for viewing logs locally.
Is there something else I need to do to get this working correctly?
Can you share the Dockerfile in the nginx_proxy directory? Did you confirm that it is generating logs?
To test, can you swap out nginx for a basic ubuntu container that echoes something, like they show in the Loggly documentation: https://www.loggly.com/docs/docker-logging-driver/
Run:
sudo docker run -d --log-driver=syslog --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" ubuntu echo "Test Log"
Check:
$ tail /var/log/syslog
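If the basic test above does reach /var/log/syslog but the compose services still don't arrive in Loggly, one thing worth trying (an assumption on my part, not verified against this setup) is pointing the driver explicitly at the host syslog daemon that configure-linux.sh configured:
nginx_proxy:
  build: nginx_proxy
  logging:
    driver: "syslog"
    options:
      # assumption: host rsyslog listens on UDP 514; adjust to your setup
      syslog-address: "udp://127.0.0.1:514"
      tag: "{{.ImageName}}/{{.Name}}/{{.ID}}"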
One of the annoying things about running MongoDB with docker-compose is that if you stop your docker-machine without first stopping the compose containers (e.g. if you're running a local dev copy and reboot your laptop...), Mongo gets into a bad state.
When you try to start mongodb again, it fails:
$ docker-compose logs mongodb
Attaching to dockerenvironment_mongodb_1
mongodb_1 | about to fork child process, waiting until server is ready for connections.
mongodb_1 | forked process: 12
mongodb_1 | ERROR: child process failed, exited with error number 100
The general advice for this seems to be 'delete the lock file'.
However, I can't do that: the container has already stopped, so I can't exec into it.
If I do this (i.e. start a fresh container):
$ docker-compose run mongodb
root@a65901f9fc3d:/# ls /data/db
root@a65901f9fc3d:/#
...there's no lock file to delete.
I have also tried:
$ docker rm -v dockerenvironment_mongodb_1
but when I start it, it fails again with Exit 100.
I don't know what else to try; can anyone help?
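One approach worth sketching (assuming the compose service keeps /data/db in a volume; <volume-name> below is a placeholder to fill in): mount the stopped container's data volume into a throwaway container and delete the lock file from there.
# find which volume backs /data/db in the stopped container
docker inspect dockerenvironment_mongodb_1 --format '{{range .Mounts}}{{.Name}} {{.Destination}}{{"\n"}}{{end}}'
# mount that volume into a disposable container and remove MongoDB's lock file
docker run --rm -v <volume-name>:/data/db ubuntu rm -f /data/db/mongod.lock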