Is there a proxy with Couchbase memcached bucket? - memcached

We are using Couchbase 5.1.1, a cluster of 5 VMs, with a memcached bucket. I am trying to understand how a memcached bucket works within a Couchbase cluster.
PHP speaks the memcached protocol directly to the Couchbase servers (no explicit proxy).
But sometimes I see the error SERVER_ERROR proxy downstream timeout, which looks like there is a proxy somewhere?
Test with Docker:
docker run --name cb --rm -ti couchbase:5.1.1
Then inside the container:
couchbase-cli cluster-init --cluster localhost --cluster-username admin --cluster-password totototo --cluster-name poc
couchbase-cli bucket-create --username admin --password totototo --cluster localhost --bucket mem --bucket-type memcached --bucket-ramsize 128 --bucket-port 11212
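As a quick sanity check (a sketch, assuming the same credentials as above), the new bucket should show up in:
couchbase-cli bucket-list --cluster localhost --username admin --password totototo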
I can see a moxi process now:
/opt/couchbase/bin/moxi -B auto \
  -z url=http://127.0.0.1:8091/pools/default/bucketsStreaming/mem \
  -Z port_listen=11212,downstream_max=1024,downstream_conn_max=4,connect_max_errors=5,connect_retry_interval=30000,connect_timeout=400,auth_timeout=100,cycle=200,downstream_conn_queue_timeout=200,downstream_timeout=5000,wait_queue_timeout=200 \
  -p 0 -Y y -O stderr
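A quick way to confirm that port 11212 is really served by moxi is to speak the plain memcached text protocol to it (a sketch, assuming nc is available inside the container):
printf 'version\r\nquit\r\n' | nc localhost 11212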

Actually yes. Before Couchbase 5.5.x:
a moxi process is started as soon as a memcached bucket is created.
From 5.5.x onwards:
there is no moxi proxy any more; you can run moxi yourself or configure the application to distribute data over the cluster.
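For example, on 5.5.x and later you could keep the old behaviour by running moxi yourself next to the application, pointing it at the cluster's streaming URL for the bucket (a sketch based on the command line shown above; the node host name is a placeholder and a standalone moxi build is assumed to be installed):
/opt/couchbase/bin/moxi -B auto \
  -z url=http://<couchbase-node>:8091/pools/default/bucketsStreaming/mem \
  -Z port_listen=11212,downstream_timeout=5000 \
  -p 0 -O stderr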

Related

Connect to remote Kafka from a command within a Docker container

We tried to connect to Kafka using the command below, but we are not able to reach the broker. Could anyone help us here?
sudo docker run --rm --name=jaeger1 -e SPAN_STORAGE_TYPE=kafka -p 14267:14267/tcp -p 14268:14268/tcp -p 9411:9411/tcp jaegertracing/jaeger-collector --kafka.producer.topic="test Span" --kafka.producer.brokers=<broker>:9092 --kafka.producer.plaintext.password="" --kafka.producer.plaintext.username="<username>"
{"level":"fatal","ts":1585232844.0954845,"caller":"collector/main.go:70","msg":"Failed to init storage factory","error":"kafka: client has run out of available brokers to talk to (Is your cluster reachable?)"
Please let us know what we are missing.
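The "client has run out of available brokers" error usually means the metadata request against the bootstrap broker failed. As a first check (a sketch; <broker> stays a placeholder), verify plain TCP reachability from the Docker host, and keep in mind that even when the bootstrap connection succeeds, the broker's advertised.listeners must be resolvable and reachable from inside the container:
nc -vz <broker> 9092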

Creating 2 S3 buckets in LocalStack via a docker-compose file

Currently we’re creating a localstack container using a docker-compose file, specifically for the purpose of using the S3 service.
We've added this line to the environment section, which creates an S3 bucket:
- AMAZONPROPERTIES.BUCKETNAME=bucketname
We’ve then created any additional buckets needed using a utility within our Java code.
However, it would be preferable to create all buckets needed automatically at the outset using our docker-compose file. Is it possible to do this?
Not sure if it's the best way, but it works.
We're now running the docker-compose.yml from a bash script, waiting a short while to ensure that the service is running, and then calling a curl command within the docker container to create another S3 bucket.
#!/bin/bash
docker-compose -f docker-compose.yml up -d --build
echo "Waiting for Services to be ready"
sleep 20
docker exec -it general-files_general-files_1 curl -X POST https://localhost:7777/createBucket -F bucketName=bucketname2 --insecure
echo
echo "S3 buckets available are: "
docker exec -it general-files_general-files_1 curl -X GET https://localhost:7777/listBuckets --insecure
echo
echo "Services are ready for use"

docker-compose syslog driver to loggly not working

I'm trying to implement centralised logging for a number of micro-service docker containers.
To achieve this I'm attempting to use the recommended syslog logging driver approach to deliver logs to Loggly.
https://www.loggly.com/docs/docker-logging-driver/
I've done the following...
On the remote docker-machine...
$ curl -O https://www.loggly.com/install/configure-linux.sh
$ sudo bash configure-linux.sh -a SUBDOMAIN -u USERNAME
It verified that everything worked correctly, and I can see that the host events are now going through to the Loggly console.
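As a quick extra check of the host-level setup (assuming the rsyslog configuration installed by that script), I can emit a test message and look for it in the Loggly console:
$ logger "loggly host test"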
I then configured the services in docker-compose, like so...
nginx_proxy:
  build: nginx_proxy
  logging:
    driver: "syslog"
    options:
      tag: "{{.ImageName}}/{{.Name}}/{{.ID}}"
I then rebuilt and re-launched the containers, with...
$ docker-compose up --build -d
However I'm not getting any logs from the containers going to loggly.
I can verify that the syslog driver update has taken effect by doing...
$ docker-compose logs nginx_proxy
This reports...
nginx_proxy_1 | WARNING: no logs are available with the 'syslog' log driver
Which is what I would expect to see, as this log driver doesn't work for viewing logs locally.
Is there something else I need to do to get this working correctly?
Can you share the Dockerfile in the nginx_proxy directory? Did you confirm that it is generating logs?
To test, can you swap out nginx with a basic ubuntu container that echoes something, like they show in the Loggly documentation: https://www.loggly.com/docs/docker-logging-driver/
Run:
sudo docker run -d --log-driver=syslog --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" ubuntu echo "Test Log"
Check:
$ tail /var/log/syslog
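It can also help to confirm which log driver the compose-created container actually ended up with (a sketch; substitute your container name):
$ docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container-name>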

How can I use REST API to interact with the Docker engine?

We can use the command docker images to list the Docker images we have on the local host.
Now I want to get the same information from a remote server by sending an HTTP GET request in Firefox or Chrome. Does Docker provide some REST API to do this?
I did a lot of searching. For example:
Examples using the Docker Engine SDKs and Docker API
It provides a way something like this:
curl --unix-socket /var/run/docker.sock http:/v1.24/containers/json
I know a little about Unix sockets, and I don't think this is what I want. The URL (http:/v1.24/containers/json) is so weird and doesn't even have a server name in it. I don't think it can work against a remote server. (It does work on a local server.)
Is there any official documentation that Docker provides on this topic?
You need to expose the Docker daemon on a port.
You can configure the Docker daemon to listen to multiple sockets at the same time using multiple -H options:
For example, to listen on the default Unix socket and on two specific IP addresses on this host:
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2
The Docker client will honor the DOCKER_HOST environment variable to set the -H flag for the client. Use one of the following commands:
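For example (placeholder host, default unencrypted port 2375):
$ docker -H tcp://<host>:2375 ps
or
$ export DOCKER_HOST="tcp://<host>:2375"
$ docker ps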
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
You need to do this by creating a systemd drop-in:
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/10_docker.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2376
EOF
Then reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
Note: this way you would be exposing your host and you shouldn't do it this way in production. Please read more about this on the link I shared earlier.
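Once the daemon listens on TCP as configured above, the REST endpoints become reachable over plain HTTP (a sketch, assuming the unsecured setup shown here; /images/json is the Engine API endpoint behind docker images, /containers/json the one behind docker ps):
curl http://<docker-host>:2376/v1.24/images/json
curl http://<docker-host>:2376/v1.24/containers/json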

Docker container not starting when connecting to external PostgreSQL

I have a Docker container with Redmine in it and PostgreSQL 9.5 running on my host machine, and I want to connect my Redmine container to my PostgreSQL. I followed these steps: https://github.com/sameersbn/docker-redmine, to connect my container to an external PostgreSQL.
Assuming my host machine has the IP 192.168.100.6, I ran this command:
docker run --name redmine -it -d \
--publish=10083:80 \
--env='REDMINE_PORT=10083' \
--env='DB_ADAPTER=postgresql' \
--env='DB_HOST=192.168.100.6' \
--env='DB_NAME=redmine_production' \
--env='DB_USER=redmine' --env='DB_PASS=password' \
redmine-docker
The container runs for about a minute but then suddenly stops, even before it starts nginx and Redmine. I need help with this configuration. Thank you.
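As a first diagnostic sketch (container name as in the run command above; the psql check assumes the PostgreSQL client tools are available on another machine), the container logs usually show whether it is the database connection that fails, and you can verify that PostgreSQL accepts connections on the host IP rather than only on localhost:
docker logs redmine
psql -h 192.168.100.6 -U redmine -d redmine_production -c 'select 1'
If the psql check fails from anywhere other than the host itself, PostgreSQL's listen_addresses and pg_hba.conf typically need to be opened up to the Docker bridge network.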