My VM runs Ubuntu 18.04, and I have installed two instances of MySQL 5.7.
The first is installed directly on the host (port 3306); the second is installed via docker-compose (port 57306). The .cnf configuration files (taken from the default install of the host version) are identical for both.
The Docker version only reaches about 1,700 transactions per second, versus about 4,100 for the host version.
My question is: why is the Docker version so much slower when the configs are identical? What could be the problem?
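To make sure "the configs are all the same" holds for the running servers and not just for the files on disk, one quick check is to diff the effective server variables of both instances. A minimal sketch, assuming the ports above and the same root credentials:
# dump effective variables from the host instance (port 3306)
mysql -h127.0.0.1 -P3306 -uroot -p<pass> -e "SHOW GLOBAL VARIABLES" > host_vars.txt
# dump effective variables from the Docker instance (port 57306)
mysql -h127.0.0.1 -P57306 -uroot -p<pass> -e "SHOW GLOBAL VARIABLES" > docker_vars.txt
# any differing buffer pool size, log settings, etc. will show up here
diff host_vars.txt docker_vars.txt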
I tested the host instance (port 3306) using sysbench with these commands:
# prepare
sysbench --table_size=1000000 --db-driver=mysql --mysql-db=sysbench --mysql-user=root --mysql-password=<pass> /usr/share/sysbench/oltp_read_only.lua prepare
# run
sysbench --table_size=1000000 --db-driver=mysql --mysql-db=sysbench --mysql-user=root --mysql-password=<pass> --time=60 --max-requests=0 --threads=8 /usr/share/sysbench/oltp_read_only.lua run
and I got about 4,100 transactions per second:
SQL statistics:
    queries performed:
        read:    3444602
        write:   0
        other:   492086
        total:   3936688
    transactions:    246043 (4100.43 per sec.)
    queries:         3936688 (65606.81 per sec.)
    ignored errors:  0 (0.00 per sec.)
    reconnects:      0 (0.00 per sec.)

General statistics:
    total time:              60.0025s
    total number of events:  246043

Latency (ms):
    min:              0.88
    avg:              1.95
    max:              19.27
    95th percentile:  3.13
    sum:              479541.05

Threads fairness:
    events (avg/stddev):          30755.3750/971.95
    execution time (avg/stddev):  59.9426/0.00
Next, I deployed the Docker version on port 57306 using this docker-compose.yml file:
version: "3"
services:
mysql57:
build: ./bin/mysql57
container_name: 'mysql-5.7'
restart: 'unless-stopped'
ports:
- "57306:3306"
volumes:
- ./config/mysql57:/etc/mysql
- ./data/mysql57:/var/lib/mysql
- ./logs/mysql57:/var/log/mysql
environment:
MYSQL_ROOT_PASSWORD: <pass>
entrypoint: ""
command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld"
The folder ./config/mysql57 is a copy of the host configuration (/etc/mysql).
Then I used these sysbench commands:
#prepare
sysbench --table_size=1000000 --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=57306 --mysql-db=sysbench --mysql-user=root --mysql-password=<pass> /usr/share/sysbench/oltp_read_only.lua prepare
#run
sysbench --table_size=1000000 --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=57306 --mysql-db=sysbench --mysql-user=root --mysql-password=<pass> --time=60 --max-requests=0 --threads=8 /usr/share/sysbench/oltp_read_only.lua run
I expected the transaction rate to be close to the 4,100 per second above, but here it is only about 1,700:
SQL statistics:
    queries performed:
        read:    1431402
        write:   0
        other:   204486
        total:   1635888
    transactions:    102243 (1703.87 per sec.)
    queries:         1635888 (27261.87 per sec.)
    ignored errors:  0 (0.00 per sec.)
    reconnects:      0 (0.00 per sec.)

General statistics:
    total time:              60.0047s
    total number of events:  102243

Latency (ms):
    min:              3.01
    avg:              4.69
    max:              14.64
    95th percentile:  5.99
    sum:              479794.64

Threads fairness:
    events (avg/stddev):          12780.3750/11.56
    execution time (avg/stddev):  59.9743/0.00
Given the low read TPS in both of your tests, it sounds like the CPU count is low. How many cores do you have? Do you sit at 100% CPU while the test runs? If so, this is an unrealistic and unfair benchmark.
You also appear to be running the sysbench client on the same machine, which takes even more CPU. Why would the dockerized version perform worse? Port forwarding could be one cause. More importantly, you could be connecting over a Unix domain socket on the host without knowing it.
In terms of CPU cost, fastest to slowest:
Unix domain socket > TCP > TCP over Docker NAT
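One quick way to check: the host-side run above did not pass --mysql-host, so the client very likely connected over the Unix socket. Forcing the host benchmark onto TCP makes the comparison fairer. A sketch, reusing the options from above:
# rerun the host benchmark over TCP instead of the default socket
sysbench --table_size=1000000 --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=sysbench --mysql-user=root --mysql-password=<pass> --time=60 --max-requests=0 --threads=8 /usr/share/sysbench/oltp_read_only.lua run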
Run the container with host networking, something like:
docker run --rm -it --network host -e MYSQL_ROOT_PASSWORD=bob123 mysql:5.7 --port 5306
"--network host" says the container sees the same network as the host. no software emulated virtual network involved.
remove the CPU bottleneck and run the tests again.
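To confirm whether CPU really is the bottleneck, check the core count and watch per-core utilization while sysbench is running. A minimal sketch (mpstat comes from the sysstat package and may need installing):
# cores visible to the VM
nproc
# per-core utilization, refreshed every second, while the benchmark runs
mpstat -P ALL 1
If every core sits near 100% during the run, the client and the server are competing for CPU and the comparison between the two MySQL instances tells you very little.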
Related
I'm trying to run Metricbeat in a Docker container to monitor a server's CPU/RAM usage and load in Kibana, but when I run sudo docker-compose up I get the following error:
metricbeat | 2021-07-28T05:02:22.033Z ERROR cfgfile/glob_watcher.go:66 Error getting stats for file: /usr/share/metricbeat/modules.d/system.yml
Also, Kibana doesn't seem to show the metrics, although the container's log in the terminal looks fine.
These configurations are running on other servers and work just fine, but I can't figure out the problem here. I have also run sudo chown -R 1000:1000 configs/ and sudo chmod -R go-w configs/ in my directory.
This is the system.yml file:
- module: system
  metricsets:
    - cpu              # CPU usage
    - load             # CPU load averages
    - memory           # Memory usage
    - network          # Network IO
    - process          # Per process metrics
    - process_summary  # Process summary
    - uptime           # System Uptime
    #- socket_summary  # Socket summary
    - core             # Per CPU core usage
    - diskio           # Disk IO
    - filesystem       # File system usage for each mountpoint
    - fsstat           # File system summary metrics
    #- raid             # Raid
    #- socket           # Sockets and connection info (linux only)
    #- service          # systemd service information
  enabled: true
  period: 10s
  processes: ['.*']

  # Configure the mount point of the host's filesystem for use in monitoring a host from within a container
  system.hostfs: "/hostfs"

  # Configure the metric types that are included by these metricsets.
  cpu.metrics: ["percentages", "normalized_percentages"]  # The other available option is ticks.
  core.metrics: ["percentages"]  # The other available option is ticks.
And this is the docker-compose.yml:
services:
  metricbeat:
    image: ${METRICBEAT_IMAGE}
    container_name: metricbeat
    network_mode: host
    environment:
      - ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS}
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD}
    volumes:
      - ./configs/metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml:ro
      - ./configs/modules.d:/usr/share/metricbeat/modules.d:ro
      # system module
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /:/hostfs:ro
I'd appreciate any help, as this has been bugging me for a while. Thanks in advance.
I had the same error.
I found that the permissions on modules.d were:
drw-r--r-x 2 root root 4096 Dec 2 15:17 modules.d
So I execute:
chmod g+X -R modules.d
and restarted filebeat. Bingo.
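In the setup above, modules.d is bind-mounted read-only from ./configs/modules.d, so the permissions that matter are the host-side ones. A minimal sketch for checking and loosening them (paths taken from the compose file above):
# inspect current ownership and permissions
ls -ld configs/modules.d
ls -l configs/modules.d/system.yml
# give everyone read access to the files and search access to the directory
sudo chmod -R a+rX configs/modules.d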
I have PostgreSQL running in a Docker container (Docker 17.09.0-ce-mac35 on OS X 10.11.6) and I'm inserting data from a Python application on the host. After a while I consistently get the following error in Python while there is still plenty of disk space available on the host:
psycopg2.OperationalError: could not extend file "base/16385/24599.49": wrote only 4096 of 8192 bytes at block 6543502
HINT: Check free disk space.
This is my docker-compose.yml:
version: "2"
services:
rabbitmq:
container_name: rabbitmq
build: ../messaging/
ports:
- "4369:4369"
- "5672:5672"
- "25672:25672"
- "15672:15672"
- "5671:5671"
database:
container_name: database
build: ../database/
ports:
- "5432:5432"
The database Dockerfile looks like this:
FROM ubuntu:17.04
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ zesty-pgdg main" > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && apt-get install -y --allow-unauthenticated python-software-properties software-properties-common postgresql-10 postgresql-client-10 postgresql-contrib-10
USER postgres
RUN /etc/init.d/postgresql start &&\
    psql --command "CREATE USER ****** WITH SUPERUSER PASSWORD '******';" &&\
    createdb -O ****** ******
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/10/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/10/main/postgresql.conf
EXPOSE 5432
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
CMD ["/usr/lib/postgresql/10/bin/postgres", "-D", "/var/lib/postgresql/10/main", "-c", "config_file=/etc/postgresql/10/main/postgresql.conf"]
df -k output:
Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk2 1088358016 414085004 674017012 39% 103585249 168504253 38% /
devfs 190 190 0 100% 658 0 100% /dev
map -hosts 0 0 0 100% 0 0 100% /net
map auto_home 0 0 0 100% 0 0 100% /home
Update 1:
It seems like the container has now shut down. I'll start over and try to df -k in the container before it shuts down.
2017-11-14 14:48:25.117 UTC [18] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2017-11-14 14:48:25.120 UTC [17] WARNING: terminating connection because of crash of another server process
2017-11-14 14:48:25.120 UTC [17] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2017-11-14 14:48:25.120 UTC [17] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2017-11-14 14:48:25.132 UTC [1] LOG: all server processes terminated; reinitializing
2017-11-14 14:48:25.175 UTC [1] FATAL: could not access status of transaction 0
2017-11-14 14:48:25.175 UTC [1] DETAIL: Could not write to file "pg_notify/0000" at offset 0: No space left on device.
2017-11-14 14:48:25.181 UTC [1] LOG: database system is shut down
Update 2:
This is df -k in the container; /dev/vda2 seems to be filling up quickly:
$ docker exec -it database df -k
Filesystem 1K-blocks Used Available Use% Mounted on
none 61890340 15022448 43700968 26% /
tmpfs 65536 0 65536 0% /dev
tmpfs 1023516 0 1023516 0% /sys/fs/cgroup
/dev/vda2 61890340 15022448 43700968 26% /etc/postgresql
shm 65536 8 65528 1% /dev/shm
tmpfs 1023516 0 1023516 0% /sys/firmware
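A quick way to keep an eye on this while the Python application is inserting, a sketch using the container name from the compose file above:
# refresh the data-directory usage inside the container every 5 seconds
watch -n 5 "docker exec database df -k /var/lib/postgresql"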
Update 3:
This seems to be related to the ~64 GB file size limit on Docker.qcow2. Solved using qemu and gparted as follows:
cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
qemu-img info Docker.qcow2
qemu-img resize Docker.qcow2 +200G
qemu-img info Docker.qcow2
qemu-system-x86_64 -drive file=Docker.qcow2 -m 512 -cdrom ~/Downloads/gparted-live-0.30.0-1-i686.iso -boot d -device usb-mouse -usb
I have this Docker command:
docker run -d mongo
This will pull (if needed) and run a MongoDB server in a Docker container.
However, I get an error:
no space left on device
I am on macOS, and I'm using a newer version of Docker that uses HyperKit instead of VirtualBox (I think that's correct).
Here is the exact error message from the mongo container:
$ docker logs efee16702c5756659d563b98d4ae0f58ecf1f1bba8a54f63443c0ae4b520ab4e
about to fork child process, waiting until server is ready for connections.
forked process: 21
2017-05-04T20:23:51.412+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2017-05-04T20:23:51.430+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/tmp.Lo035QkbfL: No space left on device
ERROR: child process failed, exited with error number 1
Any idea how to fix this and prevent it from happening in future?
As suggested, the output of df -h is:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 465Gi 116Gi 349Gi 25% 1963838 4293003441 0% /
devfs 183Ki 183Ki 0Bi 100% 634 0 100% /dev
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
Output of docker info is:
$ docker info
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 741
Server Version: 17.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.13-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952 GiB
Name: moby
ID: OR4L:WYWW:FFAP:IDX3:B6UK:O2AN:UVTO:EPH6:GYSV:4GV4:L5WP:BQTH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 17
Goroutines: 30
System Time: 2017-05-04T20:45:27.056157913Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
As you state in the comments to the question, ls -altrh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 returns the following:
-rw-r--r--# 1 alexamil staff 53G
This is a known bug on macOS (and actually not only there); an official dev comment can be found here. One caveat: I've read that different people hit different size limits. In the comment it is 64 GB, but for another person it was 20 GB.
There are a couple of workarounds, but no definitive solution that I could find.
The manual one
Run docker ps -a and manually remove all unused containers. Then run docker images and manually remove all the intermediate and unused images.
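A minimal sketch of that cleanup (careful: this removes every stopped container and all dangling images):
# remove all exited containers
docker rm $(docker ps -aq -f status=exited)
# remove dangling (untagged, intermediate) images
docker rmi $(docker images -qf dangling=true)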
The simplest one
Delete the Docker.qcow2 file entirely. But you will lose all images and containers. Completely.
The less simple
Another way is to run docker volume prune, which will remove all unused volumes.
The resizing one (keeps the data)
Another idea that comes to me is to expand the disk image size with QEMU or something like it:
$ brew install qemu
$ /Applications/Docker.app/Contents/MacOS/qemu-img resize ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 +5G
After you expanded the image, you will need to run a VM in which you should run GParted against Docker.qcow2 and expand the partition to use added space. You could use GParted Live ISO for that:
$ qemu-system-x86_64 -drive file=Docker.qcow2 -m 512 -cdrom ~/Downloads/gparted-live.iso -boot d -device usb-mouse -usb
Some people report this either doesn't work or doesn't help.
Yet another resizing one (wipes the data)
Create a substitute image with desired size (120G):
$ qemu-img create -f qcow2 ~/data.qcow2 120G
$ cp ~/data.qcow2 /Application/Docker.app/Contents/Resources/moby/data.qcow2
$ rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
data.qcow2 is copied to ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 when you restart docker.
This workaround comes from this comment.
Hope this helps. Good luck!
I'm trying to get SonarQube stood up and scanning applications via Docker containers on an EC2 instance. I've spent the past day poring over SonarQube and Postgres documentation and am having very little luck.
The most sensible guide I've found is the docker-sonarqube project maintained by SonarSource. More specifically, I am following the SonarQube/Postgres guide using docker-compose.
My docker-compose.yml file looks identical to the one provided by SonarSource:
sonarqube:
  build: "5.2"
  ports:
    - "9000:9000"
  links:
    - db
  environment:
    - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
  volumes_from:
    - plugins

db:
  image: postgres
  volumes_from:
    - datadb
  environment:
    - POSTGRES_USER=sonar
    - POSTGRES_PASSWORD=sonar

datadb:
  image: postgres
  volumes:
    - /var/lib/postgresql
  command: /bin/true

plugins:
  build: "5.2"
  volumes:
    - /opt/sonarqube/extensions
    - /opt/sonarqube/lib/bundled-plugins
  command: /bin/true
docker ps -a yields:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d003aef18f2 dockersonarqube_sonarqube "./bin/run.sh" 47 seconds ago Up 46 seconds 0.0.0.0:9000->9000/tcp dockersonarqube_sonarqube_1
c7d5043f4381 dockersonarqube_plugins "./bin/run.sh /bin/tr" 48 seconds ago Exited (0) 46 seconds ago dockersonarqube_plugins_1
590c72b4a723 postgres "/docker-entrypoint.s" 48 seconds ago Up 47 seconds 5432/tcp dockersonarqube_db_1
c105e6aebe09 postgres "/docker-entrypoint.s" 49 seconds ago Exited (0) 48 seconds ago dockersonarqube_datadb_1
Latest output from the sonarqube_1 container is:
sonarqube_1 | 2016.01.20 17:49:09 INFO web[o.s.s.a.TomcatAccessLog] Web server is started
sonarqube_1 | 2016.01.20 17:49:09 INFO web[o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
sonarqube_1 | 2016.01.20 17:49:09 INFO app[o.s.p.m.Monitor] Process[web] is up
What does concern me is the latest output from the db_1 container:
PostgreSQL init process complete; ready for start up.
LOG: database system was shut down at 2016-01-20 17:48:40 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
ERROR: relation "schema_migrations" does not exist at character 21
STATEMENT: select version from schema_migrations
ERROR: relation "schema_migrations" does not exist at character 21
STATEMENT: select version from schema_migrations
ERROR: relation "schema_migrations" does not exist at character 21
STATEMENT: select version from schema_migrations
ERROR: relation "schema_info" does not exist at character 15
STATEMENT: SELECT * FROM "schema_info" LIMIT 1
Navigating to http://my.instance.ip:9000 is unsuccessful. I am able to hit the respective ports of other running containers from the same machine.
Could anyone help to point me in the right direction? Any other guides or documentation that may serve me better? I also see issues with the documentation stating that analyzing a project begins with mvn sonar:sonar, but I'll defer that for now. Thank you very much in advance!
Use this image.
I modified this image to talk to an RDS instance:
EC2 (docker-sonar) <==> RDS Postgres
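For reference, pointing the SonarQube container at an external Postgres such as RDS generally just means overriding the JDBC settings instead of linking a db container. A hedged sketch, assuming the image supports the usual SONARQUBE_JDBC_* variables; the endpoint, credentials, and image tag here are placeholders:
docker run -d --name sonarqube -p 9000:9000 -e SONARQUBE_JDBC_URL=jdbc:postgresql://my-rds-endpoint.example.com:5432/sonar -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar sonarqube:5.2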
I am trying to debug some shared memory issues with Postgres 9.3.1 and CentOS release 6.3 (Final). Using top, I can see that many of the postgres connections are using shared memory:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3534 postgres 20 0 2330m 1.4g 1.1g S 0.0 20.4 1:06.99 postgres: deploy mtalcott 10.222.154.172(53495) idle
9143 postgres 20 0 2221m 1.1g 983m S 0.0 16.9 0:14.75 postgres: deploy mtalcott 10.222.154.167(35811) idle
6026 postgres 20 0 2341m 1.1g 864m S 0.0 16.4 0:46.56 postgres: deploy mtalcott 10.222.154.167(37110) idle
18538 postgres 20 0 2327m 1.1g 865m S 0.0 16.1 2:06.59 postgres: deploy mtalcott 10.222.154.172(47796) idle
1575 postgres 20 0 2358m 1.1g 858m S 0.0 15.9 1:41.76 postgres: deploy mtalcott 10.222.154.172(52560) idle
...
There are about 29 total idle connections. However, sudo ipcs -m only shows:
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x0052e2c1 163840 postgres 600 48 21
Surprisingly, it only shows it using 48 bytes. Why doesn't ipcs show a larger segment? Is there a different command I should be using?
I think it is because your Postgres is version 9.3, which uses POSIX-style shared memory, while ipcs -m shows System V shared memory segments, which is what earlier Postgres versions used.
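POSIX/anonymous shared memory does not appear in ipcs output at all; only a small System V stub segment remains (the 48 bytes you see). One way to see the real allocation, a sketch using the first backend PID from the top output above:
# memory map of one idle backend; the large shared anonymous mapping
# corresponds roughly to shared_buffers
sudo pmap -x 3534 | less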