For some reason, I cannot get HandlerSocket to start listening when I start MariaDB (version
10.0.14). I am using CentOS 6.5.
my.cnf has the following settings:
handlersocket_port = 9998
handlersocket_port_wr = 9999
handlersocket_address = 127.0.0.1
Calling "SHOW GLOBAL VARIABLES LIKE 'handlersocket%'" from the mariaDb prompt shows:
+-------------------------------+-----------+
| Variable_name | Value |
+-------------------------------+-----------+
| handlersocket_accept_balance | 0 |
| handlersocket_address | 127.0.0.1 |
| handlersocket_backlog | 32768 |
| handlersocket_epoll | 1 |
| handlersocket_plain_secret | |
| handlersocket_plain_secret_wr | |
| handlersocket_port | 9998 |
| handlersocket_port_wr | 9999 |
| handlersocket_rcvbuf | 0 |
| handlersocket_readsize | 0 |
| handlersocket_sndbuf | 0 |
| handlersocket_threads | 16 |
| handlersocket_threads_wr | 1 |
| handlersocket_timeout | 300 |
| handlersocket_verbose | 10 |
| handlersocket_wrlock_timeout | 12 |
+-------------------------------+-----------+
I can start MariaDB successfully, but when I check to see which ports are actively listening,
neither 9998 nor 9999 shows up. I've checked the mysqld.log file, but no errors appear there.
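For what it's worth, the port check looks roughly like this (net-tools netstat, as shipped with CentOS 6):
netstat -lnt | grep -E ':(9998|9999)'    # produces no output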
Answering my own question here:
SELinux needed to be set to permissive mode to get HandlerSocket started.
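For the record, a sketch of the fix (the semanage lines are an untested alternative that keeps SELinux enforcing; they assume the policycoreutils-python package is installed):
setenforce 0    # permissive until the next reboot
# To make it permanent, set SELINUX=permissive in /etc/selinux/config.

# Alternative: stay enforcing, but allow mysqld to bind the HandlerSocket
# ports configured in my.cnf:
semanage port -a -t mysqld_port_t -p tcp 9998
semanage port -a -t mysqld_port_t -p tcp 9999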
We have had an incredibly long-running autovacuum process on one of our smaller database machines that we believe has been using a lot of Aurora:StorageIOUsage.
We determined this by running SELECT * FROM pg_stat_activity WHERE wait_event_type = 'IO';
and repeatedly seeing results like the ones below.
datid | datname | pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | query | backend_type
--------+----------------------------+-------+----------+-----------+------------------+----------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+--------------+--------+-------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------
398954 | postgres | 17582 | | | | | | | 2022-09-29 18:45:55.364654+00 | 2022-09-29 18:46:20.253107+00 | 2022-09-29 18:46:20.253107+00 | 2022-09-29 18:46:20.253108+00 | IO | DataFileRead | active | | 66020718 | autovacuum: VACUUM pg_catalog.pg_depend | autovacuum worker
398954 | postgres | 17846 | | | | | | | 2022-09-29 18:46:04.092536+00 | 2022-09-29 18:46:29.196309+00 | 2022-09-29 18:46:29.196309+00 | 2022-09-29 18:46:29.19631+00 | IO | DataFileRead | active | | 66020732 | autovacuum: VACUUM pg_toast.pg_toast_2618 | autovacuum worker
As you can see from the screenshot, it has been going on for well over a month, and it is mainly for the pg_depend, pg_attribute, and pg_toast_2618 tables, which are not all that large. I haven't been able to find any reason why these tables would need so much vacuuming, other than maybe a database restore from our production environment (this is one of our lower environments). Here are the pg_stat_sys_tables entries for those tables, plus pg_rewrite, which is the table that pg_toast_2618 is associated with:
relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze | vacuum_count | autovacuum_count | analyze_count | autoanalyze_count
-------+------------+---------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+-------------+-------------------------------+--------------+-------------------------------+--------------+------------------+---------------+-------------------
1249 | pg_catalog | pg_attribute | 185251 | 12594432 | 31892996 | 119366792 | 1102817 | 3792 | 1065737 | 1281 | 543392 | 1069529 | 23584 | | 2022-09-29 18:53:25.227334+00 | | 2022-09-28 01:12:47.628499+00 | 0 | 1266763 | 0 | 36
2608 | pg_catalog | pg_depend | 2429 | 369003445 | 14152628 | 23494712 | 7226948 | 0 | 7176855 | 0 | 476267 | 7176855 | 0 | | 2022-09-29 18:52:34.523257+00 | | 2022-09-28 02:02:52.232822+00 | 0 | 950137 | 0 | 71
2618 | pg_catalog | pg_rewrite | 25 | 155083 | 1785288 | 1569100 | 64127 | 314543 | 62472 | 59970 | 7086 | 377015 | 13869 | | 2022-09-29 18:53:11.288732+00 | | 2022-09-23 18:54:50.771969+00 | 0 | 1280018 | 0 | 81
2838 | pg_toast | pg_toast_2618 | 0 | 0 | 1413436 | 3954640 | 828571 | 0 | 825143 | 0 | 15528 | 825143 | 1653714 | | 2022-09-29 18:52:47.242386+00 | | | 0 | 608881 | 0 | 0
I'm pretty new to Postgres, and I'm wondering what could possibly cause this many records to need cleaning up, and why it would take well over a month to accomplish, considering we always have autovacuum enabled. We are running Postgres version 10.17 on a single db.t3.medium, and the only thing I can think of at this point is to try increasing the instance size. Do we simply need to increase the database instance size on our Aurora cluster so that more of this can be done in memory? I'm at a bit of a loss for how to reduce this huge sustained spike in Storage IO costs.
Additional information on our autovacuum settings:
=> SELECT * FROM pg_catalog.pg_settings WHERE name LIKE '%autovacuum%';
name | setting | unit | category | short_desc | extra_desc | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | sourcefile | sourceline | pending_restart
-------------------------------------+-----------+------+-------------------------------------+-------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------+------------+---------+--------------------+---------+------------+-----------------------------------------------------------------------------------------+-----------+-----------+-----------------------------------+------------+-----------------
autovacuum | on | | Autovacuum | Starts the autovacuum subprocess. | | sighup | bool | configuration file | | | | on | on | /rdsdbdata/config/postgresql.conf | 78 | f
autovacuum_analyze_scale_factor | 0.05 | | Autovacuum | Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples. | | sighup | real | configuration file | 0 | 100 | | 0.1 | 0.05 | /rdsdbdata/config/postgresql.conf | 55 | f
autovacuum_analyze_threshold | 50 | | Autovacuum | Minimum number of tuple inserts, updates, or deletes prior to analyze. | | sighup | integer | default | 0 | 2147483647 | | 50 | 50 | | | f
autovacuum_freeze_max_age | 200000000 | | Autovacuum | Age at which to autovacuum a table to prevent transaction ID wraparound. | | postmaster | integer | default | 100000 | 2000000000 | | 200000000 | 200000000 | | | f
autovacuum_max_workers | 3 | | Autovacuum | Sets the maximum number of simultaneously running autovacuum worker processes. | | postmaster | integer | configuration file | 1 | 262143 | | 3 | 3 | /rdsdbdata/config/postgresql.conf | 45 | f
autovacuum_multixact_freeze_max_age | 400000000 | | Autovacuum | Multixact age at which to autovacuum a table to prevent multixact wraparound. | | postmaster | integer | default | 10000 | 2000000000 | | 400000000 | 400000000 | | | f
autovacuum_naptime | 5 | s | Autovacuum | Time to sleep between autovacuum runs. | | sighup | integer | configuration file | 1 | 2147483 | | 60 | 5 | /rdsdbdata/config/postgresql.conf | 9 | f
autovacuum_vacuum_cost_delay | 5 | ms | Autovacuum | Vacuum cost delay in milliseconds, for autovacuum. | | sighup | integer | configuration file | -1 | 100 | | 20 | 5 | /rdsdbdata/config/postgresql.conf | 73 | f
autovacuum_vacuum_cost_limit | -1 | | Autovacuum | Vacuum cost amount available before napping, for autovacuum. | | sighup | integer | default | -1 | 10000 | | -1 | -1 | | | f
autovacuum_vacuum_scale_factor | 0.1 | | Autovacuum | Number of tuple updates or deletes prior to vacuum as a fraction of reltuples. | | sighup | real | configuration file | 0 | 100 | | 0.2 | 0.1 | /rdsdbdata/config/postgresql.conf | 22 | f
autovacuum_vacuum_threshold | 50 | | Autovacuum | Minimum number of tuple updates or deletes prior to vacuum. | | sighup | integer | default | 0 | 2147483647 | | 50 | 50 | | | f
autovacuum_work_mem | -1 | kB | Resource Usage / Memory | Sets the maximum memory to be used by each autovacuum worker process. | | sighup | integer | default | -1 | 2147483647 | | -1 | -1 | | | f
log_autovacuum_min_duration | -1 | ms | Reporting and Logging / What to Log | Sets the minimum execution time above which autovacuum actions will be logged. | Zero prints all actions. -1 turns autovacuum logging off. | sighup | integer | default | -1 | 2147483647 | | -1 | -1 | | | f
rds.force_autovacuum_logging_level | disabled | | Customized Options | Emit autovacuum log messages irrespective of other logging configuration. | Each level includes all the levels that follow it.Set to disabled to disable this feature and fall back to using log_min_messages. | sighup | enum | default | | | {debug5,debug4,debug3,debug2,debug1,info,notice,warning,error,log,fatal,panic,disabled} | disabled | disabled | | | f
I would say you have some very long-lived snapshot being held. These tables need to be vacuumed, but the vacuum doesn't accomplish anything, because the dead tuples can't be removed while some old snapshot can still see them. So immediately after being vacuumed, they are still eligible to be vacuumed again, and autovacuum retries every 5 seconds (your autovacuum_naptime), because it has no way to say "don't bother until the snapshot that blocked me from accomplishing anything last time goes away".
Check pg_stat_activity for very old 'idle in transaction' sessions, and check for any prepared transactions.
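A sketch of those two checks from the shell (psql connection flags omitted; pg_stat_activity and pg_prepared_xacts are standard catalogs):
# Sessions sitting idle inside a transaction, oldest first; a very old
# xact_start or backend_xmin is what keeps vacuum from removing dead tuples.
psql -c "SELECT pid, usename, state, xact_start, backend_xmin
         FROM pg_stat_activity
         WHERE state LIKE 'idle in transaction%'
         ORDER BY xact_start;"

# Forgotten prepared transactions pin old snapshots the same way.
psql -c "SELECT * FROM pg_prepared_xacts;"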
Evidently the Windows DOS command "tasklist -v" truncates lines after a certain number of characters.
My Perl program reads in specific command processes to compare against processes stored in my database. I am trying to make sure the expected processes are running.
Unfortunately the script fails, since one of my 50 or so processes is truncated by "tasklist -v".
Is there an alternative command?
Thanks,
Sammy
The following code demonstrates using the tasklist /fo table command as pipe input for further processing.
Tip: help tasklist shows the available options.
use strict;
use warnings;

# Capture name, PID, session name, session number, and memory usage from
# each row of tasklist's table output.
my $regex = qr/^(?<name>.*?)\s+(?<pid>\d+)\s+(?<session_name>\S+)\s+(?<session>\d+)\s+(?<mem>.*)/;

$^ = 'STDOUT_TOP';    # format printed at the top of each page
open my $pipe, 'tasklist /fo table|' or die "cannot run tasklist: $!";
/$regex/ && write for <$pipe>;    # tasklist's header and separator lines don't match and are skipped
close $pipe;
$~ = 'STDOUT_BOTTOM';    # switch formats to print the closing rule
write;
exit 0;

format STDOUT_TOP =
+----------------------------------+------------+----------+---------+-----------+
| Name                             | PID        | SessName | Session | Memory    |
+----------------------------------+------------+----------+---------+-----------+
.
format STDOUT =
| @<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< | @>>>>>>>>> | @<<<<<<< | @>>>>>> | @>>>>>>>> |
$+{name}, $+{pid}, $+{session_name}, $+{session}, $+{mem}
.
format STDOUT_BOTTOM =
+----------------------------------+------------+----------+---------+-----------+
.
Output
+----------------------------------+------------+----------+---------+-----------+
| Name                             | PID        | SessName | Session | Memory    |
+----------------------------------+------------+----------+---------+-----------+
| System Idle Process              |          0 | Services |       0 |       8 K |
| System                           |          4 | Services |       0 |   7,452 K |
| Registry                         |        100 | Services |       0 |  28,664 K |
| smss.exe                         |        412 | Services |       0 |     368 K |
| csrss.exe                        |        552 | Services |       0 |   2,256 K |
| csrss.exe                        |        776 | Console  |       1 |   2,496 K |
| wininit.exe                      |        796 | Services |       0 |   1,420 K |
| winlogon.exe                     |        860 | Console  |       1 |   5,084 K |
| services.exe                     |        940 | Services |       0 |   5,964 K |
..............
| RuntimeBroker.exe                |       7392 | Console  |       1 |   8,604 K |
| dwm.exe                          |       1224 | Console  |       1 |  70,144 K |
| chrome.exe                       |      10580 | Console  |       1 | 103,584 K |
| svchost.exe                      |      12152 | Services |       0 |   7,496 K |
| LockApp.exe                      |       2620 | Console  |       1 |  39,392 K |
| RuntimeBroker.exe                |       3104 | Console  |       1 |  30,508 K |
| chrome.exe                       |        452 | Console  |       1 |  54,088 K |
| svchost.exe                      |       7460 | Services |       0 |   7,408 K |
| svchost.exe                      |       5744 | Services |       0 |  11,540 K |
♀+----------------------------------+------------+----------+---------+-----------+
| Name                             | PID        | SessName | Session | Memory    |
+----------------------------------+------------+----------+---------+-----------+
| WmiPrvSE.exe                     |       6200 | Services |       0 |  10,612 K |
| perl.exe                         |       2520 | Console  |       1 |   8,948 K |
| tasklist.exe                     |       4808 | Console  |       1 |   8,940 K |
+----------------------------------+------------+----------+---------+-----------+
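Note the form feed character (rendered as ♀ above) partway through the output: Perl's format system starts a new page, reprinting the STDOUT_TOP header, every $= lines (60 by default). If you'd rather have one continuous table, assigning a large value to $= ($FORMAT_LINES_PER_PAGE) before the loop suppresses the page breaks.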
I can see that it is possible to add metadata to a Rackspace virtual machine instance.
I want to get a list of running instances, filtered by a particular metatag value.
I can't see how to do so in the documentation, however.
Is it possible?
You should be able to do so using the openstack client... but it depends on which metatag you're interested in.
You can get a list of all servers:
openstack server list
It will print something like:
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| 97606ae9-7f18-4a3c-903a-1583d446119b | trysmallwin | ERROR | |
| cb78b8d5-2f03-4a3f-ab26-f389acbd0b76 | Win-try again | ERROR | public=2607:f298:5:101d:f816:3eff:fe9e:5cd4, 208.113.133.90, 2607:f298:5:101d:f816:3eff:fe36:da45, |
| | | | 208.113.133.93, 2607:f298:5:101d:f816:3eff:fe40:57d5, 208.113.133.95 |
| 040751d1-c4c5-47aa-8dec-1d69a468be1c | hnxhdkwskrvwvdwr | ACTIVE | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
Note the ID of the server and investigate deeper:
openstack server show 040751d1-c4c5-47aa-8dec-1d69a468be1c
+--------------------------------------+------------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | iad-2 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-07-26T17:32:01.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
| config_drive | True |
| created | 2016-07-26T17:31:51Z |
| flavor | gp1.semisonic (50) |
| hostId | e1efd75d1e8f6a7f5bb228a35db13647281996087d39c65af8ce83d9 |
| id | 040751d1-c4c5-47aa-8dec-1d69a468be1c |
| image | Ubuntu-14.04 (03f89ff2-d66e-49f5-ae61-656a006bbbe9) |
| key_name | stef |
| name | hnxhdkwskrvwvdwr |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | d2fb6996496044158cf977c2129c8660 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2016-07-26T17:32:01Z |
| user_id | 5b2ca246f39a425f9a833460bf322603 |
+--------------------------------------+------------------------------------------------------------+
openstack server show -f json <ID> will output the same information in JSON format, which is easier to manipulate programmatically.
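As a rough sketch of how that filtering could be scripted (the env='prod' key/value pair is a made-up example, and the exact formatting of the properties field varies between client versions, so treat this as a starting point):
for id in $(openstack server list -f value -c ID); do
  props=$(openstack server show "$id" -c properties -f value)
  # crude substring match on the metadata blob; adjust the key/value to taste
  case $props in
    *"env='prod'"*) echo "$id" ;;
  esac
done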
HTH
GNU Emacs 24.4.1, org-mode.
Here is an org-mode table:
#+TBLNAME: revenue
| / | < | | < | | < | | | | | | | | | | | |
| Product | Year_SUM | Month_SUM | Platform | Platform_SUM | adwo | AdMob | adChina | adSage | appfigures | appdriver | coco | Domob | Dianru | Limei | guohead | youmi |
| | | | | | | | | | | | | | | | | |
|---------+----------+-----------+----------+------------------+------+-------+---------+--------+------------+-----------+------+-------+--------+-------+---------+-------|
| Jan | | | iOS | #ERROR | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| | | | Android | =vsum($6..$>);NE | | 1 | | 1 | | 1 | | 1 | | 1 | | 1 |
|---------+----------+-----------+----------+------------------+------+-------+---------+--------+------------+-----------+------+-------+--------+-------+---------+-------|
| | | | | | | | | | | | | | | | | |
#+TBLFM: $5=vsum($6..$>);NE
As you can see, the formula $5=vsum($6..$>);NE can't be calculated! Here is the debug info:
Substitution history of formula
Orig: vsum($6..$>)
$xyz-> vsum($6..$>)
#r$c-> vsum($6..$>)
$1-> vsum((0)..$>)
--------^
Error: Expected `)'
But if I replace the formula with $5=vsum($6..$17), it works. I can't figure out what the problem is.
I need some help; I'd appreciate it!
Over SSH, when I run this command
nova diagnostics 2ad0dda0-072d-46c4-8689-3c487a452248
I get all of the instance's resource statistics in DevStack:
+---------------------------+----------------------+
| Property | Value |
+---------------------------+----------------------+
| cpu0_time | 3766640000000 |
| hdd_errors | 18446744073709551615 |
| hdd_read | 111736 |
| hdd_read_req | 73 |
| hdd_write | 0 |
| hdd_write_req | 0 |
| memory | 2097152 |
| memory-actual | 2097152 |
| memory-available | 1922544 |
| memory-major_fault | 2710 |
| memory-minor_fault | 10061504 |
| memory-rss | 509392 |
| memory-swap_in | 0 |
| memory-swap_out | 0 |
| memory-unused | 1079468 |
| tap5a148e0f-b8_rx | 959777 |
| tap5a148e0f-b8_rx_drop | 0 |
| tap5a148e0f-b8_rx_errors | 0 |
| tap5a148e0f-b8_rx_packets | 8758 |
| tap5a148e0f-b8_tx | 48872 |
| tap5a148e0f-b8_tx_drop | 0 |
| tap5a148e0f-b8_tx_errors | 0 |
| tap5a148e0f-b8_tx_packets | 615 |
| vda_errors | 18446744073709551615 |
| vda_read | 597230592 |
| vda_read_req | 31443 |
| vda_write | 164690944 |
| vda_write_req | 18422 |
+---------------------------+----------------------+
How can I get this in the DevStack user interface?
Please help.
Thanks in advance.
It's not available in the OpenStack Icehouse/Juno releases, though Juno can be patched to retrieve it in DevStack.
I didn't use OpenStack Kilo. In Juno, if your hypervisor is libvirt, vSphere, or XenAPI, then you can retrieve these statistics in the DevStack UI. To do that:
For Libvirt
In ceilometer/compute/virt/libvirt/inspector.py, add this:
from oslo.utils import units

from ceilometer.compute.pollsters import util

def inspect_memory_usage(self, instance, duration=None):
    instance_name = util.instance_name(instance)
    domain = self._lookup_by_name(instance_name)
    state = domain.info()[0]
    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'domain is in state of SHUTOFF'),
                 {'instance_name': instance_name})
        return
    try:
        memory_stats = domain.memoryStats()
        if (memory_stats and
                memory_stats.get('available') and
                memory_stats.get('unused')):
            memory_used = (memory_stats.get('available') -
                           memory_stats.get('unused'))
            # Stat provided from libvirt is in KB, converting it to MB.
            memory_used = memory_used / units.Ki
            return virt_inspector.MemoryUsageStats(usage=memory_used)
        else:
            LOG.warn(_('Failed to inspect memory usage of '
                       '%(instance_name)s, can not get info from libvirt'),
                     {'instance_name': instance_name})
    # memoryStats might launch an exception if the method
    # is not supported by the underlying hypervisor being
    # used by libvirt
    except libvirt.libvirtError as e:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'can not get info from libvirt: %(error)s'),
                 {'instance_name': instance_name, 'error': e})
For more details, you can check the following link:
https://review.openstack.org/#/c/90498/
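One practical follow-up, assuming a stock DevStack where services run inside a screen session: after editing the inspector, restart the ceilometer compute agent so the new code is picked up.
screen -x stack    # attach to DevStack's screen session
# switch to the ceilometer-acompute window, stop the agent with Ctrl-C,
# and re-run the command from that window's history to restart it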