I'm using osquery to monitor servers on my network. The following osquery.conf captures a snapshot, every minute, of the processes communicating over network ports and publishes that data to Kafka:
{
  "options": {
    "logger_kafka_brokers": "cp01.woolford.io:9092,cp02.woolford.io:9092,cp03.woolford.io:9092",
    "logger_kafka_topic": "base_topic",
    "logger_kafka_acks": "1"
  },
  "packs": {
    "system-snapshot": {
      "queries": {
        "processes_by_port": {
          "query": "select u.username, p.pid, p.name, pos.local_address, pos.local_port, pos.remote_address, pos.remote_port from processes p join users u on u.uid = p.uid join process_open_sockets pos on pos.pid=p.pid where pos.remote_port != '0'",
          "interval": 60,
          "snapshot": true
        }
      }
    }
  },
  "kafka_topics": {
    "process-port": [
      "pack_system-snapshot_processes_by_port"
    ]
  }
}
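With this config in place, I can confirm that snapshot results are reaching the per-query topic by tailing it with a console consumer (a quick sketch; the broker and topic names are taken from the config above):

kafka-console-consumer --bootstrap-server cp01.woolford.io:9092 \
    --topic process-port --from-beginning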
Here's an example of the output from the query:
osquery> select u.username, p.pid, p.name, pos.local_address, pos.local_port, pos.remote_address, pos.remote_port from processes p join users u on u.uid = p.uid join process_open_sockets pos on pos.pid=p.pid where pos.remote_port != '0';
+--------------------+-------+---------------+------------------+------------+------------------+-------------+
| username | pid | name | local_address | local_port | remote_address | remote_port |
+--------------------+-------+---------------+------------------+------------+------------------+-------------+
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 49018 | 10.0.1.41 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 49028 | 10.0.1.41 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 49026 | 10.0.1.41 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 50558 | 10.0.1.43 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 50554 | 10.0.1.43 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 49014 | 10.0.1.41 | 9092 |
| root | 1505 | sssd_be | 10.0.1.41 | 46436 | 10.0.1.89 | 389 |
...
| cp-ksql | 1757 | java | 10.0.1.41 | 56180 | 10.0.1.41 | 9092 |
| cp-ksql | 1757 | java | 10.0.1.41 | 53878 | 10.0.1.43 | 9092 |
| root | 19684 | sshd | 10.0.1.41 | 22 | 10.0.1.53 | 50238 |
| root | 24082 | sshd | 10.0.1.41 | 22 | 10.0.1.53 | 51233 |
| root | 24107 | java | 10.0.1.41 | 56052 | 10.0.1.41 | 9092 |
| root | 24107 | java | 10.0.1.41 | 56054 | 10.0.1.41 | 9092 |
| cp-schema-registry | 24694 | java | 10.0.1.41 | 50742 | 10.0.1.31 | 2181 |
| cp-schema-registry | 24694 | java | 10.0.1.41 | 47150 | 10.0.1.42 | 9093 |
| cp-schema-registry | 24694 | java | 10.0.1.41 | 58068 | 10.0.1.41 | 9093 |
| cp-schema-registry | 24694 | java | 10.0.1.41 | 47152 | 10.0.1.42 | 9093 |
| root | 25782 | osqueryd | 10.0.1.41 | 57700 | 10.0.1.43 | 9092 |
| root | 25782 | osqueryd | 10.0.1.41 | 56188 | 10.0.1.41 | 9092 |
+--------------------+-------+---------------+------------------+------------+------------------+-------------+
Instead of snapshots, I'd like osquery to capture differentials, i.e. to publish only the changes to Kafka.
I tried toggling the snapshot property from true to false, expecting osquery to send just the changes. For some reason, when I set "snapshot": false, no data is published to the process-port topic; instead, all the data is routed to the catch-all base_topic.
Can you see what I'm doing wrong?
Update:
I think I'm running into this bug: https://github.com/osquery/osquery/issues/5559
Here's a video walk-through: https://youtu.be/sPdlBBKgJmY
I filed a bug report, with steps to reproduce, in case it's not the same issue: https://github.com/osquery/osquery/issues/5890
Given the context, I can't immediately tell what is causing the issue you're experiencing.
To debug this, I would first try the filesystem logger plugin instead of (or in addition to) the Kafka logger.
Do you get results on the Kafka topic when the query is configured as a snapshot? If so, can you verify that the results are actually changing, such that a diff should be generated when the query runs in differential mode?
Can you see results logged locally when you use --logger_plugin=filesystem,kafka?
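For example, something like this (a sketch; the flags are standard osquery options and the log paths are the Linux defaults):

osqueryd --config_path=/etc/osquery/osquery.conf \
    --logger_plugin=filesystem,kafka --verbose

# differential results are written here by the filesystem logger:
tail -f /var/log/osquery/osqueryd.results.log
# snapshot results go to a separate file:
tail -f /var/log/osquery/osqueryd.snapshots.log

If differential rows show up in osqueryd.results.log but never reach Kafka, that would point at the Kafka logger's topic routing rather than the query itself.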
I am using Ubuntu Linux, with Postgres as the database. Creating a user works fine:
CREATE USER username;
But when I try to create a database, the command never returns:
CREATE DATABASE databasename;
What is happening with my Postgres? Here is pg_stat_activity while the command is stuck:
datid | datname | pid | leader_pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | query_id | query | backend_type
-------+----------+------+------------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+---------------------+--------+-------------+--------------+----------+---------------------------------+------------------------------
| | 8237 | | | | | | | | 2022-02-02 13:00:47.683187+07 | | | | Activity | AutoVacuumMain | | | | | | autovacuum launcher
| | 8239 | | 10 | postgres | | | | | 2022-02-02 13:00:47.70127+07 | | | | Activity | LogicalLauncherMain | | | | | | logical replication launcher
13726 | postgres | 8329 | | 10 | postgres | psql | | | -1 | 2022-02-02 13:08:52.250244+07 | 2022-02-02 13:09:10.651383+07 | 2022-02-02 13:09:10.651383+07 | 2022-02-02 13:09:10.651393+07 | Lock | object | active | | 740 | | CREATE DATABASE kong; | client backend
13726 | postgres | 8313 | | 10 | postgres | psql | | | -1 | 2022-02-02 13:04:57.265085+07 | 2022-02-02 13:10:40.097817+07 | 2022-02-02 13:10:40.097817+07 | 2022-02-02 13:10:40.09782+07 | | | active | | 740 | | SELECT * FROM pg_stat_activity; | client backend
| | 8235 | | | | | | | | 2022-02-02 13:00:47.664058+07 | | | | Activity | BgWriterHibernate | | | | | | background writer
| | 8234 | | | | | | | | 2022-02-02 13:00:47.654713+07 | | | | Activity | CheckpointerMain | | | | | | checkpointer
| | 8236 | | | | | | | | 2022-02-02 13:00:47.673631+07 | | | | Activity | WalWriterMain | | | | | | walwriter
(7 rows)
And here is pg_locks:
locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+------------------+---------+----------+-------------------------------
relation | 13726 | 12290 | | | | | | | | 7/17 | 8313 | AccessShareLock | t | t |
virtualxid | | | | | 7/17 | | | | | 7/17 | 8313 | ExclusiveLock | t | t |
virtualxid | | | | | 3/15 | | | | | 3/15 | 8329 | ExclusiveLock | t | t |
virtualxid | | | | | 6/12 | | | | | 6/12 | 8335 | ExclusiveLock | t | t |
virtualxid | | | | | 5/3 | | | | | 5/3 | 8266 | ExclusiveLock | t | t |
virtualxid | | | | | 4/1 | | | | | 4/1 | 8264 | ExclusiveLock | t | t |
object | 0 | | | | | | 1262 | 1 | 0 | 6/12 | 8335 | RowExclusiveLock | f | f | 2022-02-02 13:09:30.561821+07
object | 0 | | | | | | 1262 | 1 | 0 | 3/15 | 8329 | ShareLock | f | f | 2022-02-02 13:09:10.651571+07
object | 0 | | | | | | 1262 | 1 | 0 | 4/1 | 8264 | RowExclusiveLock | t | f |
relation | 0 | 1262 | | | | | | | | 3/15 | 8329 | AccessShareLock | t | f |
object | 0 | | | | | | 1262 | 1 | 0 | 5/3 | 8266 | RowExclusiveLock | t | f |
(11 rows)
Database info
postgres=# \l
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------+----------+----------+---------+---------+-----------------------+---------+------------+--------------------------------------------
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 | | 8529 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +| 8377 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +| 8529 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
(3 rows)
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
kong | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
postgres=#
Using the same name for the database and the user is not good practice; it can lead to confusing errors.
When you run the command
CREATE DATABASE databaseName;
PostgreSQL creates the database, which may take some time. Once the database has been created, you will receive the message:
CREATE DATABASE
postgres=#
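That said, the pg_stat_activity output above shows the CREATE DATABASE backend (pid 8329) stuck with wait_event_type = Lock, and pg_locks shows its ShareLock on pg_database (classid 1262) not granted while other backends hold locks on the same object. A sketch for finding and, if safe, terminating the blockers (pg_blocking_pids requires PostgreSQL 9.6+; <blocking_pid> is a placeholder):

# show which pids are blocking each waiting session
psql -U postgres -c "SELECT pid, pg_blocking_pids(pid) AS blocked_by, query FROM pg_stat_activity WHERE wait_event_type = 'Lock';"

# terminate a blocker once you have confirmed it is safe to kill
psql -U postgres -c "SELECT pg_terminate_backend(<blocking_pid>);"

Once the blocking sessions are gone, CREATE DATABASE should complete immediately.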
Problem solved by reverting Postgres to its older version (14 was installed; downgrading to 12 fixed it). Thanks to everyone here who helped.
A two-node distributed OrientDB system, embedded mode, using TCP/IP for node discovery. The class event is sharded across four clusters. After restarting one node, exactly half of the inserts on that node fail with the error message:
INFO Local node 'orientdb-lab-node2' is not the owner for cluster 'event_1' (it is 'orientdb-lab-node1'). Reloading distributed configuration for database 'test-db' [ODistributedStorage]
and the stack trace:
com.orientechnologies.orient.server.distributed.ODistributedConfigurationChangedException: Local node 'orientdb-lab-node2' is not the owner for cluster 'event_1' (it is 'orientdb-lab-node1')
DB name="test-db"
DB name="test-db"
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.throwSerializedException(OChannelBinaryAsynchClient.java:437)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.handleStatus(OChannelBinaryAsynchClient.java:388)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:270)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:162)
at com.orientechnologies.orient.client.remote.OStorageRemote.beginResponse(OStorageRemote.java:2138)
at com.orientechnologies.orient.client.remote.OStorageRemote$6.execute(OStorageRemote.java:548)
at com.orientechnologies.orient.client.remote.OStorageRemote$6.execute(OStorageRemote.java:542)
at com.orientechnologies.orient.client.remote.OStorageRemote$1.execute(OStorageRemote.java:164)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:235)
at com.orientechnologies.orient.client.remote.OStorageRemote.asyncNetworkOperation(OStorageRemote.java:156)
at com.orientechnologies.orient.client.remote.OStorageRemote.createRecord(OStorageRemote.java:528)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeSaveRecord(ODatabaseDocumentTx.java:2095)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveNew(OTransactionNoTx.java:246)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveRecord(OTransactionNoTx.java:179)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.save(ODatabaseDocumentTx.java:2597)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.save(ODatabaseDocumentTx.java:103)
at com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1802)
at com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1793)
at lab.orientdb.OrientDbClient.insert(OrientDbClient.java:10)
at lab.orientdb.Main.main(Main.java:24)
This is what the cluster configuration looks like from node1:
Nodes 1 and 2 running, 10 inserts on each node
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 8|orientdb-lab-node2|[orientdb-lab-node1]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 7|orientdb-lab-node2|[orientdb-lab-node1]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 20| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Node 2 stopped
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 8|orientdb-lab-node1|[orientdb-lab-node2]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 7|orientdb-lab-node1|[orientdb-lab-node2]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 20| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Node 2 restarted, 5 successful inserts and 5 failed
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 11|orientdb-lab-node2|[orientdb-lab-node1]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 9|orientdb-lab-node2|[orientdb-lab-node1]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 25| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Any tips or advice appreciated. Thanks.
This issue has been resolved in OrientDB 2.2.13-SNAPSHOT, so it should be fixed in a release version very soon: https://github.com/orientechnologies/orientdb/issues/6897
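Until you can upgrade, it may help to check which node the runtime distributed configuration records as the owner of each cluster after a restart. A sketch, with the caveat that the file location and JSON layout are assumptions that vary by OrientDB version:

# on each node, the per-database distributed config is persisted next to the database files
cat "$ORIENTDB_HOME/databases/test-db/distributed-config.json" | python -m json.tool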
I can see that it is possible to add metadata to a Rackspace virtual machine instance.
I want to get a list of running instances, filtered by a particular metatag value.
I can't see how to do so in the documentation, however. Is it possible?
You should be able to do so using the openstack client... but it depends on which metatag you're interested in.
You can get a list of all servers:
openstack server list
which will output something like:
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| 97606ae9-7f18-4a3c-903a-1583d446119b | trysmallwin | ERROR | |
| cb78b8d5-2f03-4a3f-ab26-f389acbd0b76 | Win-try again | ERROR | public=2607:f298:5:101d:f816:3eff:fe9e:5cd4, 208.113.133.90, 2607:f298:5:101d:f816:3eff:fe36:da45, |
| | | | 208.113.133.93, 2607:f298:5:101d:f816:3eff:fe40:57d5, 208.113.133.95 |
| 040751d1-c4c5-47aa-8dec-1d69a468be1c | hnxhdkwskrvwvdwr | ACTIVE | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
Note the ID of the server and investigate deeper:
openstack server show 040751d1-c4c5-47aa-8dec-1d69a468be1c
+--------------------------------------+------------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | iad-2 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-07-26T17:32:01.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
| config_drive | True |
| created | 2016-07-26T17:31:51Z |
| flavor | gp1.semisonic (50) |
| hostId | e1efd75d1e8f6a7f5bb228a35db13647281996087d39c65af8ce83d9 |
| id | 040751d1-c4c5-47aa-8dec-1d69a468be1c |
| image | Ubuntu-14.04 (03f89ff2-d66e-49f5-ae61-656a006bbbe9) |
| key_name | stef |
| name | hnxhdkwskrvwvdwr |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | d2fb6996496044158cf977c2129c8660 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2016-07-26T17:32:01Z |
| user_id | 5b2ca246f39a425f9a833460bf322603 |
+--------------------------------------+------------------------------------------------------------+
Adding -f json to these commands outputs the same information in JSON format, which you can manipulate programmatically more easily.
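For example, to find servers whose metadata mentions a given key/value pair, you could loop over the IDs and match on the JSON (a sketch; env/prod is a made-up tag, and the match is deliberately loose because different client versions render the properties field differently):

for id in $(openstack server list -f value -c ID); do
    if openstack server show "$id" -f json | grep -Eq "env.{0,4}prod"; then
        echo "$id"
    fi
done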
HTH
Over SSH, when I run this command:
nova diagnostics 2ad0dda0-072d-46c4-8689-3c487a452248
I get all the resource statistics for the instance in DevStack:
+---------------------------+----------------------+
| Property | Value |
+---------------------------+----------------------+
| cpu0_time | 3766640000000 |
| hdd_errors | 18446744073709551615 |
| hdd_read | 111736 |
| hdd_read_req | 73 |
| hdd_write | 0 |
| hdd_write_req | 0 |
| memory | 2097152 |
| memory-actual | 2097152 |
| memory-available | 1922544 |
| memory-major_fault | 2710 |
| memory-minor_fault | 10061504 |
| memory-rss | 509392 |
| memory-swap_in | 0 |
| memory-swap_out | 0 |
| memory-unused | 1079468 |
| tap5a148e0f-b8_rx | 959777 |
| tap5a148e0f-b8_rx_drop | 0 |
| tap5a148e0f-b8_rx_errors | 0 |
| tap5a148e0f-b8_rx_packets | 8758 |
| tap5a148e0f-b8_tx | 48872 |
| tap5a148e0f-b8_tx_drop | 0 |
| tap5a148e0f-b8_tx_errors | 0 |
| tap5a148e0f-b8_tx_packets | 615 |
| vda_errors | 18446744073709551615 |
| vda_read | 597230592 |
| vda_read_req | 31443 |
| vda_write | 164690944 |
| vda_write_req | 18422 |
+---------------------------+----------------------+
How can I get this in the DevStack user interface?
Thanks in advance.
It's not available in the OpenStack Icehouse/Juno releases, though Juno can be patched to retrieve it in DevStack. I haven't used OpenStack Kilo. In Juno, if your hypervisor is libvirt, vSphere, or XenAPI, then you can retrieve these statistics in the DevStack UI. To do that:
For Libvirt
In ceilometer/compute/virt/libvirt/inspector.py, add the following:
from oslo.utils import units

from ceilometer.compute.pollsters import util

# Add this method to the LibvirtInspector class in the same file.
# (libvirt, LOG, _, and virt_inspector are already imported at the top of
# ceilometer/compute/virt/libvirt/inspector.py.)
def inspect_memory_usage(self, instance, duration=None):
    instance_name = util.instance_name(instance)
    domain = self._lookup_by_name(instance_name)
    state = domain.info()[0]

    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'domain is in state of SHUTOFF'),
                 {'instance_name': instance_name})
        return

    try:
        memory_stats = domain.memoryStats()
        if (memory_stats and
                memory_stats.get('available') and
                memory_stats.get('unused')):
            memory_used = (memory_stats.get('available') -
                           memory_stats.get('unused'))
            # Stat provided from libvirt is in KB, converting it to MB.
            memory_used = memory_used / units.Ki
            return virt_inspector.MemoryUsageStats(usage=memory_used)
        else:
            LOG.warn(_('Failed to inspect memory usage of '
                       '%(instance_name)s, can not get info from libvirt'),
                     {'instance_name': instance_name})
    # memoryStats might launch an exception if the method
    # is not supported by the underlying hypervisor being
    # used by libvirt
    except libvirt.libvirtError as e:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'can not get info from libvirt: %(error)s'),
                 {'instance_name': instance_name, 'error': e})
for more details you can check the following link:
https://review.openstack.org/#/c/90498/
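After patching the inspector and restarting the ceilometer compute agent, you can check that samples are flowing with the Juno-era ceilometer CLI (a sketch; memory.usage is the standard meter name, and <instance-uuid> is a placeholder):

ceilometer meter-list | grep memory
ceilometer sample-list -m memory.usage -q resource_id=<instance-uuid>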
For some reason, I cannot get HandlerSocket to start listening when I start MariaDB (version 10.0.14). I am using CentOS 6.5.
my.cnf has the following settings:
handlersocket_port = 9998
handlersocket_port_wr = 9999
handlersocket_address = 127.0.0.1
Calling "SHOW GLOBAL VARIABLES LIKE 'handlersocket%'" from the mariaDb prompt shows:
+-------------------------------+-----------+
| Variable_name | Value |
+-------------------------------+-----------+
| handlersocket_accept_balance | 0 |
| handlersocket_address | 127.0.0.1 |
| handlersocket_backlog | 32768 |
| handlersocket_epoll | 1 |
| handlersocket_plain_secret | |
| handlersocket_plain_secret_wr | |
| handlersocket_port | 9998 |
| handlersocket_port_wr | 9999 |
| handlersocket_rcvbuf | 0 |
| handlersocket_readsize | 0 |
| handlersocket_sndbuf | 0 |
| handlersocket_threads | 16 |
| handlersocket_threads_wr | 1 |
| handlersocket_timeout | 300 |
| handlersocket_verbose | 10 |
| handlersocket_wrlock_timeout | 12 |
+-------------------------------+-----------+
MariaDB starts successfully, but when I check which ports are actively listening, neither 9998 nor 9999 shows up. I've checked the mysqld.log file, but no errors appear.
Answering my own question here:
SELinux needed to be set to permissive mode for HandlerSocket to start.
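For reference, a sketch of the commands involved (the semanage alternative is untested here; mysqld_port_t is the stock SELinux type for MySQL/MariaDB ports):

# check the current SELinux mode
getenforce

# switch to permissive mode for the current boot
setenforce 0

# persist across reboots by setting SELINUX=permissive in /etc/selinux/config

# narrower alternative: label the HandlerSocket ports for mysqld instead of
# relaxing SELinux globally
yum install -y policycoreutils-python   # provides semanage on CentOS 6
semanage port -a -t mysqld_port_t -p tcp 9998
semanage port -a -t mysqld_port_t -p tcp 9999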