OrientDB distributed mode: data not getting distributed across various nodes

I have started an OrientDB Enterprise 2.2.7 cluster with two nodes. Here is how my setup looks:
CONFIGURED SERVERS
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|# |Name |Status|Connections|StartedOn |Binary |HTTP |UsedMemory |FreeMemory |MaxMemory|
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|0 |Batman|ONLINE|3 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|480.98MB (94.49%)|28.02MB (5.51%) |509.00MB |
|1 |Robin |ONLINE|3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |403.50MB (79.35%)|105.00MB (20.65%)|508.50MB |
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
Now I have two vertex classes, User and Note, with an edge type Posted. All vertices and edges have properties. There are also unique indexes on both vertex classes.
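For reference, here is a minimal sketch of how such a schema with unique indexes could be created through the Graph API. The property names (userId, noteId) and connection details are assumptions, since the original schema isn't shown:

import com.orientechnologies.orient.core.metadata.schema.OClass;
import com.orientechnologies.orient.core.metadata.schema.OType;
import com.tinkerpop.blueprints.impls.orient.OrientGraphNoTx;
import com.tinkerpop.blueprints.impls.orient.OrientVertexType;

// Schema changes must run outside a transaction, hence OrientGraphNoTx.
OrientGraphNoTx g = new OrientGraphNoTx("remote:10.0.0.195/SocialPosts3", "root", "password");
try {
    OrientVertexType user = g.createVertexType("User");
    user.createProperty("userId", OType.STRING);          // assumed key property
    user.createIndex("User.userId", OClass.INDEX_TYPE.UNIQUE, "userId");

    OrientVertexType note = g.createVertexType("Note");
    note.createProperty("noteId", OType.STRING);          // assumed key property
    note.createIndex("Note.noteId", OClass.INDEX_TYPE.UNIQUE, "noteId");

    g.createEdgeType("Posted");
} finally {
    g.shutdown();
}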
I started pushing data using the Java API:
while (retry++ != MAX_RETRY) {
    try {
        properties.put(uniqueIndexname, uniqueIndexValue);
        // Look the vertex up through the unique index first
        Iterable<Vertex> resultset = graph.getVertices(className,
                new String[] { uniqueIndexname }, new Object[] { uniqueIndexValue });
        if (resultset != null) {
            Iterator<Vertex> iterator = resultset.iterator();
            vertex = iterator.hasNext() ? iterator.next() : null;
        }
        if (vertex == null) {
            // Not found: create it
            vertex = graph.addVertex("class:" + className, properties);
            graph.commit();
            return vertex;
        } else {
            // Found: update its properties
            for (String key : properties.keySet()) {
                vertex.setProperty(key, properties.get(key));
            }
        }
        logger.info("Completed upserting vertex " + uniqueIndexValue);
        graph.commit();
        break;
    } catch (ONeedRetryException ex) {
        logger.warn("Retry for exception - " + uniqueIndexValue);
    } catch (Exception e) {
        logger.error("Can not create vertex - " + e.getMessage());
        graph.rollback();
        break;
    }
}
I do the same for the Note vertices and the Posted edges.
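For completeness, a hedged sketch of how a Posted edge might be created between two vertices resolved by the upsert above; the real edge code isn't shown in the original, so the property name here is an assumption:

import com.tinkerpop.blueprints.Edge;

// Assuming userVertex and noteVertex were returned by the upsert above.
Edge posted = graph.addEdge("class:Posted", userVertex, noteVertex, null);
posted.setProperty("postedOn", new java.util.Date()); // hypothetical edge property
graph.commit();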
I populated around 200k Users and 3.5M Notes. Now I notice that all the data is going to only one node.
On running "clusters" command I see that all the clusters are created on the same node, and hence all data is present only on one node.
CLUSTERS (collections)
+---+---------+---+------+-----------------+-------+------------+-------------+--------------------+
|#  |NAME     | ID|CLASS |CONFLICT-STRATEGY|  COUNT|OWNER_SERVER|OTHER_SERVERS|AUTO_DEPLOY_NEW_NODE|
+---+---------+---+------+-----------------+-------+------------+-------------+--------------------+
|22 |note     | 26|Note  |                 |     75|Robin       |[Batman]     |true                |
|23 |note_1   | 27|Note  |                 |1750902|Batman      |[Robin]      |true                |
|24 |note_2   | 28|Note  |                 |1750789|Batman      |[Robin]      |true                |
|25 |note_3   | 29|Note  |                 |     75|Robin       |[Batman]     |true                |
|26 |posted   | 34|Posted|                 |      0|Robin       |[Batman]     |true                |
|27 |posted_1 | 35|Posted|                 |      1|Robin       |[Batman]     |true                |
|28 |posted_2 | 36|Posted|                 |1739823|Batman      |[Robin]      |true                |
|29 |posted_3 | 37|Posted|                 |1749250|Batman      |[Robin]      |true                |
|30 |user     | 30|User  |                 | 102059|Batman      |[Robin]      |true                |
|31 |user_1   | 31|User  |                 |      1|Robin       |[Batman]     |true                |
|32 |user_2   | 32|User  |                 |      0|Robin       |[Batman]     |true                |
|33 |user_3   | 33|User  |                 | 102127|Batman      |[Robin]      |true                |
+---+---------+---+------+-----------------+-------+------------+-------------+--------------------+
I also see the CPU of one node at ~99% while the other sits below 1%.
How can I make sure that data is uniformly distributed across all nodes in the cluster?
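For context, cluster ownership and quorums in OrientDB 2.2 are driven by the database's default-distributed-db-config.json (the file the server log below saves as distributed-config.json). The following is only a hedged sketch of what the 2.2 defaults look like; the exact values in a given deployment may differ and should be checked in that file. The "<NEW_NODE>" placeholder is OrientDB's literal syntax for "any node that joins later":

{
  "autoDeploy": true,
  "readQuorum": 1,
  "writeQuorum": "majority",
  "readYourWrites": true,
  "newNodeStrategy": "static",
  "servers": { "*": "master" },
  "clusters": {
    "internal": {},
    "*": { "servers": ["<NEW_NODE>"] }
  }
}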
Update:
The database is propagated to both nodes. I can log in to Studio on either node and see the database listed. Also, querying either node gives the same results, so the nodes are in sync.
Server log from one of the nodes; it is almost the same on the other node:
2016-08-18 19:28:49:668 INFO [Robin]<-[Batman] Received new status Batman.SocialPosts3=SYNCHRONIZING [OHazelcastPlugin]
2016-08-18 19:28:49:670 INFO [Robin] Current node started as MASTER for database 'SocialPosts3' [OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=2)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+
| | | | MASTER |
| | | |SYNCHRONIZING|
+--------+-----------+----------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman |
+--------+-----------+----------+-------------+
|* | 1 | 1 | X |
|internal| 1 | 1 | |
+--------+-----------+----------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:766 INFO [Robin] Adding node 'Robin' in partition: SocialPosts3 db=[*] v=3 [ODistributedDatabaseImpl$1]
2016-08-18 19:28:49:767 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+-------------+
| | | | MASTER | MASTER |
| | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+-----------+----------+-------------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman | Robin |
+--------+-----------+----------+-------------+-------------+
|* | 2 | 1 | X | o |
|internal| 2 | 1 | | |
+--------+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:767 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:769 WARNI [Robin]->[[Batman]] Requesting deploy of database 'SocialPosts3' on local server... [OHazelcastPlugin]
2016-08-18 19:28:52:192 INFO [Robin]<-[Batman] Copying remote database 'SocialPosts3' to: /tmp/orientdb/install_SocialPosts3.zip [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin]<-[Batman] Installing database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3... [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin] - writing chunk #1 offset=0 size=43.38KB [OHazelcastPlugin]
2016-08-18 19:28:52:194 INFO [Robin] Database copied correctly, size=43.38KB [ODistributedAbstractPlugin$3]
2016-08-18 19:28:52:279 WARNI {db=SocialPosts3} Storage 'SocialPosts3' was not closed properly. Will try to recover from write ahead log [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 SEVER {db=SocialPosts3} Restore is not possible because write ahead log is empty. [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 INFO {db=SocialPosts3} Storage data recover was completed [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:294 INFO {db=SocialPosts3} [Robin] Installed database 'SocialPosts3' (LSN=OLogSequenceNumber{segment=0, position=24}) [OHazelcastPlugin]
2016-08-18 19:28:52:304 INFO [Robin] Reassigning cluster ownership for database SocialPosts3 [OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+----+-----------+----------+-------------+-------------+
| | | | | MASTER | MASTER |
| | | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+----+-----------+----------+-------------+-------------+
|CLUSTER | id|writeQuorum|readQuorum| Batman | Robin |
+--------+----+-----------+----------+-------------+-------------+
|* | | 2 | 1 | X | o |
|internal| 0| 2 | 1 | | |
+--------+----+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] Distributed servers status:
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Name |Status|Databases |Conns|StartedOn |Binary |HTTP |UsedMemory |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Batman|ONLINE|GoodBoys=ONLINE (MASTER) |5 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|426.47MB/509.00MB (83.79%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
|Robin*|ONLINE|GoodBoys=ONLINE (MASTER) |3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |353.77MB/507.50MB (69.71%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
| | |SocialPosts3=SYNCHRONIZING (MASTER) | | | | | |
| | |SocialPosts2=ONLINE (MASTER) | | | | | |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+

Related

Postgres cannot create database but can create a user

I am using Ubuntu Linux and trying to use Postgres as my database. It works fine when I create a user:
CREATE USER username;
But when I try to create a database, it hangs and returns nothing:
CREATE DATABASE databasename;
What is happening with my Postgres?
Here is the output of SELECT * FROM pg_stat_activity:
datid | datname | pid | leader_pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | query_id | query | backend_type
-------+----------+------+------------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+---------------------+--------+-------------+--------------+----------+---------------------------------+------------------------------
| | 8237 | | | | | | | | 2022-02-02 13:00:47.683187+07 | | | | Activity | AutoVacuumMain | | | | | | autovacuum launcher
| | 8239 | | 10 | postgres | | | | | 2022-02-02 13:00:47.70127+07 | | | | Activity | LogicalLauncherMain | | | | | | logical replication launcher
13726 | postgres | 8329 | | 10 | postgres | psql | | | -1 | 2022-02-02 13:08:52.250244+07 | 2022-02-02 13:09:10.651383+07 | 2022-02-02 13:09:10.651383+07 | 2022-02-02 13:09:10.651393+07 | Lock | object | active | | 740 | | CREATE DATABASE kong; | client backend
13726 | postgres | 8313 | | 10 | postgres | psql | | | -1 | 2022-02-02 13:04:57.265085+07 | 2022-02-02 13:10:40.097817+07 | 2022-02-02 13:10:40.097817+07 | 2022-02-02 13:10:40.09782+07 | | | active | | 740 | | SELECT * FROM pg_stat_activity; | client backend
| | 8235 | | | | | | | | 2022-02-02 13:00:47.664058+07 | | | | Activity | BgWriterHibernate | | | | | | background writer
| | 8234 | | | | | | | | 2022-02-02 13:00:47.654713+07 | | | | Activity | CheckpointerMain | | | | | | checkpointer
| | 8236 | | | | | | | | 2022-02-02 13:00:47.673631+07 | | | | Activity | WalWriterMain | | | | | | walwriter
(7 rows)
And for pg_locks:
locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+------------------+---------+----------+-------------------------------
relation | 13726 | 12290 | | | | | | | | 7/17 | 8313 | AccessShareLock | t | t |
virtualxid | | | | | 7/17 | | | | | 7/17 | 8313 | ExclusiveLock | t | t |
virtualxid | | | | | 3/15 | | | | | 3/15 | 8329 | ExclusiveLock | t | t |
virtualxid | | | | | 6/12 | | | | | 6/12 | 8335 | ExclusiveLock | t | t |
virtualxid | | | | | 5/3 | | | | | 5/3 | 8266 | ExclusiveLock | t | t |
virtualxid | | | | | 4/1 | | | | | 4/1 | 8264 | ExclusiveLock | t | t |
object | 0 | | | | | | 1262 | 1 | 0 | 6/12 | 8335 | RowExclusiveLock | f | f | 2022-02-02 13:09:30.561821+07
object | 0 | | | | | | 1262 | 1 | 0 | 3/15 | 8329 | ShareLock | f | f | 2022-02-02 13:09:10.651571+07
object | 0 | | | | | | 1262 | 1 | 0 | 4/1 | 8264 | RowExclusiveLock | t | f |
relation | 0 | 1262 | | | | | | | | 3/15 | 8329 | AccessShareLock | t | f |
object | 0 | | | | | | 1262 | 1 | 0 | 5/3 | 8266 | RowExclusiveLock | t | f |
(11 rows)
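Reading the two outputs together: the CREATE DATABASE backend (pid 8329) is waiting (granted = f) on the pg_database object lock (classid 1262), so some other session is holding it. A hedged way to see who is blocking whom, using pg_blocking_pids() (available since PostgreSQL 9.6):

SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;

If the blocker turns out to be a stuck session, SELECT pg_terminate_backend(<pid>); would release the lock.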
Database info
postgres=# \l
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------+----------+----------+---------+---------+-----------------------+---------+------------+--------------------------------------------
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 | | 8529 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +| 8377 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +| 8529 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
(3 rows)
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
kong | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
postgres=#
Using the same name for the database and the user is not good practice; it may result in various errors.
When you run the command
CREATE DATABASE databaseName;
PostgreSQL creates the database. This may take some time. Once the database is created, you will receive the message:
CREATE DATABASE
postgres=#
The problem was solved by reinstalling Postgres as an older version (14 was installed; downgrading to 12 solved it). Thanks to everyone here who helped me.

Aggregating data in an array based on date

I'm trying to aggregate data based on timestamp. Basically I'd like to create an array for each day.
So let's say I have a query like so:
SELECT date(task_start) AS started, task_start
FROM tt_records
GROUP BY started, task_start
ORDER BY started DESC;
The output is:
+------------+------------------------+
| started | task_start |
|------------+------------------------|
| 2021-08-30 | 2021-08-30 16:45:55+00 |
| 2021-08-29 | 2021-08-29 06:47:55+00 |
| 2021-08-29 | 2021-08-29 15:41:50+00 |
| 2021-08-28 | 2021-08-28 12:59:20+00 |
| 2021-08-28 | 2021-08-28 14:50:55+00 |
| 2021-08-26 | 2021-08-26 20:46:44+00 |
| 2021-08-24 | 2021-08-24 16:28:05+00 |
| 2021-08-23 | 2021-08-23 16:22:41+00 |
| 2021-08-22 | 2021-08-22 14:01:10+00 |
| 2021-08-21 | 2021-08-21 19:45:18+00 |
| 2021-08-11 | 2021-08-11 16:08:58+00 |
| 2021-07-28 | 2021-07-28 17:39:14+00 |
| 2021-07-19 | 2021-07-19 17:26:24+00 |
| 2021-07-18 | 2021-07-18 15:04:47+00 |
| 2021-06-24 | 2021-06-24 19:53:33+00 |
| 2021-06-22 | 2021-06-22 19:04:24+00 |
+------------+------------------------+
As you can see the started column has repeating dates.
What I'd like to have is:
+------------+--------------------------------------------------+
| started | task_start |
|------------+--------------------------------------------------|
| 2021-08-30 | [2021-08-30 16:45:55+00] |
| 2021-08-29 | [2021-08-29 06:47:55+00, 2021-08-29 15:41:50+00] |
| 2021-08-28 | [2021-08-28 12:59:20+00, 2021-08-28 14:50:55+00] |
| 2021-08-26 | [2021-08-26 20:46:44+00] |
| 2021-08-24 | [2021-08-24 16:28:05+00] |
| 2021-08-23 | [2021-08-23 16:22:41+00] |
| 2021-08-22 | [2021-08-22 14:01:10+00] |
| 2021-08-21 | [2021-08-21 19:45:18+00] |
| 2021-08-11 | [2021-08-11 16:08:58+00] |
| 2021-07-28 | [2021-07-28 17:39:14+00] |
| 2021-07-19 | [2021-07-19 17:26:24+00] |
| 2021-07-18 | [2021-07-18 15:04:47+00] |
| 2021-06-24 | [2021-06-24 19:53:33+00] |
| 2021-06-22 | [2021-06-22 19:04:24+00] |
+------------+--------------------------------------------------+
I need a query to achieve that. Thank you.
You can use array_agg():
SELECT date(task_start) AS started, array_agg(task_start)
FROM tt_records
GROUP BY started
ORDER BY started DESC;
If you want a JSON array rather than a native Postgres array, use jsonb_agg() instead.
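A minimal sketch of that jsonb_agg() variant, with an ORDER BY inside the aggregate so each day's array comes out time-ordered (tt_records is the table from the question):

SELECT date(task_start) AS started,
       jsonb_agg(task_start ORDER BY task_start) AS task_start
FROM tt_records
GROUP BY started
ORDER BY started DESC;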

Inserts (JavaAPI) fail after restarting node in distributed OrientDB cluster

A two-node distributed OrientDB system, embedded mode, using TCP/IP for node discovery. The class event is sharded across four clusters. After restarting one node, exactly half of the inserts on that node fail with the error message:
INFO Local node 'orientdb-lab-node2' is not the owner for cluster 'event_1' (it is 'orientdb-lab-node1'). Reloading distributed configuration for database 'test-db' [ODistributedStorage]
and the stack trace:
com.orientechnologies.orient.server.distributed.ODistributedConfigurationChangedException: Local node 'orientdb-lab-node2' is not the owner for cluster 'event_1' (it is 'orientdb-lab-node1')
DB name="test-db"
DB name="test-db"
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.throwSerializedException(OChannelBinaryAsynchClient.java:437)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.handleStatus(OChannelBinaryAsynchClient.java:388)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:270)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:162)
at com.orientechnologies.orient.client.remote.OStorageRemote.beginResponse(OStorageRemote.java:2138)
at com.orientechnologies.orient.client.remote.OStorageRemote$6.execute(OStorageRemote.java:548)
at com.orientechnologies.orient.client.remote.OStorageRemote$6.execute(OStorageRemote.java:542)
at com.orientechnologies.orient.client.remote.OStorageRemote$1.execute(OStorageRemote.java:164)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:235)
at com.orientechnologies.orient.client.remote.OStorageRemote.asyncNetworkOperation(OStorageRemote.java:156)
at com.orientechnologies.orient.client.remote.OStorageRemote.createRecord(OStorageRemote.java:528)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeSaveRecord(ODatabaseDocumentTx.java:2095)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveNew(OTransactionNoTx.java:246)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveRecord(OTransactionNoTx.java:179)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.save(ODatabaseDocumentTx.java:2597)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.save(ODatabaseDocumentTx.java:103)
at com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1802)
at com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1793)
at lab.orientdb.OrientDbClient.insert(OrientDbClient.java:10)
at lab.orientdb.Main.main(Main.java:24)
This is what the cluster configuration looks like from node1:
Nodes 1 and 2 running, 10 inserts on each node
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 8|orientdb-lab-node2|[orientdb-lab-node1]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 7|orientdb-lab-node2|[orientdb-lab-node1]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 20| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Node 2 stopped
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 8|orientdb-lab-node1|[orientdb-lab-node2]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 7|orientdb-lab-node1|[orientdb-lab-node2]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 20| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Node 2 restarted, 5 successful inserts and 5 failed
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 11|orientdb-lab-node2|[orientdb-lab-node1]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 9|orientdb-lab-node2|[orientdb-lab-node1]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 25| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Any tip or advice appreciated. Thanks.
This issue has been resolved in OrientDB 2.2.13-SNAPSHOT, so it should be fixed in a release version very soon: https://github.com/orientechnologies/orientdb/issues/6897
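Until that release, one possible client-side workaround is to treat the ownership error as retriable, much like the ONeedRetryException loop in the first question. This is only a hedged sketch, not the official fix; whether this exception is safe to retry in your exact version is an assumption to verify:

import com.orientechnologies.orient.server.distributed.ODistributedConfigurationChangedException;

int attempts = 0;
while (true) {
    try {
        doc.save(); // the insert from lab.orientdb.OrientDbClient.insert(...)
        break;
    } catch (ODistributedConfigurationChangedException e) {
        // Cluster ownership moved (e.g., after the node restart);
        // give up after MAX_RETRY attempts, otherwise try again.
        if (++attempts >= MAX_RETRY)
            throw e;
    }
}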

How to list Rackspace servers filtered by metadata using REST API?

I can see that it is possible to add metadata to a Rackspace virtual machine instance.
I want to get a list of running instances, filtered by a particular metatag value.
I can't see how to do so in the documentation, however.
Is it possible?
You should be able to do so using the openstack client... but it depends on which metatag you're interested in.
You can get a list of all servers:
openstack server list
This will spit out something like:
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| 97606ae9-7f18-4a3c-903a-1583d446119b | trysmallwin | ERROR | |
| cb78b8d5-2f03-4a3f-ab26-f389acbd0b76 | Win-try again | ERROR | public=2607:f298:5:101d:f816:3eff:fe9e:5cd4, 208.113.133.90, 2607:f298:5:101d:f816:3eff:fe36:da45, |
| | | | 208.113.133.93, 2607:f298:5:101d:f816:3eff:fe40:57d5, 208.113.133.95 |
| 040751d1-c4c5-47aa-8dec-1d69a468be1c | hnxhdkwskrvwvdwr | ACTIVE | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
Note the ID of the server and investigate deeper:
openstack server show 040751d1-c4c5-47aa-8dec-1d69a468be1c
+--------------------------------------+------------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | iad-2 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-07-26T17:32:01.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
| config_drive | True |
| created | 2016-07-26T17:31:51Z |
| flavor | gp1.semisonic (50) |
| hostId | e1efd75d1e8f6a7f5bb228a35db13647281996087d39c65af8ce83d9 |
| id | 040751d1-c4c5-47aa-8dec-1d69a468be1c |
| image | Ubuntu-14.04 (03f89ff2-d66e-49f5-ae61-656a006bbbe9) |
| key_name | stef |
| name | hnxhdkwskrvwvdwr |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | d2fb6996496044158cf977c2129c8660 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2016-07-26T17:32:01Z |
| user_id | 5b2ca246f39a425f9a833460bf322603 |
+--------------------------------------+------------------------------------------------------------+
openstack -f json will output the same stuff, but in JSON format that you can more easily manipulate programmatically.
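Building on that, a rough sketch that filters servers by a metadata key/value client-side with jq. The environment=production tag is hypothetical, and this assumes a client version that emits properties as a JSON object rather than a flat string:

# Print the IDs of servers whose metadata has environment=production (hypothetical tag)
for id in $(openstack server list -f value -c ID); do
  openstack server show "$id" -f json \
    | jq -r 'select(.properties.environment? == "production") | .id'
done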
HTH

nova diagnostics in devstack development

Over SSH, when I run this command
nova diagnostics 2ad0dda0-072d-46c4-8689-3c487a452248
I get all the resources in devstack:
+---------------------------+----------------------+
| Property | Value |
+---------------------------+----------------------+
| cpu0_time | 3766640000000 |
| hdd_errors | 18446744073709551615 |
| hdd_read | 111736 |
| hdd_read_req | 73 |
| hdd_write | 0 |
| hdd_write_req | 0 |
| memory | 2097152 |
| memory-actual | 2097152 |
| memory-available | 1922544 |
| memory-major_fault | 2710 |
| memory-minor_fault | 10061504 |
| memory-rss | 509392 |
| memory-swap_in | 0 |
| memory-swap_out | 0 |
| memory-unused | 1079468 |
| tap5a148e0f-b8_rx | 959777 |
| tap5a148e0f-b8_rx_drop | 0 |
| tap5a148e0f-b8_rx_errors | 0 |
| tap5a148e0f-b8_rx_packets | 8758 |
| tap5a148e0f-b8_tx | 48872 |
| tap5a148e0f-b8_tx_drop | 0 |
| tap5a148e0f-b8_tx_errors | 0 |
| tap5a148e0f-b8_tx_packets | 615 |
| vda_errors | 18446744073709551615 |
| vda_read | 597230592 |
| vda_read_req | 31443 |
| vda_write | 164690944 |
| vda_write_req | 18422 |
+---------------------------+----------------------+
How can I get this in the devstack user interface?
Please help. Thanks in advance.
It's not available in the OpenStack Icehouse/Juno versions, though Juno can be patched to retrieve it in devstack.
I haven't used OpenStack Kilo. In Juno, if your hypervisor is libvirt, vSphere, or XenAPI, then you can retrieve these statistics in the devstack UI. To do so:
For libvirt
In ceilometer/compute/virt/libvirt/inspector.py, add this:
from oslo.utils import units

from ceilometer.compute.pollsters import util


def inspect_memory_usage(self, instance, duration=None):
    instance_name = util.instance_name(instance)
    domain = self._lookup_by_name(instance_name)
    state = domain.info()[0]
    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'domain is in state of SHUTOFF'),
                 {'instance_name': instance_name})
        return
    try:
        memory_stats = domain.memoryStats()
        if (memory_stats and
                memory_stats.get('available') and
                memory_stats.get('unused')):
            memory_used = (memory_stats.get('available') -
                           memory_stats.get('unused'))
            # Stat provided from libvirt is in KB, converting it to MB.
            memory_used = memory_used / units.Ki
            return virt_inspector.MemoryUsageStats(usage=memory_used)
        else:
            LOG.warn(_('Failed to inspect memory usage of '
                       '%(instance_name)s, can not get info from libvirt'),
                     {'instance_name': instance_name})
    # memoryStats might launch an exception if the method
    # is not supported by the underlying hypervisor being
    # used by libvirt
    except libvirt.libvirtError as e:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'can not get info from libvirt: %(error)s'),
                 {'instance_name': instance_name, 'error': e})
For more details you can check the following link:
https://review.openstack.org/#/c/90498/