OrientDB: Cannot find a command executor for the command request: sql.MOVE VERTEX

I am using OrientDB Community Edition 1.7.9 on Mac OS X.
Database Info:
DISTRIBUTED CONFIGURATION: none (OrientDB is running in standalone mode)
DATABASE PROPERTIES
NAME | VALUE|
Name | null |
Version | 9 |
Date format | yyyy-MM-dd |
Datetime format | yyyy-MM-dd HH:mm:ss |
Timezone | Asia/xxxx |
Locale Country | US |
Locale Language | en |
Charset | UTF-8 |
Schema RID | #0:1 |
Index Manager RID | #0:2 |
Dictionary RID | null |
Command flow:
create cluster xyz physical default default append
alter class me add cluster xyz
move vertex #1:2 to cluster:xyz
The Studio UI throws the following error:
2014-10-22 14:59:33:043 SEVE Internal server error:
com.orientechnologies.orient.core.command.OCommandExecutorNotFoundException:
Cannot find a command executor for the command request: sql.MOVE
VERTEX #1:2 TO CLUSTER:xyz [ONetworkProtocolHttpDb]
The console returns a record, just as a select does, and I do not see any error in the log.
I am planning a critical feature that relies on changing the cluster of selected records.
Could anyone help with this?
Thanks in advance.
Cheers

The MOVE VERTEX command is not supported in 1.7.x; you have to switch to 2.0-M2.
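Once you are on 2.0-M2 or later, the statement from the question should run as written. If I recall the 2.0 syntax correctly, MOVE VERTEX also accepts a target class instead of a cluster (SomeClass below is just a placeholder name):
MOVE VERTEX #1:2 TO CLUSTER:xyz
MOVE VERTEX #1:2 TO CLASS:SomeClass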


Related

Why do certain psql commands from terminal work for local database and not for hosted database?

I have imported a local PostgreSQL database to a managed cluster on Digital Ocean. It will be used with a Python app that will also be hosted on Digital Ocean. I used pg_dump and pg_restore to achieve the import. Now, to make sure the import was successful, I am running some psql queries and commands via my macOS terminal app, which is set up with zsh and connects via a shell script that prompts me for host, database name, port, user and password. I can connect to the managed cluster this way and execute some queries with no problem, while others cause errors. For example:
my_support=> \dt
List of relations
Schema | Name | Type | Owner
--------+----------------------+-------+---------
public | ages | table | doadmin
public | articles | table | doadmin
public | challenges | table | doadmin
public | cities | table | doadmin
public | comments | table | doadmin
public | messages | table | doadmin
public | relationships | table | doadmin
public | topics | table | doadmin
public | users | table | doadmin
(9 rows)
my_support=> \dt+
sh: more: command not found
my_support=>
Also:
my_support=> SELECT id,sender_id FROM messages;
id | sender_id
----+-----------
1 | 1
2 | 2
3 | 4
4 | 1
5 | 2
(5 rows)
my_support=> SELECT * FROM messages;
sh: more: command not found
my_support=>
So the terminal app seems to dislike certain characters, such as * and +, but I can't find any documentation that tells me I should escape them, or how. I tried putting a backslash in front of them, but it did not work. What's more confusing is that these very same queries succeed when I connect to my LOCAL copy of the database, using the very same terminal app, launched from the very same shell script.
In case it's helpful, here's what I see in the CLI when I connect:
psql (14.1, server 14.2)
SSL connection (protocol: TLSv1.3, cipher: <alphanumeric string here>, bits: 256, compression: off)
Type "help" for help.
my_support=>
Does it matter that my local PostgreSQL version is 14.1 and the server is 14.2? I'm assuming the "server" refers to the hosted environment, but it seems like something as basic as "SELECT * FROM" should not be version-dependent.
Ultimately what matters is whether my Python app (which uses the psycopg library to talk to PostgreSQL) can run those queries, and I haven't tested that yet. But it sure would be handy to test things on the managed cluster using my local terminal app.
BTW, I have an open ticket with Digital Ocean to answer this question, but I find SO to be faster and more helpful in most cases.
psql is trying to use a pager to display results that are longer than the number of lines in the terminal. The error message
more: command not found
indicates that the pager (more) it tries to use is not available. You can turn off the use of a pager:
\pset pager off
or set a different command to be used as the pager; see the psql manual for details.
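For example, in the session from the question, this is a minimal fix (psql also honours the PAGER environment variable if you would rather point it at a pager that does exist, such as less):
\pset pager off
SELECT * FROM messages;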

How to use extensions in Postgres installed with Homebrew on Mac OSX

I'd like to use the SPI extension in Postgres 10.2, which I installed with Homebrew. However,
CREATE EXTENSION spi;
fails with
ERROR: could not open extension control file "/usr/local/share/postgresql/extension/spi.control": No such file or directory
Looking inside that extension directory, I see many extensions, but not SPI. The Postgres documentation mentions that extensions would reside in a contrib directory of the distribution and that you can then build them individually, but I can't seem to find this directory anywhere. Any idea how I can obtain and install the SPI module?
https://www.postgresql.org/docs/current/static/contrib-spi.html
Each of the groups of functions described below is provided as a
separately-installable extension.
So you can check what's available and try:
t=# select * from pg_available_extensions where name in ('refint','timetravel','autoinc','insert_username','moddatetime');
name | default_version | installed_version | comment
-----------------+-----------------+-------------------+-------------------------------------------------------------
moddatetime | 1.0 | | functions for tracking last modification time
autoinc | 1.0 | | functions for autoincrementing fields
insert_username | 1.0 | | functions for tracking who changed a table
timetravel | 1.0 | | functions for implementing time travel
refint | 1.0 | | functions for implementing referential integrity (obsolete)
(5 rows)
t=# create extension refint ;
CREATE EXTENSION
t=# create extension timetravel;
CREATE EXTENSION
and so on...
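To sanity-check one of them after installing, here is a rough sketch using moddatetime, based on the example in the contrib-spi docs (the table mdt and its moddate column are made-up names):
CREATE EXTENSION IF NOT EXISTS moddatetime;
CREATE TABLE mdt (
    id      integer,
    idesc   text,
    moddate timestamptz NOT NULL DEFAULT current_timestamp  -- column the trigger keeps current
);
CREATE TRIGGER mdt_moddatetime
    BEFORE UPDATE ON mdt
    FOR EACH ROW EXECUTE PROCEDURE moddatetime(moddate);
Every UPDATE on mdt should then reset moddate to the current time automatically.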

How does JasperReports Server store report output internally?

There are a few ways to store report output in JasperReports Server: the file system, FTP, and the repository. The repository output is the default one. I guess the files in the repository must be stored in the DB or on the file system. Are the files kept forever? How can I manage the repository and, for example, set a file's lifetime?
The repository outputs are stored in the database. Usually there is no need to set a lifetime.
As of JasperReports Server v6.3.0, the reference to every resource is kept in the jiresource table, while the content is kept in jicontentresource.
In my case I was able to retrieve all output reports with:
select r.id,r.name,r.creation_date
from jiresource r, jicontentresource c
where r.id = c.id;
The definition of jicontentresource is
jasperserver=# \d+ jicontentresource
id | bigint | not null | plain | |
data | bytea | | extended | |
file_type | character varying(20) | | extended | |
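As for managing lifetime: I'm not aware of a built-in per-file expiry, but since the output lives in these two tables you can at least see what is accumulating. A sketch along the lines of the query above, assuming creation_date is a timestamp (the 90-day cutoff is only an example; actual cleanup should go through the server UI or REST API rather than raw SQL):
SELECT r.id, r.name, r.creation_date, length(c.data) AS size_bytes
FROM jiresource r
JOIN jicontentresource c ON c.id = r.id
WHERE r.creation_date < now() - interval '90 days'
ORDER BY size_bytes DESC;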

Start OrientDB without user input

I'm attempting to start OrientDB in distributed mode on AWS.
I have an auto scaling group that creates new nodes as needed. When the nodes are created, they start with a default config without a node name. The idea is that the node name is generated randomly.
My problem is that the server starts up and asks for user input.
+---------------------------------------------------------------+
| WARNING: FIRST DISTRIBUTED RUN CONFIGURATION |
+---------------------------------------------------------------+
| This is the first time that the server is running as |
| distributed. Please type the name you want to assign to the |
| current server node. |
| |
| To avoid this message set the environment variable or JVM |
| setting ORIENTDB_NODE_NAME to the server node name to use. |
+---------------------------------------------------------------+
Node name [BLANK=auto generate it]:
I don't want to hard-code the node name because I need a random one, and the server never starts because it is waiting for user input.
Is there a parameter I can pass to dserver.sh that will bypass this prompt and generate a random node name?
You could create a random string and pass it to OrientDB as the node name via the ORIENTDB_NODE_NAME variable. Example:
ORIENTDB_NODE_NAME=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
For more information about this, look at: https://gist.github.com/earthgecko/3089509
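Putting it together in the script that launches the server (a sketch: dserver.sh is the script named in the question, the exact path depends on your install, and the variable has to be exported so the child process sees it):
#!/bin/sh
# Build a random 32-character node name and export it for dserver.sh to pick up.
export ORIENTDB_NODE_NAME=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
./dserver.sh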

Visualizing time-series from a SQL Database (Postgres)

I am building an app that applies a data science model to a SQL database of sensor metrics. For this purpose I chose PipelineDB (based on Postgres), which enables me to build a Continuous View on my metrics and apply the model to each new row.
For now, I just want to observe the metrics I collect from the sensor on a dashboard. The "metrics" table looks like this:
+---------------------+--------+---------+------+-----+
| timestamp | T (°C) | P (bar) | n | ... |
+---------------------+--------+---------+------+-----+
| 2015-12-12 20:00:00 | 20 | 1.13 | 0.9 | |
+---------------------+--------+---------+------+-----+
| 2015-12-13 20:00:00 | 20 | 1.132 | 0.9 | |
+---------------------+--------+---------+------+-----+
| 2015-12-14 20:00:00 | 40 | 1.131 | 0.96 | |
+---------------------+--------+---------+------+-----+
I'd like to build a dashboard where I can see all my metrics evolving over time, and even choose which columns to display.
I found a few tools that could match my need: Grafana, or Chronograf for InfluxDB.
But neither of them lets me plug directly into Postgres and query my table to produce the metric-formatted data these tools require.
Do you have any advice on how to use such dashboards with this kind of data?
A bit late here, but Grafana now supports PostgreSQL data sources directly: https://grafana.com/docs/features/datasources/postgres. I've used it in several projects and it has been really easy to set up and use.
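For a time-series panel, the Postgres data source essentially wants a query that returns a time column plus one or more numeric columns. A minimal sketch against the metrics table from the question; I'm assuming the temperature and pressure columns are actually named t and p (the headers in the question look like display labels), so adjust to whatever the table really uses. Recent Grafana versions also offer macros such as $__timeFilter to restrict the query to the dashboard's time range.
SELECT
    "timestamp" AS "time",  -- Grafana's time series format expects a time column
    t AS temperature,       -- each numeric column becomes a series
    p AS pressure
FROM metrics
ORDER BY 1;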