Start OrientDB without user input - orientdb

I'm attempting to start OrientDB in distributed mode on AWS.
I have an auto scaling group that creates new nodes as needed. When the nodes are created, they start with a default config without a node name. The idea is that the node name is generated randomly.
My problem is that the server starts up and asks for user input.
+---------------------------------------------------------------+
|         WARNING: FIRST DISTRIBUTED RUN CONFIGURATION          |
+---------------------------------------------------------------+
| This is the first time that the server is running as          |
| distributed. Please type the name you want to assign to the   |
| current server node.                                          |
|                                                               |
| To avoid this message set the environment variable or JVM     |
| setting ORIENTDB_NODE_NAME to the server node name to use.    |
+---------------------------------------------------------------+
Node name [BLANK=auto generate it]:
I don't want to set the node name because I need a random name and the server never starts because it's waiting for user input.
Is there a parameter I can pass to dserver.sh that will bypass this prompt and generate a random node name?

You could create a random string and pass it to OrientDB as the node name via the ORIENTDB_NODE_NAME variable. Example:
ORIENTDB_NODE_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
For more information about this, look at: https://gist.github.com/earthgecko/3089509
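If dserver.sh is launched from a wrapper script (for example in the instance's user data), the variable can be generated and exported in one step. A minimal sketch, assuming a POSIX shell with /dev/urandom available; the dserver.sh path is an assumption to adjust for your install:

```shell
#!/bin/sh
# Generate a 32-character alphanumeric node name.
# LC_ALL=C keeps tr from rejecting raw /dev/urandom bytes as
# invalid multibyte sequences under a UTF-8 locale.
ORIENTDB_NODE_NAME=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
export ORIENTDB_NODE_NAME

echo "Starting node: $ORIENTDB_NODE_NAME"
# exec /opt/orientdb/bin/dserver.sh   # hypothetical path; adjust to your layout
```

Because the variable is exported before the server starts, the first-run prompt never appears and each auto-scaled instance gets its own random name.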

Related

Why do certain psql commands from terminal work for local database and not for hosted database?

I have imported a local PostgreSQL database to a managed cluster on Digital Ocean. It will be used with a Python app that will also be hosted on Digital Ocean. I used pg_dump and pg_restore to achieve the import. Now, to make sure the import was successful, I am running some psql queries and commands via my macOS terminal app (zsh), which connects via a shell script that prompts me for host, database name, port, user, and password. I can connect to the managed cluster this way and execute some queries with no problem, while others cause errors. For example:
my_support=> \dt
            List of relations
 Schema |     Name      | Type  |  Owner
--------+---------------+-------+---------
 public | ages          | table | doadmin
 public | articles      | table | doadmin
 public | challenges    | table | doadmin
 public | cities        | table | doadmin
 public | comments      | table | doadmin
 public | messages      | table | doadmin
 public | relationships | table | doadmin
 public | topics        | table | doadmin
 public | users         | table | doadmin
(9 rows)
my_support=> \dt+
sh: more: command not found
my_support=>
Also:
my_support=> SELECT id,sender_id FROM messages;
 id | sender_id
----+-----------
  1 |         1
  2 |         2
  3 |         4
  4 |         1
  5 |         2
(5 rows)
my_support=> SELECT * FROM messages;
sh: more: command not found
my_support=>
So the terminal app seems to dislike certain characters, such as * and +, but I can't find any documentation that tells me I should escape them, or how. I tried a backslash in front of them, but it did not work. What's more confusing is that these very same queries succeed when I connect to my LOCAL copy of the database, using the very same terminal app, launched from the very same shell script.
In case it's helpful, here's what I see in the CLI when I connect:
psql (14.1, server 14.2)
SSL connection (protocol: TLSv1.3, cipher: <alphanumeric string here>, bits: 256, compression: off)
Type "help" for help.
my_support=>
Does it matter that my local PostgreSQL version is 14.1 and the server is 14.2? I'm assuming the "server" refers to the hosted environment, but it seems like something as basic as "SELECT * FROM" should not be version-dependent.
Ultimately what matters is whether my Python app (which uses the psycopg library to talk to PostgreSQL) can run those queries, and I haven't tested that yet. But it sure would be handy to test things on the managed cluster using my local terminal app.
BTW, I have an open ticket with Digital Ocean to answer this question, but I find SO to be faster and more helpful in most cases.
psql is trying to use a pager to display results that are longer than the number of lines in the terminal. The error message
more: command not found
indicates that the pager (more) it tries to use is not available. You can turn off the use of the pager:
\pset pager off
or set a different command to be used as the pager. See the psql manual for details.
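psql picks the pager command from the PSQL_PAGER (psql 11+) or PAGER environment variable, which explains the difference between your local and remote sessions only if those sessions run with different environments; more likely the server-side shell invoked by your connection script lacks `more` on its PATH. A sketch of the session-level options (the choice of `less` is an assumption about what exists on your machine):

```
-- inside a psql session: disable the pager entirely
\pset pager off

-- or point psql at a pager that actually exists
\setenv PAGER less

-- to make either setting permanent, put the same line in ~/.psqlrc
```

Short results such as your five-row SELECT fit on screen, so the pager is never invoked; that is why only the longer outputs (`\dt+`, `SELECT *`) trigger the error. The 14.1 vs 14.2 client/server difference is unrelated.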

reading from configuration database inside of data factory

I need to create a Data Factory pipeline to move data from SFTP to blob storage. At this point I'm only doing a POC, and I'd like to know how I would read configuration settings and kick off my pipeline based on those settings.
Example of config settings would be (note that there would be around 1000 of these):
+--------------------+-----------+-----------+------------------------+----------------+
| sftp server | sftp user | sftp pass | blob dest | interval |
+--------------------+-----------+-----------+------------------------+----------------+
| sftp.microsoft.com | alex | myPass | /myContainer/destroot1 | every 12 hours |
+--------------------+-----------+-----------+------------------------+----------------+
How do you kick off a pipeline using some external configuration file/store?
Take a look at the Lookup activity (to read the configuration rows) and at linked service parameterization (to reuse one SFTP linked service across your ~1000 configurations).

SAS DIP Service failing to run

SAS Service "SAS [SASConfig-Lev1] Distributed In-Process Scheduler command-line job runner" is failing to run on Win2012 R2 server.
It's set to Automatic; it failed to run on startup and fails now when I try to start it.
Only dependency is the SAS Metadata Server and that is running fine.
In the log at \Lev1\Web\Applications\SASWIPSchedulingServices9.4\dip\serviceLog, the entry reads:
STATUS | wrapper | 2017/08/29 16:51:51 | --> Wrapper Started as Service
STATUS | wrapper | 2017/08/29 16:51:51 | Launching a JVM...
FATAL | wrapper | 2017/08/29 16:51:51 | Unable to execute Java command. The system cannot find the file specified. (0x2)
FATAL | wrapper | 2017/08/29 16:51:51 | "\bin\java.exe" -Djava.system.class.loader=com.sas.app.AppClassLoader -Dsas.app.repository.path="D:\SAS\SASVersionedJarRepository\eclipse" -Dsas.app.launch.picklist="D:\SASConfig\Lev1\Web\Applications\SASWIPSchedulingServices9.4\dip/picklist" -Xmx128m -Dsas.cache.locators=rad1sas1.hps-rad.local[41415] -Dspring.profiles.active=client-locators -Dsas.gemfire.log-level=severe -Dsas.gemfire.log.file= -Djava.library.path="D:\SASConfig\Lev1\Web\Applications\SASWIPSchedulingServices9.4\dip" -classpath "D:\SAS\SASVersionedJarRepository\eclipse\plugins\JavaServiceWrapper_3.2.3\wrapper.jar;D:\SAS\SASVersionedJarRepository\eclipse\plugins\sas.launcher.jar" -Dwrapper.key="eknAd40L52PNah3_" -Dwrapper.port=32006 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.pid=14260 -Dwrapper.version="3.2.3" -Dwrapper.native_library="wrapper" -Dwrapper.service="TRUE" -Dwrapper.cpu.timeout="10" -Dwrapper.jvmid=1 com.sas.scheduler.api.servers.ip.engine.mq.client.JobRunnerService "D:\SASConfig\Lev1\Web\Applications\SASWIPSchedulingServices9.4\dip/DIPJobRunner.properties"
FATAL | wrapper | 2017/08/29 16:51:51 | Critical error: wait for JVM process failed
It seems the DIP job uses a configuration file sitting in SASHOME:
D:\SAS\wrapper.conf
As @DomPazz pointed out, the Java path assigned to the wrapper.java.command key was incomplete. I included the full path and that solved the issue. Strangely, the first time I modified it and restarted the box, the file got overwritten by a backup sitting somewhere.
Contents of wrapper.conf:
# Java Application
# In Error state the key below had the value of "\bin\java.exe"
wrapper.java.command=D:\SAS\SASPrivateJavaRuntimeEnvironment\9.4\jre\bin\java.exe
# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=D:\SAS\SASVersionedJarRepository\eclipse\plugins\JavaServiceWrapper_3.2.3\wrapper.jar
wrapper.java.classpath.2=D:\SAS\SASVersionedJarRepository\eclipse\plugins\sas.launcher.jar
# Java Additional Parameters
wrapper.java.additional.1=-Djava.system.class.loader=com.sas.app.AppClassLoader
wrapper.java.additional.2=-Dsas.app.repository.path="D:\SAS\SASVersionedJarRepository\eclipse"
Note: another wrapper.conf sits in D:\SASConfig\Lev1\Web\Applications\SASWIPSchedulingServices9.4\dip, but that one seems to have the properties for the Windows service.

How to debug "Sugar CRM X Files May Only Be Used With A Sugar CRM Y Database."

Sometimes one gets a message like:
Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.
I am wondering how Sugar determines what version of the database it is using. In the above case, I get the following output:
select * from config where name='sugar_version';
+----------+---------------+-------+
| category | name          | value |
+----------+---------------+-------+
| info     | sugar_version | 6.4.5 |
+----------+---------------+-------+
1 row in set (0.00 sec)
cat config.php |grep sugar_version
'sugar_version' => '6.4.5',
Given the above output, I am wondering how to debug the output "Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.": Sugar seems to think the files are not of version 6.4.5 even though the sugar_version is 6.4.5 in config.php; where should I look next?
Two options for the issue:
Option 1: Update your database for the latest version.
Option 2: Follow the steps below and change the SugarCRM config version.
mysql> select * from config where name ='sugar_version';
+----------+---------------+---------+----------+
| category | name          | value   | platform |
+----------+---------------+---------+----------+
| info     | sugar_version | 7.7.0.0 | NULL     |
+----------+---------------+---------+----------+
1 row in set (0.00 sec)
Update your SugarCRM version to the appropriate value:
mysql> update config set value='7.7.1.1' where name ='sugar_version';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
The above commands seem to be correct. Sugar seems to check that config.php and the config table in the database contain the same version. In my case I was making the mistake of using the wrong database -- so if you're like me and tend to have your databases mixed up, double check in config.php that 'dbconfig' is indeed pointing to the right database.
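To rule out the mixed-up-database case quickly, the two versions can be compared side by side. A rough sketch, assuming a MySQL backend; the config.php contents here are a throwaway copy so the extraction step can be shown end to end, and the database name would really come from 'dbconfig' in your own config.php:

```shell
#!/bin/sh
# Illustrative stand-in for the real config.php (use your actual file).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<?php
$sugar_config = array (
  'sugar_version' => '6.4.5',
);
EOF

# Version according to the files:
file_ver=$(grep -o "'sugar_version' => '[^']*'" "$cfg" | cut -d"'" -f4)
echo "files: $file_ver"

# Version according to the database (needs a live connection; the
# database name 'sugarcrm' is an assumption):
#   mysql -N -e "SELECT value FROM config WHERE name='sugar_version'" sugarcrm
```

If the two values differ, or the mysql command is pointed at a different database than 'dbconfig' names, you get exactly the confusing "X Files May Only Be Used With A Sugar CRM Y Database" message.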

OrientDB: Cannot find a command executor for the command request: sql.MOVE VERTEX

I am using OrientDB Community Edition 1.7.9 on Mac OS X.
Database Info:
DISTRIBUTED CONFIGURATION: none (OrientDB is running in standalone mode)
DATABASE PROPERTIES
NAME              | VALUE
------------------+---------------------
Name              | null
Version           | 9
Date format       | yyyy-MM-dd
Datetime format   | yyyy-MM-dd HH:mm:ss
Timezone          | Asia/xxxx
Locale Country    | US
Locale Language   | en
Charset           | UTF-8
Schema RID        | #0:1
Index Manager RID | #0:2
Dictionary RID    | null
Command flow:
create cluster xyz physical default default append
alter class me add cluster xyz
move vertex #1:2 to cluster:xyz
The Studio UI throws the following error:
2014-10-22 14:59:33:043 SEVE Internal server error:
com.orientechnologies.orient.core.command.OCommandExecutorNotFoundException:
Cannot find a command executor for the command request: sql.MOVE VERTEX #1:2 TO CLUSTER:xyz [ONetworkProtocolHttpDb]
The console returns a record, just as a select does. I do not see an error in the log.
I am planning a critical feature based on altering the cluster of selected records.
Could anyone help in this regard?
Thanks in advance.
Cheers
The MOVE VERTEX command is not supported in 1.7.x; you have to switch to 2.0-M2.