I have a db.zip archive (with .cpm, .pcl, .irs, .sbt, .wal, ... files) that I'd like to restore:
create database plocal:mydb admin admin
restore database /path/to/db.zip
Restoring database database /path/to/db.zip...
- Uncompressing file $FILE1_IN_ARCHIVE [...snip...]
- [...snip...]
Database restored in 0.19 seconds
To me, it seems as if the restore was successful. However, whatever command I run next (e.g. select * from V), I get the following error:
Error: com.orientechnologies.orient.core.exception.ODatabaseException: Database 'mydb' is closed
Am I doing something wrong? Why is the DB closed? How could I open it?
I tried your case with OrientDB version 2.1.11 following these steps (I had previously created a backup copy of a DB under the name mydb.zip).
Create the new DB:
create database plocal:/path/to/db/newDB admin admin
Creating database [plocal:/path/to/db/newDB] using the storage type [plocal]...
Database created successfully.
Restore mydb.zip:
restore database C:/path/to/db/mydb.zip
Restoring database database C:/path/to/db/mydb.zip...
...
Database restored in 0,29 seconds
Select all vertices from V (I get the same exception as you):
orientdb {db=newDB}> select * from V
Error: com.orientechnologies.orient.core.exception.ODatabaseException: Database 'newDB'
is closed
Your problem seems to be related to this issue, where it's explained that accessing a DB in plocal mode while another OrientDB instance is running generates a conflict.
If you shut down the open OrientDB instance and try to connect to the DB in plocal mode, you'll be able to access the DB.
Shut down the running OrientDB instance and re-connect in plocal mode:
orientdb> connect plocal:/path/to/db/newDB admin admin
Connecting to database [plocal:/path/to/db/newDB] with user 'admin'...OK
Select all vertices from V again:
orientdb {db=newDB}> select * from V
----+-----+-------+------+--------
# |#RID |#CLASS |name |category
----+-----+-------+------+--------
0 |#12:0|Station|First |#13:0
1 |#12:1|Station|Second|#13:1
2 |#12:2|Station|Third |#13:2
----+-----+-------+------+--------
I also tried with the new OrientDB 2.2.0 beta, and with that version this behaviour doesn't happen:
Restoring database 'database C:/path/to/db/mydb.zip' from full backup...
...
Database restored in 0,75 seconds
orientdb {db=newDB}> select * from V
----+-----+-------+------+--------
# |#RID |#CLASS |name |category
----+-----+-------+------+--------
0 |#12:0|Station|First |#13:0
1 |#12:1|Station|Second|#13:1
2 |#12:2|Station|Third |#13:2
----+-----+-------+------+--------
Hope it helps
I am new to Siddhi, and I am trying to connect to MongoDB Atlas to insert a record into a collection. When I configure the parameters and run the code in the Siddhi editor, no error appears in the console, but the record is not added to MongoDB.
Here is the code:
#App:name("ConectionMongoDBAtlas")
#App:description("Description of conection to MongoDB Atlas")
#sink(type='mongodb',
-- mongodb.uri='mongodb://username:password#ac-qe2xpea-shard-00-00.cs3wyqb.mongodb.net:27017,ac-qe2xpea-shard-00-01.cs3wyqb.mongodb.net:27017,ac-qe2xpea-shard-00-02.cs3wyqb.mongodb.net:27017/siddhi?ssl=true&replicaSet=atlas-4drk5v-shard-0&authSource=admin&retryWrites=true&w=majority',
uri='mongodb+srv://username:password#cluster0.cs3wyqb.mongodb.net/siddhi?retryWrites=true&w=majority',
collection.name = 'siddhiCollection',
database.name = 'siddhi'
-- secure.connection = 'true',
-- trust.store = 'C:/Users/luis.ortega/Downloads/siddhi-tooling-5.1.0/resources/security/client-truststore.jks',
-- key.store.password = 'mongodb',
-- sslEnabled = 'true',
-- trustStore = 'C:/Users/luis.ortega/Downloads/siddhi-tooling-5.1.0/resources/security/cloud.mongodb2',
-- keyStorePassword = 'mongodb',
-- #map(type='json')
-- #payload('{"name":"{{name}}", "age":{{age}}}')
)
#primaryKey("name")
#index('age')
define table siddhiCollection(name string, age int);
#sink(type = 'log')
define stream BarStream(message string);
#info(name= 'query1')
define stream InsertStream (name string, age int);
from InsertStream
insert into MongoCollection;
I tried to configure MongoDB with the store annotation as in the documentation, and also with the sink annotation.
I don't know whether the database's SSL certificate is the problem; I even added the certificate to client-truststore.jks.
I have tried connecting to MongoDB (though not Atlas) with the following steps, and it was successful.
Copy the mongo driver[1] to the /lib directory.
Create a database called 'test' and add the user admin to it:
db.createUser({user: "admin", pwd: "admin", roles : [{role: "readWrite", db: "test"}]});
Then deploy the Siddhi application[2] and simulate it using the tooling UI.
[1] https://mvnrepository.com/artifact/org.mongodb/mongo-java-driver/3.4.2
[2]
#App:name("store-mongodb")
#Store(type="mongodb", mongodb.uri='mongodb://admin:admin#localhost:27017/test')
#PrimaryKey("name")
#Index("amount:1", "{background:true}")
define table SweetProductionTable (name string, amount double);

/* Inserting event into the mongo store */
#info(name='query1')
from insertSweetProductionStream
insert into SweetProductionTable;
If there are connection issues to the database, there should be error logs in the carbon log file of the tooling. You can check the log file 'carbon.log' under /wso2/server/logs. This only needs to be checked if you have already observed the logs in the browser console.
I am trying to use the external database log plugin in Moodle to copy over the standard log table into an external database for easier access to do some analytics work.
I activated the external db log and entered all the correct settings on the settings page. I clicked "test connection" and it connected and returned the table column headers successfully. But if I click around and generate some log entries, they are visible in the standard log store while my external db table is still empty.
So I tried connecting to my external db locally in TablePlus using identical credentials to those I put in the external db log store settings, and I could connect and write successfully.
Next I went into the live logs and picked standard logs, and they showed up just fine. Then I clicked on external db logs (nothing inside except for 2 manually entered rows of data), and got this error:
URL: https://ohsu.mrooms3.net/
Debug info: ERROR: syntax error at or near "{" LINE 1: SELECT COUNT('x') FROM {OpenLMSLog} WHERE courseid = $1 AND ... ^ SELECT COUNT('x') FROM {OpenLMSLog} WHERE courseid = $1 AND timecreated > $2 AND anonymous = $3 [array ( 0 => '1', 1 => 1604556625, 2 => 0, )] Error code: dmlreadexception
Stack trace:
* line 486 of /lib/dml/moodle_database.php: dml_read_exception thrown
* line 329 of /lib/dml/pgsql_native_moodle_database.php: call to moodle_database->query_end()
* line 920 of /lib/dml/pgsql_native_moodle_database.php: call to pgsql_native_moodle_database->query_end()
* line 1624 of /lib/dml/moodle_database.php: call to pgsql_native_moodle_database->get_records_sql()
* line 1697 of /lib/dml/moodle_database.php: call to moodle_database->get_record_sql()
* line 1912 of /lib/dml/moodle_database.php: call to moodle_database->get_field_sql()
* line 1895 of /lib/dml/moodle_database.php: call to moodle_database->count_records_sql()
* line 262 of /admin/tool/log/store/database/classes/log/store.php: call to moodle_database->count_records_select()
* line 329 of /report/loglive/classes/table_log.php: call to logstore_database\log\store->get_events_select_count()
* line 48 of /report/loglive/classes/table_log_ajax.php: call to report_loglive_table_log->query_db()
* line 59 of /report/loglive/classes/renderer_ajax.php: call to report_loglive_table_log_ajax->out()
* line 462 of /lib/outputrenderers.php: call to report_loglive_renderer_ajax->render_report_loglive()
* line 53 of /report/loglive/loglive_ajax.php: call to plugin_renderer_base->render()
This is the only error message I get even after turning on debugging in the developer settings. My goal is to successfully configure an external db to track logs, but due to a lack of error messages when testing the connection it is hard to debug.
Environment configuration: Open LMS 3.8 MP2 (Build: 20201008)
The external db is a postgres db so we set it on the postgres driver.
It looks like the SQL used by the external db log store plugin is not compatible with Postgres, despite the Postgres driver being selected in the plugin settings, as shown by the '{' error in the SQL in the question. See here for documentation.
To fix this, we used MariaDB instead of Postgres, and we ended up getting the external db log store working.
One more note: you have to match your columns and data types exactly to the schema in the Moodle db, and you have to set the ID column of your table to auto-increment. If you don't know what the schema looks like, there is an Admin SQL interface under "Reports" that lets you run SQL, with a button on the side to view the schema of any table in the db.
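For context, Moodle's DML layer writes queries with table names in {braces} and substitutes the real, prefixed table name before the SQL reaches the database; the error above shows the braces reached Postgres unexpanded. A minimal Python sketch of that substitution (the prefix and function name here are hypothetical, not Moodle's actual code):

```python
import re

def expand_placeholders(sql, prefix="mdl_"):
    """Replace Moodle-style {tablename} placeholders with real,
    prefixed table names (illustrative sketch only)."""
    return re.sub(r"\{(\w+)\}", lambda m: prefix + m.group(1), sql)

sql = "SELECT COUNT('x') FROM {OpenLMSLog} WHERE courseid = $1"
print(expand_placeholders(sql))
# -> SELECT COUNT('x') FROM mdl_OpenLMSLog WHERE courseid = $1
```

When this rewrite never happens, Postgres receives the literal `{OpenLMSLog}` and reports exactly the syntax error shown in the stack trace.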
I recently started working with Robot Framework, and I had a requirement to connect to a Postgres db.
I am able to connect to the db, but when I try to execute queries the flow gets stuck; the test does not even fail. Here is what I did:
Connect To Database    psycopg2    ${DBName}    ${DBUser}    ${DBPass}    ${DBHost}    ${DBPort}
${current_row_count} =    Row Count    Select * from xyz
The first statement executes fine, but it gets stuck on the second statement.
Can somebody help me out with this?
To execute a query and get data from the result:
Connect To Database    psycopg2    ${DBName}    ${DBUser}    ${DBPass}    ${DBHost}    ${DBPort}
${output} =    Query    SELECT * from xyz;
Log    ${output}
${DataResults} =    Get From List    ${output}    0
${DataResults} =    Convert To List    ${DataResults}
${DataResults} =    Get From List    ${DataResults}    0
${DataResults} =    Convert To String    ${DataResults}
Disconnect From Database
You are not executing your query. Read below for a bit of documentation and an example ;)
The example uses placeholder variables; substitute your own data ;)
Name: Connect To Database Using Custom Params
Source: DatabaseLibrary
Arguments:
[ dbapiModuleName=None | db_connect_string= ]
Loads the DB API 2.0 module given dbapiModuleName then uses it to connect to the database using the map string db_custom_param_string.
Example usage:
Connect To Database Using Custom Params    pymssql    database='${db_database}', user='${db_user}', password='${db_password}', host='${db_host}'
${queryResults}    Query    ${query}
Disconnect From Database
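Under the hood, DatabaseLibrary just drives a DB API 2.0 module (psycopg2 in the question). The keywords above map roughly onto the following Python, shown with the stdlib sqlite3 module standing in for psycopg2 so the sketch is self-contained (the table and data are invented for illustration):

```python
import sqlite3  # stand-in for psycopg2; both implement DB API 2.0

# Connect To Database  ->  module.connect(...)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Set up a tiny xyz table so the query has something to return
cur.execute("CREATE TABLE xyz (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO xyz VALUES (?, ?)", [(1, "a"), (2, "b")])

# Query  ->  execute + fetchall;  Row Count  ->  length of the result
cur.execute("SELECT * FROM xyz")
output = cur.fetchall()
row_count = len(output)

# Get From List ${output} 0 -> output[0]; its first column -> output[0][0]
first_value = str(output[0][0])

# Disconnect From Database  ->  close()
conn.close()
```

If the plain-Python equivalent also hangs against your Postgres instance, the problem is in the connection or a lock on the table, not in Robot Framework itself.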
I created a vertex of type Message in VehicleHistoryGraph database and loaded 50,000 vertices of this type to it.
When I tried to delete all the vertices at once using this SQL statement -
DELETE VERTEX MESSAGE
I received unexpected errors saying that some of the vertices had already been deleted (even though I did not delete any vertices after loading), and the vertices were not all deleted at once as expected (see below).
orientdb> connect remote:localhost/databases/VehicleHistoryGraph admin admin
Connecting to database [remote:localhost/databases/VehicleHistoryGraph] with user 'admin'...OK
orientdb {db=VehicleHistoryGraph}> DELETE VERTEX MESSAGE
Error: com.orientechnologies.orient.core.exception.OCommandExecutionException: Error on execution of command: sql.select from Message
Error: java.lang.IllegalStateException: The elements #26:38028 has already been deleted
Error: com.orientechnologies.orient.core.exception.ORecordNotFoundException: The record with id '#26:38028' not found
Error: com.orientechnologies.orient.core.exception.ORecordNotFoundException: The record with id '#26:38028' not found
orientdb {db=VehicleHistoryGraph}> SELECT COUNT(#rid) FROM Message
----+------+-----
# |#CLASS|COUNT
----+------+-----
0 |null |13546
----+------+-----
1 item(s) found. Query executed in 1.538 sec(s).
orientdb {db=VehicleHistoryGraph}> DELETE VERTEX MESSAGE
Delete record(s) '11896' in 107.861000 sec(s).
orientdb {db=VehicleHistoryGraph}> SELECT COUNT(#rid) FROM Message
----+------+-----
# |#CLASS|COUNT
----+------+-----
0 |null |1820
----+------+-----
1 item(s) found. Query executed in 0.167 sec(s).
orientdb {db=VehicleHistoryGraph}> DELETE VERTEX MESSAGE
Delete record(s) '1820' in 6.320000 sec(s).
orientdb {db=VehicleHistoryGraph}>
What went wrong? Why? Is it a bug?
Is the problem still present? If you try with a recent version (2.2.x onwards), there are two useful commands:
CHECK DATABASE (checks the integrity of a database; if the database contains graphs, their consistency is checked)
REPAIR DATABASE (repairs the DB)
After mongodump, I did mongorestore, which seemed to work fine:
heathers-air:db heathercohen$ mongorestore -v -host localhost:27017
2015-02-06T11:22:40.027-0800 creating new connection to:localhost:27017
2015-02-06T11:22:40.028-0800 [ConnectBG] BackgroundJob starting: ConnectBG
2015-02-06T11:22:40.028-0800 connected to server localhost:27017 (127.0.0.1)
2015-02-06T11:22:40.028-0800 connected connection!
connected to: localhost:27017
2015-02-06T11:22:40.030-0800 dump/langs.bson
2015-02-06T11:22:40.030-0800 going into namespace [dump.langs]
Restoring to dump.langs without dropping. Restored data will be inserted without raising errors; check your server log
file dump/langs.bson empty, skipping
2015-02-06T11:22:40.030-0800 Creating index: { key: { _id: 1 }, name: "_id_", ns: "dump.langs" }
2015-02-06T11:22:40.031-0800 dump/tweets.bson
2015-02-06T11:22:40.031-0800 going into namespace [dump.tweets]
Restoring to dump.tweets without dropping. Restored data will be inserted without raising errors; check your server log
file size: 4877899
30597 objects found
2015-02-06T11:22:41.883-0800 Creating index: { key: { _id: 1 }, name: "_id_", ns: "dump.tweets" }
When I try to access the data, though, it's still empty and looks the way it did before the restore:
> show dbs
admin (empty)
dump 0.078GB
local 0.078GB
tweets (empty)
twitter (empty)
It says it found 30597 objects; where did they go?
They went into the dump database, and then into the collections dump.tweets and dump.langs. The fact that the files are contained in the folder dump means that mongorestore thinks they should be restored to the database dump (it is inferred from the path). The verbose output even explicitly states that the data is being placed into dump.langs and dump.tweets specifically.
If you specify the database you wish to restore to (with -d) and restore the specific files, you will be able to restore the documents to the database you want. Or, you can simply have a look in the dump database by running:
use dump;
db.tweets.find();
db.langs.find();
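The inference rule described above can be illustrated with a short Python sketch (this is not mongorestore's actual code, just the same rule: the parent folder becomes the database, and the .bson filename becomes the collection):

```python
from pathlib import Path

def infer_target(bson_path):
    """Mimic how mongorestore (without -d) infers the target
    database and collection from a dump file's path."""
    p = Path(bson_path)
    return p.parent.name, p.stem

print(infer_target("dump/tweets.bson"))  # ('dump', 'tweets')
print(infer_target("dump/langs.bson"))   # ('dump', 'langs')
```

So running something like mongorestore -d twitter dump/tweets.bson overrides the inferred database name and places the documents in twitter.tweets instead.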