Export a PostGIS table to GeoPackage using the DB Manager in QGIS - postgresql

I am working in QGIS through the DB Manager, running some spatial queries. After I create new tables with the queries, I need to export them as new GeoPackages.
I've tried using the "Export to Vector file" option inside the DB Manager, but I get the following error message:
Error 2 Creation of data source failed (OGR error:
sqlite3_open(/Users/xxx/Documents/xxx/xxx/xxx/xxxx/xx/new_geopackage_layer.gpkg)failed:
unable to open database file)
I've read a couple of posts saying I needed to create an empty GeoPackage first and then export the table and save it inside that GeoPackage, but that did not work either. When I try to save inside an existing GeoPackage, I get an error saying:
"geopackage.gpkg already exists. Do you want to replace it? A file or
folder with the same name already exists in the folder xxx Replacing
it will overwrite its current contents."
If I choose to overwrite then I get a second error message saying:
" Error 1 Unable to create the datasource.
/Users/xxx/Documents/xxx/xxx/xxx/xxx/xxxx/new_geopackage.gpkg exists
and overwrite flag is false."
All I want is to be able to run spatial queries inside QGIS and export the tables created with those queries as GeoPackages.
It seems that, as of now, I won't be able to do this from inside QGIS and will instead need to use the ogr2ogr command to export to any file type.
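I assume the ogr2ogr call would be something along these lines, where the output file, connection details and schema/table name are all placeholders:
ogr2ogr -f GPKG output.gpkg PG:"host=localhost dbname=mydb user=myuser password=mypassword" "myschema.mytable"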
Any help would be really appreciated.
Thank you

Related

BIRT not being able to find available fields for a MongoDB datasource dataset

I'm trying to connect a MongoDB database to BIRT and create a dataset. But after I connect to the database (the ping succeeds) and try to set up the dataset by specifying a collection name, the following error comes up:
Unable to find available fields. Invalid collection name clinic.
It seems to be an issue with the mongodb-java driver.
Go to the plugins directory of your Eclipse installation, delete the org.eclipse.orbit.mongodb_2.10.1.v20130422-1135.jar file, and add mongo-java-driver-2.14.3.jar there.

LocalDB does not create database file if it's missing

I use this connection-string:
Server=(localdb)\\MyInstance;Timeout=30;Database=MyDB;AttachDBFilename="C:\Temp\MyDB.mdf";Trusted_Connection=True;
Then I run a migration from my code using
dbContext.Database.Migrate();
Normally, this "simply" works: the DB is not just migrated, its file is also created for it.
However, on the device of a colleague, the same code results in this error message:
System.Data.SqlClient.SqlException: "Cannot attach the file 'C:\Temp\MyDB.mdf' as database 'MyDB'."
If I give my database files to my colleague and he places them in the appropriate directory first, everything works as expected, and the other code in that program can access everything in it as it normally would.
We've tried different paths and always checked the file-system rights. LocalDB or Entity Framework (I'm not sure which is normally responsible for creating database files) simply won't create the database file if it's missing on his device.
Are there any switches causing this? Can I explicitly tell localdb with the connection-string that it should create the database file?

Insert data into Redshift from Windows txt files

I have 50 txt files on Windows and I would like to insert their data into a single table on Redshift.
I created the basic table structure and now I'm having issues with inserting the data. I tried using the COPY command from SQL Workbench/J, but it didn't work out.
Here's the command:
copy feed
from 'F:\Data\feed\feed1.txt'
credentials 'aws_access_key_id=<access>;aws_secret_access_key=<key>'
Here's the error:
-----------------------------------------------
error: CREDENTIALS argument is not supported when loading from file system
code: 8001
context:
query: 0
location: xen_load_unload.cpp:333
process: padbmaster [pid=1970]
-----------------------------------------------;
Upon removing the Credentials argument, here's the error I get:
[Amazon](500310) Invalid operation: LOAD source is not supported. (Hint: only S3 or DynamoDB or EMR based load is allowed);
I'm not a UNIX user so I don't really know how this should be done. Any help in this regard would be appreciated.
@patthebug is correct in that Redshift cannot see your local Windows drive. You must push the data into an S3 bucket. There are some additional sources you can use per http://docs.aws.amazon.com/redshift/latest/dg/t_Loading_tables_with_the_COPY_command.html, but they seem outside the context you're working with. I suggest you get a copy of CloudBerry Explorer (http://www.cloudberrylab.com/free-amazon-s3-explorer-cloudfront-IAM.aspx), which you can use to copy those files up to S3.
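For illustration, once the files are uploaded, the COPY would look roughly like this (the bucket name, prefix and delimiter are assumptions, so adjust them to match your files):
copy feed
from 's3://your-bucket/feed/'
credentials 'aws_access_key_id=<access>;aws_secret_access_key=<key>'
delimiter '\t';
Pointing the FROM clause at the prefix loads all 50 files in a single COPY, and the CREDENTIALS argument is accepted here because the source is S3 rather than the local file system.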

How to run a SQL Script that builds my DB and schema in OrientDB?

I'm using OrientDB 2.0.2. I'm writing a SQL script that will create a new DB, construct the schema, and then populate the DB with static initial data. I know that the database will be automatically created if I simply start loading data, and that classes will automatically be created if I start inserting records, but I want to make sure that my classes have the right inheritance and Properties and Indexes before I begin my data load.
The SQL syntax is easy - I just need to know how to run the SQL script from the command line in order to instantiate the DB, as if I were deploying a new instance of my DB at a new location or for a new customer.
Thanks.
Assuming you installed OrientDB in the base directory ORIENTDB_HOME, go to the $ORIENTDB_HOME/bin directory. In this directory, execute the OrientDB SQL script with the simple command:
*nux:
$ ./console.sh myscript.osql
Windows:
> console.bat myscript.osql
myscript.osql is a simple text file containing all the SQL commands (.osql is a typical file extension for OrientDB SQL scripts).
See the documentation for details on how to create a new database. Example:
CREATE DATABASE plocal:/usr/local/orient/databases/demo/demo
or
CREATE DATABASE remote:localhost/trick root verySecretPassword
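As a rough sketch (the class, property and record names below are made up for illustration), such a script could create the database, build the schema with inheritance, properties and indexes, and then load the static initial data:
CREATE DATABASE plocal:/usr/local/orient/databases/demo/demo
CREATE CLASS Person
CREATE PROPERTY Person.name STRING
CREATE CLASS Employee EXTENDS Person
CREATE PROPERTY Employee.employeeId INTEGER
CREATE INDEX Employee.employeeId UNIQUE
INSERT INTO Employee (name, employeeId) VALUES ('Alice', 1)
The console connects to the database it has just created, so the schema and insert commands that follow run against it.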

Failed to import gs://bucket_name/Cloud.sql

I have stored everything needed for my database in phpMyAdmin and exported the database from it. That was saved as Cloud.sql, so I then uploaded this SQL file to Google Cloud Storage with the help of this link: https://developers.google.com/cloud-sql/docs/import_export.
Now, after importing the contents of the .sql file using the Import option in the instance's actions, it shows the green working sign, and after a while it stops. When I check the Logs, it shows:
Failed to import gs://bucket_name/Cloud.sql: An unknown problem occurred (ERROR_RDBMS)
So, I am unable to find out the reason behind the error, as it's not clear. How can this be solved?
Google Cloud SQL probably doesn't know which database the gs://bucket_name/Cloud.sql commands apply to.
From https://groups.google.com/forum/#!topic/google-cloud-sql-discuss/pFGe7LsbUaw:
The problem is the dump doesn't contain the name of the database to use. If you add a 'USE XXX' at the top the dump where XXX is the database you want to use I would expect the import to succeed.
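For example (the database name here is a placeholder), the top of Cloud.sql would then start with something like:
USE my_database;
-- the rest of the phpMyAdmin dump (CREATE TABLE / INSERT statements) follows unchanged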
I had a few issues that were spitting out the ERROR_RDBMS error.
It turns out that Google actually does have more precise errors now, but you have to go to
https://console.cloud.google.com/sql/instances/{DATABASE_NAME}/operations
and there you will see a description of why the operation failed.