I want to clone the shell of a Snowflake database (metadata only, no data). Is this possible? I have checked the documentation and haven't found a solution.
If you're just trying to get an empty shell of an existing database, you could clone the entire database, and then script out a truncate of all of the tables that exist in the database. The clone doesn't add any data, and the truncate would be very quick on the clone (while not affecting the original tables).
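For example, a minimal sketch (SRC_DB and SHELL_DB are hypothetical names): clone the database, then generate the TRUNCATE statements from the clone's information schema and run the generated output.
CREATE DATABASE SHELL_DB CLONE SRC_DB;
-- Generate one TRUNCATE statement per table in the clone:
SELECT 'TRUNCATE TABLE SHELL_DB.' || TABLE_SCHEMA || '.' || TABLE_NAME || ';'
FROM SHELL_DB.INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';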
Reached here looking for an answer to a similar question.
This is the alternative that I used:
CREATE TABLE TRGT_DB.TRGT_SCH.MY_TABLE AS SELECT * FROM SRC_DB.SRC_SCH.MY_TABLE WHERE 1=2; -- WHERE 1=2 matches no rows, so only the table structure is copied, not the data
I am getting "ERROR: cannot execute TRUNCATE TABLE in a read-only transaction" in Heroku PostgreSQL. How can I fix it?
I am trying to TRUNCATE a table.
I am using the Heroku Postgres.
I have tried to figure out in the UI how I could change my permissions or something similar so that I could run more than just read-only transactions, but with no success.
This is currently possible: you have to set the transaction to READ WRITE when creating a dataclip. Here is an example:
BEGIN;
SET TRANSACTION READ WRITE;
DELETE FROM mytable WHERE id > 2130;
COMMIT;
The feature you're looking at (Heroku Dataclips docs here) is intentionally read-only. It's a reporting tool, not a database management tool. Its express purpose is to surface data to a wider group of people associated with a project without the risk of someone accidentally (or otherwise) deleting or modifying data improperly. There is no way to make Dataclips read-write.
If you want full control to delete or modify data, you'll need to use an appropriate interface: psql, or pgAdmin if you prefer a GUI.
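For instance (a sketch assuming the Heroku CLI is installed; my-app is a hypothetical app name), you can open a psql session against the database and run the TRUNCATE from there:
(from bash) heroku pg:psql -a my-app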
I had the same problem, but the error was solved by adding the following:
BEGIN; SET TRANSACTION READ WRITE;
(Without COMMIT;)
I don't get errors anymore, but I can't see my new table in the Schema Explorer. Do you know why?
I have searched for an answer to the following question for a long time and have found nothing except how to restore individual tables. I have a directory-based dump of a PostgreSQL database. What I want to do is restore this database based on some data condition, like: WHERE [SomeField] > 10. Is it possible? If yes, could you please advise me how to do this? Thank you.
So, I am relatively new to MySQL, and recently I was asked to create a query that uses the BACKUP command to copy a table to a given destination folder. I was given text from an SQL tutorial on w3schools.com; however, when I attempted to follow its format, I was informed that "BACKUP is not valid at this position, expecting: EOF, BEGIN, CATCH, CHECKSUM, COMMIT, DEALLOCATE, ...". So I was wondering: what is the proper syntax for using the BACKUP command in a query?
I have attempted several actions to resolve the issue, but none of them were successful. I have tried:
1. Executing the query with and without the underlying table saved in a file folder.
2. Using BACKUP on a database, in case the problem was with tables.
3. Starting with BEGIN, DO, and mysqldump.
4. Removing TABLE.
5. Adding an opening parenthesis after the name of the table and a closing parenthesis after the name of the destination.
I do not feel comfortable sharing my own table and destination folder, but here is what I was supposed to use for reference. My code follows the same format:
What I was supposed to use for reference
BACKUP DATABASE is not part of MySQL syntax. I believe you may be thinking of the SQL Server statement.
For MySQL, you will likely want to use the mysqldump utility (which runs from the shell and is a separate concept from SQL queries), or possibly some solution involving the SELECT ... INTO OUTFILE variant of the SELECT ... INTO statement.
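For example, a minimal sketch (mydb, mytable, and the file paths are hypothetical; the INTO OUTFILE target must be a directory the server is allowed to write to, see the secure_file_priv setting):
(from bash, not the MySQL prompt) mysqldump -u root -p mydb > /tmp/mydb_backup.sql
(from the MySQL prompt)
SELECT * FROM mytable
INTO OUTFILE '/var/lib/mysql-files/mytable.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';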
I'm trying to migrate our database engine from MsSql to PostgreSQL. In our automated tests, we restore the database back to a "clean" state at the start of every test. We do this by computing the "diff" between the working copy of the database and the clean copy (table by table), then copying over any records that have changed and deleting any records that have been added. So far this strategy seems to be the best way to go for us because, per test, not a lot of data is changed, and the size of the database is not very big.
Now I'm looking for a way to do essentially the same thing with PostgreSQL. Before committing to that approach, I was wondering whether anyone else has done something similar, and what method you used to restore data in your automated tests.
On a side note - I considered using MsSql's snapshot or backup/restore strategy. The main problem with these methods is that I have to re-establish the db connection from the app after every test, which is not possible at the moment.
If you're okay with some extra storage, and if you (like me) are not interested in re-inventing the wheel by checking for diffs in your own code, you should try creating a new DB per run via the templates feature of the createdb command (or the CREATE DATABASE statement) in PostgreSQL.
For example:
(from bash) createdb todayDB -T snapshotDB
or
(from psql) CREATE DATABASE todayDB TEMPLATE snapshotDB;
Pros:
In theory, always the exact same DB by design (no custom logic)
Replication is a file transfer, not a DB restore, so it takes far less time (it doesn't re-run SQL, recreate indexes, restore tables, etc.)
Cons:
Takes 2x the disk space (although the template could live on low-performance storage such as NFS)
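One caveat worth adding (based on PostgreSQL's documented behavior, not something the answer above covers): CREATE DATABASE refuses to copy a template database that has active connections, so each run needs to drop the previous copy and keep all sessions off the template. A minimal sketch, reusing the names above:
(from psql)
DROP DATABASE IF EXISTS todayDB;
CREATE DATABASE todayDB TEMPLATE snapshotDB;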
For my specific situation, I decided to go back to the original solution, which is to compare the "working" copy of the database with the "clean" copy of the database.
There are 3 types of changes to undo:
For INSERTed records: find max(id) in the clean table and delete any record in the working table that has a higher id.
For UPDATEd or DELETEd records: find all records in the clean table EXCEPT the records found in the working table, then UPSERT those records into the working table (see the sketch below).
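A minimal sketch of those two steps in PostgreSQL, assuming (hypothetically) that the clean and working copies of a table mytable live in schemas named clean and working, with an integer primary key id:
-- Undo INSERTs: drop working rows whose id is beyond the clean table's max
DELETE FROM working.mytable
WHERE id > (SELECT max(id) FROM clean.mytable);
-- Undo UPDATEs/DELETEs, with delete-then-insert standing in for the UPSERT:
-- first remove working rows that no longer match their clean versions...
DELETE FROM working.mytable w
USING (
    SELECT * FROM clean.mytable
    EXCEPT
    SELECT * FROM working.mytable
) changed
WHERE w.id = changed.id;
-- ...then re-insert the clean versions of the changed or deleted rows
INSERT INTO working.mytable
SELECT * FROM clean.mytable
EXCEPT
SELECT * FROM working.mytable;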
We were building an in-house tool with PostgreSQL 9.1 as the database. By accidentally running a delete script, we lost the data in three tables.
We didn't have a backup. :(
A try at the manuals didn't help. But a quick look at the data files in /PostgreSQL/9.1/data/base/ showed that the data is not deleted (at least not completely).
Is there a way to recover this data?
Thanks Daniel, the directions were useful.
And luckily we found a tool to do just that. Find the tool and instructions at the link below:
pg_dirtyread
The instructions provided were simple and accurate.
Additionally, here is what we had to do:
Create temporary tables for restoration
Instead of the plain SELECT statement in the instructions, use INSERT statements to insert into the backup tables (see the sketch after this list)
Filter out the corrupted data (manually)
Insert back into the original tables
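A minimal sketch of those steps, assuming (hypothetically) a damaged table mytable with columns id and name; the pg_dirtyread call follows the pattern from the tool's instructions:
-- 1. Create a table to hold the recovered rows
CREATE TABLE mytable_restore (id integer, name text);
-- 2. Pull the rows pg_dirtyread can still see into the backup table
INSERT INTO mytable_restore
SELECT * FROM pg_dirtyread('mytable') AS t(id integer, name text);
-- 3. (filter out the corrupted rows by hand)
-- 4. Copy the surviving rows back into the original table
INSERT INTO mytable
SELECT * FROM mytable_restore;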
There were corrupted entries, but not many, as we were able to stop the running service immediately and avoid any further updates to those tables.
Thanks to the OmniTI Labs team. This tool saved our day (night :) ).