Storage Manager in pgAdmin - postgresql

I am trying to back up one of my databases with the PostgreSQL pgAdmin tool. I used this tutorial:
backup database with pgAdmin
After finishing that, I want to get the backup file. The tutorial says we can use the Storage Manager to download the backup file to the client machine. So, following this link, I tried to access the Storage Manager. It says "You can access Storage Manager from the Tools Menu", but on my system there is no option with that name:
What is the problem, and how can I obtain the database backup file?

If you are not running pgAdmin4 in server mode, then there is no storage manager. The storage manager is only relevant when the computer from which you run the pgAdmin4 GUI is different from the computer where the pgAdmin4 app-server is running.
When you took the backup, you told it where to save the file, although not in a very user-friendly way. It asks for a filename, and there are three dots you can click to browse for a directory in which to put the file. But if you don't avail yourself of the three dots, then you don't know where it is going to put the file; it just uses an apparently OS-dependent default and doesn't tell you what it is. I usually find it in my "Documents" folder. (Well, I usually don't use pgAdmin4 in the first place, as it makes everything harder than just using the command line, but when I do use it...)
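For reference, the command-line equivalent is pg_dump, which is what the pgAdmin backup dialog runs behind the scenes; here is a minimal sketch, assuming a database named mydb (the path and connection details are placeholders):
# custom-format dump written to a path you choose explicitly
pg_dump -h localhost -U postgres -F c -f /path/to/mydb.backup mydb
You can then restore that file with pg_restore, or hand it back to pgAdmin's Restore dialog.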

Related

How to backup postgres db hosted on cloud with pgadmin4?

I'm hosting my db using AWS RDS and I'm trying to back up tables. However, once it's finished backing up, where is the downloaded file on my computer?
There doesn't seem to be a path to save the file.
I've checked a couple of answers and others are having the same issue:
https://stackoverflow.com/a/29246636/11110509
The "Filename" element in that dialog box lets you pick a directory as well as file name. That is where it is. If you just typed in a filename without giving a path, then on Windows it is probably in your user's "Documents" folder.

Where is the DATA_PUMP_DIR in SQL Developer

I'm trying to import a .dmp file using the Data Pump Import tool in Oracle SQL Developer.
I'm connected to an Oracle database running in a container on my local machine.
When I get to the step where I specify where the dump file is to import, where should I place the .dmp file?
DATA_PUMP_DIR is a default Oracle directory object. It isn't part of SQL Developer; the import tool is really just giving you a GUI equivalent of running impdp from the command line.
You can find the operating system location that Oracle directory object points to by querying the data dictionary:
select directory_path from all_directories where directory_name = 'DATA_PUMP_DIR';
The path that returns is on the database server (in your case that'll be inside your container too), and your dump file needs to go there.
You might want to create additional directory objects pointing to other locations, and grant suitable privileges to users to be able to access them; but they all need to be on the DB server and read/writable by the Oracle process owner on that server.
(They could be remote filesystems mounted on the server, they don't necessarily have to be local storage, but that's another issue and more operating-system specific. Again, in your case, you might be able to share a folder on your local machine with the container, if you don't want to copy the file into the container.)
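To illustrate that last point, here is a sketch of creating and granting an additional directory object, run as a privileged user; the object name, path, and grantee (scott) are placeholders:
-- the path must already exist on the database server (or inside the container)
create directory data_import_dir as '/u01/app/oracle/import';
grant read, write on directory data_import_dir to scott;
-- confirm where it points
select directory_path from all_directories where directory_name = 'DATA_IMPORT_DIR';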

How do I view a PostgreSQL database on heroku with a GUI?

I have a Rails app on Heroku that is using a Postgres database. My database has > 40 tables and > 10,000 rows. I would like to delete a lot of data, but it would be much easier if I were able to view and interact with it in a GUI table. I can access my data in the Rails console, but it's taking too long.
pgweb is a great cross-platform GUI, and it's easy to connect to your Heroku Postgres when launching from the command line.
I installed via Homebrew on a Mac (brew install pgweb), but instructions for other platforms are listed on the site. Here's how I launch pgweb connected to a Heroku Postgres DB:
heroku config:get DATABASE_URL | xargs pgweb --url
And if you want to connect to your localhost:
pgweb --host localhost
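If you prefer passing a full connection string locally as well, the same --url flag works; the database name and credentials here are placeholders:
pgweb --url postgres://postgres:secret@localhost:5432/mydb?sslmode=disable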
I'm a little late here, but this may help someone else who stumbles across this thread...
Go to your Heroku app's dashboard (through the website) > Settings > "Reveal Config Vars" > DATABASE_URL, and paste that URL into the browser.
I use TablePlus for database management; when I paste the link into the browser, it asks if it can open TablePlus, and then I can edit my production database in real time just like I would in development.
I'm not sure what pasting the URL into the browser will do if you don't have TablePlus. I assume it will request to open any other SQL management app you might have.
As slumdog wrote in the comment to your question, you can use pgAdmin, which comes with your local Postgres installation.
This article explains how to connect your remote Heroku db with pgAdmin, using the Heroku credentials: https://medium.com/@vapurrmaid/getting-started-with-heroku-postgres-and-pgadmin-run-on-part-2-90d9499ed8fb
From the article:
"pgAdmin is a GUI for postgresql databases that can be used to access and modify databases that not only exist locally, but also remotely. For a fresh install of pgAdmin, the dashboard likely contains only one server. This is your local server...
We have to configure a new remote server with its credentials.
right click server(s) > create > server …
Fill out the following:
Name: This is solely for you. Name it whatever you want, I chose ‘Heroku-Run — On’
Under the connection tab: hostname/address. If you go back to your datastores ‘reveal credentials’, this is the host credential. It should look like --**...amazonaws.com
Keep the port at 5432, unless your credentials list otherwise
Maintenance database — this is the database field in the credentials
Username — this is the user field in the credentials
Password — the password field in the credentials. I highly advise checking save password so that you don’t have to copypasta this every time you want to connect.
In the SSL tab, mark SSL mode as require
At this point, if we were to hit ‘save’ (please don’t), something very strange would happen. You’d see hundreds if not thousands of databases appear in pgAdmin. This has to do with how Heroku configures their servers. You’ll still only have access to your specific database, not those of others. In order to avoid parsing so many databases, we have to white list only those databases we care about.
go to the Advanced tab and under db restriction copy the database name (it’s the same value as the Maintenance database field filled earlier)."
The article contains other useful guidelines and screenshots.
Try the DBeaver GUI.
https://dbeaver.io/
Download it; after that, you can connect to your Heroku Postgres using the database credentials.
You can use Heroku's hosted DB viewer on the Overview pane of your dashboard:
Create and click the Dataclip:
The Dataclip GUI is fairly easy to use; you can type and customize SQL queries at the top, etc.
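For example, since the goal is deleting a lot of data, a Dataclip query to find the biggest tables first might look something like this (just a sketch; it reads the standard Postgres statistics views, so no schema-specific names are needed):
-- approximate live row counts per table, largest first
select relname as table_name, n_live_tup as approx_rows
from pg_stat_user_tables
order by n_live_tup desc
limit 20;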

How to load data from S3 to PostgreSQL RDS

I need to load data from S3 to Postgres RDS (around 50-100 GB). I don't have the option to use AWS Data Pipeline, and I am looking for something similar to using the COPY command to load data from S3 into Amazon Redshift.
I would appreciate any suggestions on how I can accomplish this.
Originally, this answer tried to use the S3 to Postgres RDS functionality. That whole enterprise failed (see below).
The way I have finally been able to do this is:
Set up an EC2 instance with psql installed (see below, near the end of the post)
Copy the relevant CSVs to import from S3 to the local instance
Use the psql \copy command to import the files
This last part is really, really important. If you use the SQL COPY command, the entire RDS Postgres role structure will frustrate you to no end. It has a wonky SUPERRDSADMIN role which is not very super at all. However, if you use the psql \copy command, you apparently can do anything. I have confirmed this to be the case and have started my uploads successfully. I will come back and re-edit this post (time permitting) to add relevant documentation steps for the above.
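As a rough sketch of steps 2 and 3 (the bucket, file, table, and host names below are placeholders), the commands on the EC2 instance look roughly like this:
# pull the CSV from S3 onto the instance
aws s3 cp s3://my-bucket/exports/users.csv /tmp/users.csv
# \copy runs client-side, so it does not fight the RDS role restrictions the way COPY does
psql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U master -d mydb -c "\copy users FROM '/tmp/users.csv' WITH (FORMAT csv, HEADER)"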
Caveat Emptor: The post below was all the original work I had done trying to get this implemented. I don't want to bury the lead: despite multiple efforts (including what can only be described as pathetic tech support from AWS), I don't believe this feature is ready for prime time. Despite a very simple, easy-to-replicate test environment, AWS has not provided an effective way to keep the copy statement from failing as follows:
The actual call to aws_s3.table_import_from_s3(...) reports a permission problem between RDS and S3. From my research with psql, this appears to be a C library, probably installed by AWS.
NOTICE: CURL error code: 28 when attempting to validate pre-signed URL, 1 attempt(s) remaining
NOTICE: HINT: make sure your instance is able to connect with S3.
S3 to Postgres RDS Functionality Now Added
On 2019-04-24 AWS released functionality allowing a Postgres RDS to load directly from S3. You can read the announcement here, and see the documentation page here.
I am sharing this with the OP because this appears to be the AWS-supported way of solving the question posed.
Key summary points:
Requires Postgres 11.1 or greater
Need access to psql and the ability to connect it to the RDS instance
Need to install the aws_s3 extension which pulls in aws_commons.
You can get to the S3 bucket by specifying credentials or by assigning IAM roles to RDS
It advertises supporting all of the same data formats as the postgres COPY command
It currently appears to support only a single file at a time (i.e. no regex)
The instructions are fairly detailed and provide a variety of paths to configuring (AWS CLI scripts, Console instructions, etc). Additionally, the option to use your IAM keys rather than have to set-up roles is nice.
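For reference, the documented usage pattern looks roughly like this; the table, bucket, key, and region are placeholders, and you should check the AWS docs linked above for the exact signature on your engine version:
-- one-time setup (needs a sufficiently privileged user); CASCADE pulls in aws_commons
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
-- import a single CSV object into an existing table; with an IAM role attached to the RDS
-- instance no credentials argument is needed, otherwise add
-- aws_commons.create_aws_credentials(...) as a final argument
SELECT aws_s3.table_import_from_s3(
  'my_table',
  '',
  '(format csv)',
  aws_commons.create_s3_uri('my-bucket', 'exports/my_table.csv', 'us-east-1')
);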
I did not find a way to download just psql, so I had to bring a full Postgres install down to my Mac, but that was no big deal with brew:
brew install postgres
and since the DB service does not get activated, it is the quickest way to get psql.
Update: I decided that having psql on my Mac was a security hole (port forwarding, etc.). I found that there is a simple Postgres install available for Amazon Linux 2 under the Amazon Linux Extras rubric. The install command is fairly simple on your Amazon Linux instance:
sudo amazon-linux-extras install postgresql10
psql is fairly easy to use; however, it is important to keep in mind that any instructions to psql itself are prefixed with a \. Documentation on psql can be found here. I recommend going through it at least once before executing the AWS-recommended scripts.
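For illustration, the meta-commands I leaned on most were \dt (list tables), \d (describe a table), \copy (client-side import), and \q (quit); the table and file names below are placeholders:
\dt
\d my_table
\copy my_table FROM '/tmp/my_table.csv' WITH (FORMAT csv, HEADER)
\q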
If you run tight security and keep access to your RDS instances seriously restricted (which I do), don't forget to open up the port from the Amazon Linux instance running Postgres to your RDS instance.
If your preference is a GUI, then you can try pgAdmin4. It is the AWS-recommended way of connecting to RDS Postgres instances according to the docs. I was unable to get any of the SSH tunneling features to work (which is why I ended up doing the localhost SSH mapping that I used for psql). I also found it to be rather buggy in other ways. Reading reviews of the product, it seems that version 4 may not be the most stable of releases.
http://docs.aws.amazon.com/redshift/latest/dg/t_loading-tables-from-s3.html
Use the COPY command to load a table in parallel from data files on Amazon S3. You can specify the files to be loaded by using an Amazon S3 object prefix or by using a manifest file.
The syntax to specify the files to be loaded by using a prefix is as follows:
copy <table_name> from 's3://<bucket_name>/<object_prefix>' authorization;
update
Another option is to mount S3 and use a direct path to the CSV with the COPY command. I'm not sure if it will handle 100 GB effectively, but it's worth trying. Here is a list of software options.
Yet another option would be "parsing" the S3 file part by part with something like what is described here, writing it to a file, and using COPY from a named pipe, as described here.
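A hedged sketch of the named-pipe idea (bucket, file, table, and host names are placeholders; aws s3 cp streams an object to stdout when the destination is -):
# create the pipe and start streaming the S3 object into it in the background
mkfifo /tmp/s3_pipe
aws s3 cp s3://my-bucket/big_export.csv - > /tmp/s3_pipe &
# COPY (or client-side \copy) then reads from the pipe as if it were a regular file
psql -h myhost -d mydb -c "\copy my_table FROM '/tmp/s3_pipe' WITH (FORMAT csv)"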
And the most obvious option, to just download the file to local storage and use COPY, I don't cover at all.
Also worth mentioning is s3_fdw (status: unstable). The readme is very laconic, but I assume you could create a foreign table pointing to the S3 file, which in turn means you can load the data into another relation...

How can I run PostgreSQL DB service under Local System account?

I'm using a PostgreSQL DB in my application, and I used to create a special Windows user account to run the DB service.
Now I need to run the PostgreSQL service under the Local System account. Is there a configuration in PostgreSQL to specify the user account that the service runs under?
Thanks,
Computer Management -> Services -> select the Postgres server service -> right-click -> Properties -> check the Log On tab, just like any other Windows service.
(I might be a bit off with the naming, as I don't have access to a Windows machine at home, but I'm sure you can improvise.)
Download the zip-archived binaries from the PostgreSQL site. Extract the contents. Create a folder, say 'data', for storing the db files etc. Open Windows Advanced System Configuration -> Environment Variables.
Create a new system variable called PGDATA and set its value to the complete path of the data folder, as in 'C:\PostgreSQL\data' (without the single quotes).
Next, in the Path variable, include the complete path to the bin folder, as in 'C:\PostgreSQL\bin;...'.
Once done, do the following...
Open a command prompt (as Admin on Vista and Windows 7 systems). Type 'initdb' and press Enter. This will do the initial setup.
In the same command prompt window, type 'pg_ctl register -N PostgreSQL -U LocalSystem' and press Enter.
Open Run, type services.msc, and check that the service 'PostgreSQL' has been created and that you can actually start it.
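For convenience, here is a minimal sketch of that sequence from an elevated command prompt, assuming the binaries were extracted to C:\PostgreSQL (the paths are placeholders):
rem set PGDATA for this session and system-wide, and put the binaries on the PATH
set PGDATA=C:\PostgreSQL\data
setx PGDATA "C:\PostgreSQL\data" /M
set PATH=C:\PostgreSQL\bin;%PATH%
rem initialise the cluster, register the Windows service, then start it
initdb
pg_ctl register -N PostgreSQL -U LocalSystem
net start PostgreSQL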
I don't believe that you need to do anything special to run this under a local system account. We use PostgreSQL databases here and we install them on local system accounts all the time. I have never seen a configuration setting in the installation that allows you to indicate a system account for the service.