Does Postgres use local resources when connecting to a server? - postgresql

I am new to Postgres and wanted to understand: if I set up a remote server for Postgres and use pgAdmin to connect to the remote server, will it use local resources to run the queries?
I tried it on a user machine here and that seems to be the case; I would like the submitted queries to run on the server itself and not consume local resources. Any suggestions would be helpful.
Thanks
Saurabh

No, the query is run on the server. However, downloading and displaying the results can take some time and can consume resources on the client, depending on the size of the result set.
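For example (the table and column names below are purely illustrative), the server evaluates the whole query; pgAdmin only has to download and render the rows that are actually returned, so limiting the result set keeps the client light:
-- Filtering and sorting happen on the server; only 1000 rows travel to the client.
SELECT *
FROM measurements
ORDER BY recorded_at DESC
LIMIT 1000;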

Related

SQL Live Backup Over Intermittent Connection

I have a few PCs that have local PostgreSQL databases running, just logging data. Data is only ever inserted, never removed or updated. The remote PCs are connected to the internet by cellular modem and depending on their location, often do not have internet access. When they do have an internet connection I would like them to push a copy of their databases to a central location and keep the remote database up to date with any new data. Essentially, I need an 'rsync' for databases.
At first it seemed like what I need is to set up PostgreSQL Hot-Standby but I'm unsure if this is actually what I need because my situation seems to differ from the examples I've seen.
Each remote PC has a Postgres server with a single database that has a unique name, the tables within the DBs have generic names. I would like to synchronize these databases to a single remote Postgres server. I think this should be okay due to the unique DB names.
My connectivity is very intermittent, days to weeks without a connection. I've seen pgAdmin be very reliable despite a terrible (cellular) internet connection; if Postgres hot standby is the same, I may be alright.
As far as I can see, my options are either to set up PostgreSQL hot standby or to roll my own solution. I don't want to roll my own solution; however, it is simple enough if I can't find anything better: a Python daemon run by systemd that finds the diff between the local and remote DBs and then pushes the new rows from the local to the remote DB. But I'm sure someone has solved this problem, I just haven't found the solution yet.
You don't need hot standby (which is the PostgreSQL term for being able to run queries on the replicated database), but streaming replication. You need one standby server at the central location for each intermittently connected remote database server. If you use replication slots, the primary retains the WAL the standby still needs, so replication can always catch up after an outage rather than falling irrecoverably behind.
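A minimal sketch of the slot side, assuming one standby instance at the central site per remote PC (the slot name, host and user below are made up):
-- On the remote (primary) PC: create a physical replication slot so WAL is
-- retained locally while the cellular link is down.
SELECT pg_create_physical_replication_slot('central_standby_pc1');
-- On the matching standby at the central site, the recovery settings then
-- reference that slot, for example:
--   primary_conninfo  = 'host=pc1.example.net user=replicator'
--   primary_slot_name = 'central_standby_pc1'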

How do I get started if I want to use PostgreSQL for local use?

Good day,
Currently I use MS Access at home for several Databases (for personal use).
At work, I use PostgreSQL, which is infinitely better. I want to start using Postgres for my personal databases, but I don't know where to start.
I've tried reading the documentation, but still don't know how to start. I don't have a server at home; is it possible I can just make a local database/tablespace? Or would I have to host a virtual server?
Note that I am willing to use other open source databases if there is an easy option out there - MS access is just so... terrible.
Thanks,
So, it seems you have Windows at home. You just need to download the full installer for PostgreSQL:
http://www.postgresql.org/download/windows/
After installation, the Postgres server is automatically registered as a service on the local machine. That means the server will always run in the background, but you can disable that later, or just uninstall it.
After that, you can use pgAdmin (included in the default installation package) or other client tools to access the DB engine.
UPD: In pgAdmin, create a connection with these settings:
hostname - localhost;
port - 5432;
user and database - postgres (for testing purposes only; you should create your own user, database and tables with restricted rights later, see the sketch below).
The password for postgres (that is, the DB admin user) is set during the installation process.
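Once you are past testing, a minimal sketch of creating your own user and database (the names and password are placeholders) is to run, while connected as postgres:
-- A dedicated login role and a database it owns, so day-to-day work
-- does not happen as the postgres superuser.
CREATE ROLE myuser LOGIN PASSWORD 'change-me';
CREATE DATABASE mydb OWNER myuser;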
Server settings are stored here:
"C:\Program Files\PostgreSQL\9.3\data"
pg_hba.conf - Client Authentication Configuration File
postgresql.conf - Configuration File

impact of sqlserver Name change on production database

I am working on setting up replication between SQL Server instances in a non-trusted domain, but I have come to realize that I can only use the server name instead of the IP address of my server. I tried using the current server name I got from the sysadmin, but connecting with that server name still displays the same error:
SQL Server replication requires the actual server name to make a connection to the server.
Connections through a server alias, IP address, or any other alternate name are not supported.
Specify the actual server name, 'WIN-2JQ9ZRN3T'. (Replication.Utilities)
I ran a script to show my server name:
SELECT @@SERVERNAME
The result was WIN-2JQ9ZRN3T. This is very strange to me: I cannot connect with the name I got from my sysadmin through Management Studio, but over Remote Desktop I can connect using that same name.
Now I want to update my server name using this script I came across on the internet:
EXEC sp_dropserver 'old-server-name';
GO
EXEC sp_addserver 'real-server-name', 'local';
GO
But I don't know what the impact would be, because my predecessor configured a linked server on the same server. Please advise.
You have proposed a good solution: you have to update the server name accordingly and restart SQL Server, otherwise you'll not be able to use replication.
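One way to double-check, after the rename and the restart, is to compare the name SQL Server reports with the machine name the OS reports (the column aliases are just for readability):
-- Both columns should show the real server name once the rename has taken effect.
SELECT @@SERVERNAME AS sql_server_name,
       SERVERPROPERTY('MachineName') AS os_machine_name;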
Also, please check that the replication component is installed; this will save you another restart.
Regarding your concern with linked servers, please run
EXEC sp_linkedservers;
and check whether the old server name appears in the SRV_DATASOURCE column.
Another option could be to set up a SQL Server alias:
http://www.mssqltips.com/sqlservertip/1620/how-to-setup-and-use-a-sql-server-alias

How to check remote Oracle server is up and running

I have my Oracle server installed on a remote machine, and I want a script on my local machine which will check whether the Oracle server is up and running. I know this can be checked by creating a connection through SQL*Plus or JDBC, but in this case the Oracle client won't be present, and I am keeping the JDBC approach as my last option. So is there any other, simpler way to check this that can be easily implemented in a shell script?
Thanks
Not really. The only way to be certain that the database is responding to queries is to run a query on it, such as the venerable:
select dummy from dual

MAMP, One computer, two users, shared database

Two developers often share the same system, and both have local copies of the project and try to connect to a local database. Both users can see the database, but tables and their data are only visible to the database's original author.
We've tried giving all permissions to both users, but it seems the only thing that works is to duplicate the database.
Is there a way around this?
Thanks in advance!
You would probably be better off hosting a separate MySQL instance on its own machine, and then configuring your code to connect to that database instead of the MAMP-hosted one. That being said, you will need to open the port for the MAMP MySQL (usually port 8889) on the firewall of MAMP(0). Then the script on MAMP(1) needs to be configured to connect to the MAMP(0) database on the newly opened port.
You will also need to GRANT privileges for user(1) on the MAMP-host(0) database.
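A rough sketch of that grant, run on the MAMP(0) MySQL instance (the user name, host pattern and password are placeholders; the database name is taken from the connect string below):
-- Let the second developer's code reach the shared database over the network.
CREATE USER 'user1'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON es_forms_drupal.* TO 'user1'@'%';
FLUSH PRIVILEGES;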
A connect string from MAMP(1) would look like:
$db_url = 'mysqli://user:password@mamp0.local:8889/es_forms_drupal';
Hopefully that makes some sense.