Read-only Postgres database on Dropbox

This is on several MacBook Pros and an iMac, all running Monterey (macOS 12.3).
I would like to store a read-only Postgres database on my Dropbox, so that I can access it from multiple computers. I realize storing a writable database is unworkable, but I'm not trying to do that.
I created a database under /Users/me/Dropbox/dbs on my MacBook Pro. But the other machines don't see the contents of the directory.
I figured it was because postgres has a different UID on the various machines, so I tried doing chown -R me:staff on the database directory on the MacBook Pro. That made the directories and their contents visible in the Dropbox on the other machines, but now I can't start postgres on the MacBook Pro (haven't tried elsewhere yet) because of file permissions, even after setting go=rx on the file it complained about. I'm thinking that since the file is owned by group staff rather than by postgres, that's not going to work.
Any ideas? For example: can I create the database as me:staff rather than postgres:postgres?

Like Laurenz says, this is never going to work properly. The closest you could get, I suspect, is some foreign data wrapper pointing at the read-only shared directory, though that's not going to provide the experience you are looking for.
The SQLite wrapper might be your best bet. Store the shared data in SQLite and access it through PG. https://github.com/pgspider/sqlite_fdw
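If you go that route, the setup inside Postgres is short. A rough sketch, run through psql, where the database name, server name, file path, and table definition are all placeholders to adapt (sqlite_fdw must already be installed on each machine):
## one-time setup per machine
$ psql -d mydb <<'SQL'
CREATE EXTENSION sqlite_fdw;
CREATE SERVER dropbox_sqlite FOREIGN DATA WRAPPER sqlite_fdw
    OPTIONS (database '/Users/me/Dropbox/dbs/shared.db');
CREATE FOREIGN TABLE products (id integer, name text)
    SERVER dropbox_sqlite OPTIONS (table 'products');
SQL
Each machine then runs its own local Postgres server, and only the SQLite file lives in Dropbox.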

PostgreSQL is not Microsoft Access, it is a relational database with a client-server architecture. It works by starting the database server on the data directory and then connecting to the server process with a client. You can only run a single instance of the database server on a single data directory, and that is all you need, since many clients can connect to the server at the same time. The database directory has to be writable by the server and indeed is modified, even if you only perform read operations in the database.
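To make that concrete, the usual lifecycle looks something like this (the data directory path is only an example):
## create a data directory (one time)
$ initdb -D /Users/me/pgdata
## start the single server process on that directory
$ pg_ctl -D /Users/me/pgdata -l /Users/me/pgdata/server.log start
## any number of clients can now connect to the running server
$ psql -h localhost -d postgres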


I don't know how PostgreSQL created a user on my Mac

Two days ago, I started learning PostgreSQL. Most of the tutorials I followed online were either old or their code just wouldn't work on my Mac. I followed a lot of tutorials that did a lot of totally different things.
When I switched on my system today, I noticed PostgreSQL had created a user on my Mac. I don't know what this is, or maybe I used the wrong CLI command.
When I tried viewing the user, I saw a new account named postgres.
Should I delete this user, or does it have a function?
postgres user account
Creating a user account specifically for Postgres, commonly named postgres, is a normal part of a Postgres installation. Your installer app likely prompted you for a password to assign to this new user account.
One reason for this is security: The database’s data files and security configuration files are stored in folders owned by the postgres user. So if your main user account is hijacked, the intruder does not yet have access to the database (often the most valuable thing in storage). The intruder must jump through more hoops to compromise Postgres. Also, the separate ownership prevents other apps from inadvertently stomping on the Postgres files.
You will find Postgres is much more enterprise-oriented than other products such as MySQL. This means locking things down for security. Another example: Postgres by default is configured not to accept connections over the network. To enable connections from other computers, you must change the configuration. Inconvenient for the beginner, but more secure. Like a bar on your car steering wheel and deadbolts on your doors, more security always means more steps to take and more annoyance.
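For instance, enabling connections from other computers typically means editing two files in the data directory and restarting; a rough sketch, with the subnet and paths as placeholders:
## postgresql.conf: listen on all interfaces, not just localhost (requires a restart)
listen_addresses = '*'
## pg_hba.conf: allow password-authenticated connections from your subnet
## (md5 rather than scram-sha-256 on versions before 10)
host    all    all    192.168.1.0/24    scram-sha-256
## then restart the server, e.g.
$ pg_ctl restart -D /path/to/datadir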
Use a virtual machine
Installing the postgres user account is one of the things that makes Postgres a rather heavyweight installation. I suggest that those learning Postgres use a virtual machine. Something like:
Parallels or Fusion or VirtualBox on your own computer
Cloud server such as FreeBSD on DigitalOcean.com.
To remove Postgres, simply discard the VM.
Postgres.app for macOS
Another option for a Mac user is Postgres.app, created by the person who built one of the first Postgres-as-a-Service implementations (on Heroku). I have not used Postgres.app, but I understand it wraps Postgres, so it does not install the postgres user account. Also, Postgres starts and stops when you launch and quit the app, rather than running in the background all the time.
Be aware: you may have conflicts with Postgres.app on a Mac where you already have a conventional installation. I suggest you first carefully remove the conventional Postgres from your Mac before installing Postgres.app. Uninstalling involves finding and deleting various files and folders in various places.
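For what it is worth, with an EnterpriseDB-style install the cleanup might look roughly like this; the /Library/PostgreSQL location, the bundled uninstaller, and the postgres account are assumptions that depend on how Postgres was installed (a Homebrew install, for example, lives elsewhere):
## run the uninstaller app the installer left under /Library/PostgreSQL/<version>/, if there is one, then:
$ sudo rm -rf /Library/PostgreSQL
## remove the postgres user account, if no other installation still needs it
$ sudo dscl . -delete /Users/postgres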
Database-as-a-Service (DBaaS)
Another option to avoid local installation is the increasing choices for running Postgres as a service. This is sometimes referred to as “managed Postgres” because the vendor maintains the installation of Postgres on your behalf. You simply use Postgres to create your database, but you do not fully control Postgres in such a service.
Some examples:
Heroku
Digital Ocean
Azure
Amazon Web Services (AWS)
ElephantSQL
My experience
Personally, I often install Postgres on a Mac using the installer by EnterpriseDB.com. That company sells added-value versions of Postgres, but kindly provides an installer for plain-vanilla Postgres, as a service to the community.
I have also used that same installer from EnterpriseDB.com to install onto a Parallels VM running macOS as the guest OS within the VM on a MacBook Pro running macOS as the host OS. You can easily configure the VM to share the host Mac’s IP address on the network, or you can give the VM its own network address which might be handy for demo/dev/test work.
Thirdly, I have installed Postgres on FreeBSD on DigitalOcean.com.
All three of these options have worked quite well for me. Which is preferable depends on the scenario. For example, the DigitalOcean.com approach is good if I want colleagues to be able to reach the database 24x7 without my own MacBook being available.
This discussion is for development work. For mission-critical deployment, I strongly recommend using heavy-duty server equipment with error-correcting memory and redundant storage such as RAID or ZFS pool. Postgres is extremely reliable but depends, of course, on reliable hardware.
Your tag says Postgres 9.1. That version is quite old now; I suggest using the latest version. By the way, the version numbering system has changed for Postgres: since version 10, the first number is the roughly annual major release (upgrading across major versions typically requires a dump-and-reload or pg_upgrade), and the second number is a minor, compatible bug-fix update.
As pointed out by Basil Bourque, the account is required for several reasons.
That said, if it annoys you to have the postgres user showing up on the login screen (as it did me), you can hide it as long as you have admin rights in macOS.
Apple Support gives the following command to hide a user from the login screen:
$ sudo dscl . create /Users/hiddenuser IsHidden 1
However, since the PostgreSQL user was not listed in the login window at installation, that command yielded no result for me, at least on Catalina, which is my OS.
You should use the following two commands instead, as suggested by josemarluedke:
## add postgres to the list of hidden users on login screen
$ sudo defaults write /Library/Preferences/com.apple.loginwindow HiddenUsersList -array-add 'postgres'
## instruct not to show any hidden accounts at login
$ sudo defaults write /Library/Preferences/com.apple.loginwindow SHOWOTHERUSERS_MANAGED -bool FALSE
Worked for me!
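If you want to double-check the result, you can read the list back:
$ sudo defaults read /Library/Preferences/com.apple.loginwindow HiddenUsersList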

Change path of Firebird Secondary database files

I have created a Firebird multi-file database:
Main database file: D:\Database\MainDB.fdb
Secondary files (240 files) located under D:\Database\DBFiles\Data001.fdb through D:\Database\DBFiles\Data240.fdb
When I copy the database to another location and try to open it, Firebird doesn't locate the secondary files if they are not on the D:\ partition.
I want Firebird to locate the secondary files under the Database\DBFiles folder at the new path.
So if I copy the database to C:\Database\MainDB.fdb, Firebird would open Data001.fdb at the new path C:\Database\DBFiles instead of the old path D:\Database\DBFiles where the files were initially created.
Can that be done with Firebird? If not, then how should it be done?
Update:
Finally I found out that it's not possible to change the paths of Firebird secondary database files using Firebird itself.
But I found a Firebird FAQ entry that mentions the GLINK tool. It doesn't support Firebird 3.x, so I didn't test it, and its use is not recommended even with supported versions of Firebird.
Done what exactly?
UPD. I edited the very vague original question to make clear WHAT the topic starter wants.
You cannot reliably "copy files with Firebird": Firebird is not a file-copying tool. You can, to a degree, use EXTERNAL TABLE for raw file access, but that is very limited and does not apply to the database itself.
It is a dangerous practice to "copy databases" while Firebird is running, because you would only copy part of the data. Recently updated data that is in the memory cache but has not yet made it to disk would be lost, leaving the database file inconsistent, with some data updated and some not. Before you "copy database files" you first have to shut down either those databases or the whole Firebird server.
Firebird has its own tools for moving databases around, and those are the backup/restore tools. Maybe what you need is the nbackup tool, if gbak is too slow for you.
Finally, you can list the files that comprise the database. You can do it via the gstat utility or via the "Services API" it uses; you can also select from the RDB$FILES system table. However, what would you do after that? The very act of attaching to the database makes it badly suited for subsequent copying (see the point above about copying a live database). You would perhaps need to shut down the database, turn it to read-only AND single-user state, and only then attach to it and read RDB$FILES. And after the copying is done, you would have to bring the database back out of shutdown. That is rather more complex than nbackup.
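To make the backup/restore route concrete: a gbak backup followed by a restore lets you lay the secondary files out at the new location. A rough sketch, where the credentials, the backup path, and the page counts (the size allocated to each file, in database pages) are placeholders:
## back up the multi-file database into a single backup file
$ gbak -b -user SYSDBA -password ******** D:\Database\MainDB.fdb D:\Backup\MainDB.fbk
## restore it at the new location, specifying the new secondary-file layout as you go
$ gbak -c -user SYSDBA -password ******** D:\Backup\MainDB.fbk C:\Database\MainDB.fdb 100000 C:\Database\DBFiles\Data001.fdb 100000 C:\Database\DBFiles\Data002.fdb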
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gstat-example-header.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-files.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gbak.html
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/nbackup.html

Database restore from a hacked system

A Linux VM with Postgres 9.4 was hacked into. (Two processes taking 100% CPU, weird files in /tmp; it did not reoccur after kills and a restart.) It was decided to install the system from scratch on a new machine (with Postgres 9.6). The only data needed was in one of the Postgres databases. A pg_dump of the database was made after the attack.
Regardless of whether the data - the tables/rows/etc. - were modified during the attack: is it safe to restore the database in the new system?
I am considering using pg_restore with the -O option (which skips restoring object ownership).
The two dangers are:
important data could have been modified
back doors could have been installed in your database
With the first, you're on your own as to how to verify that your data are OK. The safest thing would be to use a backup from before the machine was compromised, but this would mean data loss.
For the second, I would run a pg_dumpall -s and spend a day reading it carefully. Compare it with a dump from a backup made before the breach. Watch out for weird object and column names and functions with SECURITY DEFINER.
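Concretely, the audit could look something like this; the database and file names are placeholders:
## restore without the old ownership and ACLs
$ pg_restore -O -x -d newdb dump.custom
## schema-only dump of the restored cluster, for review
$ pg_dumpall -s -f schema_after_breach.sql
## compare against a schema dump taken from a pre-breach backup
$ diff schema_before_breach.sql schema_after_breach.sql | less
## list SECURITY DEFINER functions, a favourite hiding place for back doors
$ psql -d newdb -c "SELECT n.nspname, p.proname FROM pg_proc p JOIN pg_namespace n ON n.oid = p.pronamespace WHERE p.prosecdef"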

Postgres 9.2 pg_largeobject tablespace

I am currently moving some data around and I am running into an interesting issue.
I have a CentOS (6.3) server up and running with Postgres 9.2. The machine has limited built-in disk space; however, I do have a large amount of extremely reliable external network disk space available.
I have set the tablespace for my database to a directory on this storage device, and everything seems to be working well, until...
I realized that I have a large amount of BLOB data that needs to be stored in pg_largeobject.
I have been googling how to set the tablespace of pg_largeobject and I did find some results, but they are horribly outdated.
I did find one article that looks promising, but I'm hesitant because the thread also references that things will/should have changed.
I have two questions...
In an ideal world, I would like to move all of postgres (including pg_largeobject) onto this external storage for ease of maintenance. Is this possible?
If not, how can I get pg_largeobject to use my network storage?
As you alluded to, your best bet is to move the entirety of PostgreSQL onto the remote storage, assuming that storage is exposed as a reliable network block device like iSCSI, ATAoE or NBD. I wouldn't recommend running Pg on NFS, and running it on CIFS/SMBFS just won't work.
Just:
Make a backup
Take a note of the output of SHOW data_directory; in psql
Shut PostgreSQL down
Move the data directory (the folder containing pg_xlog, pg_clog, etc) to the remote storage
Adjust the permissions on the parent directories of the datadir's new location so that the postgres user has at least execute permission on each parent directory (via the user, group, or others bits) and can traverse the tree.
Adjust your system startup scripts to set the new location as the PostgreSQL datadir or symlink the old datadir location (output by SHOW data_directory) to the new location.
Start PostgreSQL
Unfortunately, different systems and packages find the datadir in different ways. Debian/Ubuntu use pg_wrapper, for example.
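On an RPM-style install like the one in the question, those steps might look roughly like this; the PGDG-style paths, the service name, and the mount point are assumptions to adapt to your system:
## note the current location
$ psql -c "SHOW data_directory"
## stop the server
$ sudo service postgresql-9.2 stop
## move the datadir onto the network storage mounted at /mnt/pgstore
$ sudo mv /var/lib/pgsql/9.2/data /mnt/pgstore/data
## make sure the postgres user can traverse the parent directories
$ sudo chmod o+x /mnt /mnt/pgstore
## point the old location at the new one (or edit PGDATA in the init script instead)
$ sudo ln -s /mnt/pgstore/data /var/lib/pgsql/9.2/data
## start the server again
$ sudo service postgresql-9.2 start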

where do I host my sqlite3 database?

I'm building an iPhone app which uses the sqlite3 library. Are there any special requirements for hosting the database itself, since it's using sqlite3? Can I host it on one of my GoDaddy accounts, for example? I get like 25 free databases with my current account with them, but I'm just wondering if the SQL databases they can host will work or if they need to be in sqlite3 format specifically...
any ideas?
You should save the database file to your app's Library/Caches folder on the device instead of hosting it online, unless you have a specific reason to keep it online.