As of PostgreSQL 9.6, access methods can be created for core functionality. I have been making some modifications to PostgreSQL and I would like to recreate an access method, but there is nothing like CREATE OR REPLACE for access methods, so I wanted to run DROP ACCESS METHOD btree; and then create it again.
But I am presented with:
ERROR: cannot drop access method btree because it is required by the database system
Maybe I can drop this restriction, since I am planning to create the access method again right away? How can I achieve my goal?
UPDATE: I suppose something interesting would be to create the same access method under a different name, but then it is not clear to me how I can make sure that the new one is used instead of the built-in one.
There is no need to drop btree, and fortunately the system keeps you from doing so.
If you want to write a substitute for it, that's fine. Add it as a new access method and use it throughout. The presence of btree won't bother you or slow you down.
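For illustration, a minimal sketch of that approach, assuming you have already written and compiled a C handler function; the names my_btree, my_btree_handler, the operator class details and the library path are placeholders. Referencing the new method explicitly in CREATE OPERATOR CLASS and CREATE INDEX ... USING is also what guarantees it is used instead of the built-in btree:

CREATE FUNCTION my_btree_handler(internal)
    RETURNS index_am_handler
    AS '$libdir/my_btree', 'my_btree_handler'   -- path to your compiled handler
    LANGUAGE C;

CREATE ACCESS METHOD my_btree
    TYPE INDEX
    HANDLER my_btree_handler;

-- An operator class ties your access method to a data type; the strategy numbers
-- here follow the usual btree conventions.
CREATE OPERATOR CLASS my_int4_ops
    FOR TYPE int4 USING my_btree AS
        OPERATOR 1 <, OPERATOR 2 <=, OPERATOR 3 =, OPERATOR 4 >=, OPERATOR 5 >,
        FUNCTION 1 btint4cmp(int4, int4);

-- Indexes created with USING my_btree go through your implementation,
-- while existing btree indexes are untouched.
CREATE INDEX ON some_table USING my_btree (some_column);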
As pozs said, if you think you can improve PostgreSQL's B-tree implementation so that it would be a benefit for everybody, get in touch with pgsql-hackers.
I'm working on a project that uses Catalyst and DBIx::Class.
I have a requirement where, under a certain condition, users should not be able to read or set a specific field in a table (e.g. the last_name field in a list of users that will be presented and may be edited by the user).
Instead of applying the conditional logic to every part of the project where that table field is read or set, and risking existing or future cases where the logic is missed, is it possible to implement the logic directly in the DBIx::Class-based module, so that it never returns or changes the value of that field when the condition is met?
I've been trying to find the answer, and I'm still reading, but I'm somewhat new to DBIx::Class and its documentation. Any help would be highly appreciated. Thank you!
I'd use an around Moose method modifier on the column accessor generated by DBIC.
This won't be a real security solution, as you can still access the data without the Result class, for example when using HashRefInflator.
The same goes for calling get_column.
Real security would be at the database level with column level security and not allowing the database user used by the application to fetch that field.
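As a rough illustration of that database-level approach, here is a minimal sketch using PostgreSQL-style column-level privileges; the table, column and role names are made up and the exact statements depend on your RDBMS:

-- Remove broad table-level rights from the application's database user ...
REVOKE SELECT, UPDATE ON users FROM app_role;

-- ... and grant them back on every column except last_name.
GRANT SELECT (id, first_name, email) ON users TO app_role;
GRANT UPDATE (first_name, email) ON users TO app_role;

-- Any query that touches last_name as app_role now fails with a permission error.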
Another solution I can think of is an additional Result class for that table that doesn't include the column, perhaps even using it as the default and only using the one that includes the column when the user has a special role.
I'm currently trying to list all Triggers available in a PostgreSQL database, regardless of the tables, using the DBeaver GUI.
What I would like to get looks like this:
https://ibb.co/dJZXCo
A result displaying only the Triggers category would be enough; I just want to quickly identify where the triggers are in my database.
To do so, I've tried to use global filters on the database itself, but they only filter table names.
I don't know whether the search depth can be configured to include the inner elements of tables, whether there is a dedicated syntax for targeting the Triggers category within tables, or whether the solution has nothing to do with filtering at all.
There might be a very simple solution, but I can't seem to find it.
Thank you!
I'd like to be able to purge the database of all data between integration test executions. My first thought was to use an org.springframework.test.context.support.AbstractTestExecutionListener registered using the @TestExecutionListeners annotation to perform the necessary cleanup between tests.
In the afterTestMethod(TestContext testContext) method I tried getting the database from the test context and using the com.mongodb.DB.dropDatabase() method. This worked OK, apart from the fact that it also destroys the indexes that were automatically created by Spring Data when it first bound my managed @Document objects.
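For context, a hedged sketch of what such a listener might look like; this assumes an older Spring Data MongoDB setup where MongoTemplate.getDb() returns com.mongodb.DB, and the class name and bean lookup are my own placeholders:

import com.mongodb.DB;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.test.context.TestContext;
import org.springframework.test.context.support.AbstractTestExecutionListener;

public class CleanMongoDbTestExecutionListener extends AbstractTestExecutionListener {

    @Override
    public void afterTestMethod(TestContext testContext) throws Exception {
        // Fetch the template from the test's application context and drop the database.
        MongoTemplate mongoTemplate =
                testContext.getApplicationContext().getBean(MongoTemplate.class);
        DB database = mongoTemplate.getDb();
        // This wipes everything, including the indexes Spring Data created at startup,
        // which is exactly the problem described below.
        database.dropDatabase();
    }
}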
For now I have fixed this by resorting to iterating through the collection names and calling remove as follows:
for (String collectionName : database.getCollectionNames()) {
    if (collectionIsNotASystemCollection(collectionName)) {
        database.getCollection(collectionName).remove(new BasicDBObject());
    }
}
This works and achieves the desired result - but it'd be nice if there was a way I could simply drop the database and just ask Spring Data to "rebind" and perform the same initialisation that it did when it started up to create all of the necessary indexes. That feels a bit cleaner and safer...
I tried playing around with the org.springframework.data.mongodb.core.mapping.MongoMappingContext but haven't yet managed to work out if there is a way to do what I want.
Can anyone offer any guidance?
See this ticket for an explanation of why it currently works the way it does and why working around this issue creates more problems than it solves.
Suppose you were working with Hibernate and then triggered a call to delete the database: would you even dream of assuming that the tables and all indexes reappear magically? If you drop a MongoDB database/collection, you remove all metadata associated with it. Thus, you need to set it up the way you'd like it to work.
P.S.: I am not sure we did ourselves a favor to add automatic indexing support as this of course triggers the expectations that you now have :). Feel free to comment on the ticket if you have suggestions how this could be achieved without the downsides I outlined in my initial comment.
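To follow that advice and set things up yourself after dropping the database, one hedged option is to re-create the indexes your tests rely on explicitly through MongoTemplate; the entity class MyDocument and the field name here are placeholders for your own @Document types:

import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.index.Index;

public class TestIndexSetup {

    // Called from test setup after the database has been dropped.
    public static void recreateIndexes(MongoTemplate mongoTemplate) {
        mongoTemplate.indexOps(MyDocument.class)
                     .ensureIndex(new Index().on("lastName", Sort.Direction.ASC).unique());
        // Repeat for every index your tests depend on; there is no built-in way to
        // ask Spring Data to "rebind" and redo its startup index creation.
    }
}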
I have ormlite integrated into an application I'm working on. Right now I'm trying to build in functionality to easily switch from automatically inserting data to the database to outputting the equivalent collection of insert statements to a file for later use. The data isn't user input but still requires proper escaping to handle basic gotchas like apostrophes.
Ideas I've burned through:
Dao.create() writes to the database directly, so that's a no-go.
QueryBuilder can't handle inserts.
JdbcDatabaseConnection.compileStatement() might work but the amount of setup required is inappropriate.
A java.sql.PreparedStatement has a reasonable enough interface (if toString() returns the SQL like I would hope), but it's not compatible with ORMLite's connection types.
This seems like it should be very easy, but if it is, I can't find the right combination of method calls to make it happen.
Right now I'm trying to build in functionality to easily switch from automatically inserting data to the database to outputting the equivalent collection of insert statements to a file for later use.
Interesting. So one hack would be to use the MappedCreate class. The MappedCreate.build(...) method takes a DatabaseType and a TableInfo, which is available from dao.getTableInfo().
mappedCreate.toString() exposes the generated INSERT statement (with a prefix), which might help, but you would still need to convert the ? arguments into the actual values with escaped quotes. That you would have to do in your own code.
Hope this helps somewhat.
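A rough sketch of the approach described in that answer, using only the method names it mentions; MappedCreate lives in an ORMLite internal package and its exact signatures vary between versions, and Account, accountDao and connectionSource are made-up examples, so treat this as a starting point rather than working code:

import com.j256.ormlite.db.DatabaseType;
import com.j256.ormlite.stmt.mapped.MappedCreate;
import com.j256.ormlite.table.TableInfo;

// accountDao and connectionSource come from your existing ORMLite setup.
DatabaseType databaseType = connectionSource.getDatabaseType();
TableInfo<Account, Integer> tableInfo = accountDao.getTableInfo();
MappedCreate<Account, Integer> mappedCreate = MappedCreate.build(databaseType, tableInfo);

// toString() includes the generated INSERT (plus a short prefix); the ? placeholders
// still have to be replaced with properly escaped literal values by your own code
// before writing the statement to a file.
String insertSql = mappedCreate.toString();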
In PostgreSQL, how can I prevent anyone (including superusers) from dropping some specific table?
EDIT: Whoa, did we have some misunderstanding here. Let's say there is a big, shared QA database. Sometimes people run destructive things like hibernate-generated schema on it by mistake, and I'm looking for ways to prevent such mistakes.
anyone (including superusers) from dropping some specific table?
Trust your peers.
You can do that by writing some C code that attaches to ProcessUtility_hook. If you have never done that sort of thing, it won't be exactly trivial, but it's possible.
Another option might be looking into sepgsql, but I don't have any experience with that.
A superuser is precisely that. If you don't want them to be able to drop things, don't make them a superuser.
There's no need to let users run as superusers pretty much ever. Certainly not automated tools like schema migrations.
Your applications should connect as users with the minimum required user rights. They should not own the tables that they operate on, so they can't make schema changes to them or drop them.
When you want to make schema changes, run the application with a user that does have ownership of the tables of interest, but is not a superuser. The table owner can drop and modify tables, but only the tables it owns.
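A minimal sketch of that separation of roles, assuming PostgreSQL and made-up role and table names:

-- The owner role holds the schema objects but is not a superuser and cannot log in.
CREATE ROLE app_owner NOLOGIN;

-- The application connects as this role and only gets DML rights.
CREATE ROLE app_user LOGIN PASSWORD 'change_me';

ALTER TABLE important_table OWNER TO app_owner;
GRANT SELECT, INSERT, UPDATE, DELETE ON important_table TO app_user;

-- app_user can read and write rows, but any DROP TABLE or ALTER TABLE fails
-- with "must be owner of table important_table"; run schema migrations as app_owner.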
If you really, truly need to do something beyond the standard permissions model you will need to write a ProcessUtility_hook. See this related answer for a few details on that. Even then a superuser might be able to get around it by loading an extension that skips your hook, you'll just slow them down a bit.
Don't run an application as a superuser in production. Ever.
See the PostgreSQL documentation on permissions for more guidance on using the permissions model.
I don't think you can do that. You could perhaps have "super superusers" who are the only ones allowed to manage dropping anything, or keep constant backups, so that a higher member of the hierarchy always has the possibility of restoring the table.
I don't know what the real original intention of this question is, but a lot of people seem to have given hypothetical answers like trusting your peers or applying a least-privilege permissions model. Personally, I think this misses the point altogether and answers with something everyone probably knew already, which isn't particularly helpful.
So let me attempt a question + answer of my own: How do you put in safety locks to prevent yourself or others from accidentally doing something you shouldn't? If you think that this "should never happen" then I think your imagination is too narrow. Or perhaps you are more perfect than me (and possibly a lot of other people).
For the rest of us, here is a solution that works for me. It is just a little lock that you put wherever you want it, using event triggers. Obviously, this would be implemented by a superuser of some sort. But the point is that it has no bearing on permissions, because it is error-based, not permission-based.
Obviously, you shouldn't implement this sort of thing if production behavior depends on it. It only makes sense in situations where this kind of safety net is appropriate. Don't use it to replace what should be solved with permissions, and don't use it to undermine your team. Use common sense; individual results may vary.
CREATE SCHEMA testst;

CREATE OR REPLACE FUNCTION manual_override_required() RETURNS event_trigger AS
$$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        RAISE INFO 'object_oid: %, object_type: %', obj.objid, obj.object_type;
        RAISE INFO '%', obj.object_name;
        IF obj.object_type = 'schema' AND obj.object_name = 'testst' THEN
            RAISE EXCEPTION 'You have attempted to DROP something which has been admin-locked and requires manual override to proceed.';
        END IF;
    END LOOP;
END
$$
LANGUAGE plpgsql;

DROP EVENT TRIGGER IF EXISTS lock_schema;
CREATE EVENT TRIGGER lock_schema
    ON sql_drop
    EXECUTE FUNCTION manual_override_required();   -- PostgreSQL 11+; use EXECUTE PROCEDURE on older versions

DROP SCHEMA testst;
-- produces: "ERROR: You have attempted to DROP something which has been admin-locked and requires manual override to proceed."

-- To override the admin-lock (you have the permission to do this, it just requires two turns of a key and positive confirmation):
ALTER EVENT TRIGGER lock_schema DISABLE;
DROP SCHEMA testst;
-- now it works!
An example of how I use this is in automation workflows. I have to switch between dev and prod environments a dozen times a day, and I can easily lose (i.e. have lost) track of which is which despite the giant flags and banners I've put up to remind myself. Dropping certain schemas should be a rare and special event in prod (hence the benefit of an active-confirmation approach), whereas in dev I rebuild them all the time. If I maintain the same permission structures in dev as in prod (which I do), then permissions alone wouldn't let me solve this.