I am new to OrientDB and working on database encryption.
Can anyone please guide me on the following:
How do I encrypt a database in OrientDB? And more importantly, can we execute queries against the encrypted database?
I tried to enable AES encryption but didn't see any encryption outcome. In the end, it still allows a database connection where the contents are unencrypted, even with an incorrect encryption key.
Following the documentation, I performed these steps to enable database encryption:
------- create database with key1 ------
config set storage.encryptionKey Ohjojiegahv3tachah9eib==
create database remote:localhost/databases/encrypted-db root 12345 plocal document -encryption=aes
CREATE CLASS Customer
CREATE PROPERTY Customer.id integer
CREATE PROPERTY Customer.name String
CREATE PROPERTY Customer.age integer
INSERT INTO Customer (id, name, age) VALUES (01,'satish', 25)
INSERT INTO Customer SET id = 02, name = 'krishna', age = 26
INSERT INTO Customer CONTENT {"id": "03", "name": "kiran", "age": "29"}
INSERT INTO Customer (id, name, age) VALUES (04,'javeed', 21), (05,'raja', 29)
SELECT FROM Customer
disconnect
------- open encrypted database with key2 (different from key1) ------
config set storage.encryptionKey Ohj11iegahv3tac1111111==
CONNECT remote:localhost/databases/encrypted-db root 12345
SELECT FROM Customer
OrientDB will show the original data of the Customer class.
Encryption at rest is not supported over the remote protocol yet; it can only be used with plocal. So you're actually working with a non-encrypted database. The documentation wasn't very clear about that, sorry. I'm fixing the docs right now.
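For reference, a plocal console session along these lines should actually exercise encryption at rest (the filesystem path below is a placeholder; the key is the one from the question):

```
orientdb> config set storage.encryptionKey Ohjojiegahv3tachah9eib==
orientdb> create database plocal:/path/to/databases/encrypted-db admin admin plocal document -encryption=aes
```

Reopening that database after setting a different `storage.encryptionKey` should then fail with an invalid-key error instead of silently showing the data.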
When querying columns that are a Postgres domain type using JDBC (with driver https://jdbc.postgresql.org/ version 42.3.3), there doesn't seem to be any way of getting the domain information; only the underlying datatype is reported.
create domain account_uuid_type as uuid;
create domain customer_uuid_type as uuid;
create table test (
account_uuid account_uuid_type,
customer_uuid customer_uuid_type
);
insert into test
(account_uuid, customer_uuid)
values
(
'4c1210c2-e785-4462-926c-1789cd1aa88c',
'b155e10c-10cd-4a11-b427-d7c78397b617'
);
When querying the table from Java using select account_uuid, customer_uuid from test, the PgResultSet metadata reports the pgType value for both columns as uuid; there's no mention of the more specific domain information. Is there a way of asking Postgres to add the domain information to the metadata?
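As far as I know the driver only surfaces the base type in the result-set metadata, but the domain name is recorded in the catalogs, so a separate lookup can recover it. For the test table above:

```sql
SELECT column_name, domain_name, udt_name
FROM information_schema.columns
WHERE table_name = 'test';

-- column_name   | domain_name        | udt_name
-- account_uuid  | account_uuid_type  | uuid
-- customer_uuid | customer_uuid_type | uuid
```

From JDBC this is an extra round trip rather than something available on ResultSetMetaData, but the result can be cached per table.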
To comply with privacy guidelines, I want to obfuscate the contents of certain columns at the binary level from other roles, including the administrative role/developers.
A table could look like this:
create table customer (
id serial primary key,
role text not null default session_user,
name text not null,
address text not null,
phone bigint default null
);
create policy customer_policy on customer
for all to public using (role = session_user);
In this example, column contents such as name, address, and phone should not be visible to other roles.
The policy only guarantees that other database users' roles cannot see the records; an administrator with higher privileges, for example, can still see the data.
My idea is that a password is stored in another table that is created and changed by the respective role. The relevant columns are then encrypted and decrypted using this password.
How could this be implemented or does PostgreSQL already offer solutions for this?
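PostgreSQL's pgcrypto extension provides symmetric encryption functions that fit this idea. A minimal sketch, assuming the sensitive columns are changed to bytea and the passphrase is supplied by the application at query time rather than stored in a table (the table and passphrase here are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- variant of the customer table with the sensitive columns as bytea
CREATE TABLE customer_enc (
    id serial primary key,
    role text not null default session_user,
    name bytea not null,
    address bytea not null,
    phone bytea default null
);

-- encrypt on write
INSERT INTO customer_enc (name, address, phone)
VALUES (pgp_sym_encrypt('Alice', 'per-role-passphrase'),
        pgp_sym_encrypt('1 Main St', 'per-role-passphrase'),
        pgp_sym_encrypt('5551234567', 'per-role-passphrase'));

-- decrypt on read; a wrong passphrase raises an error instead of returning data
SELECT pgp_sym_decrypt(name, 'per-role-passphrase') AS name
FROM customer_enc;
```

One caveat: storing the passphrase in another database table would defeat the purpose, since an administrator could read that table too, and a superuser can also capture passphrases from statement logs. Keeping the key outside the database is the usual recommendation.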
I am having trouble thinking of a way to copy three fields out of a database and append them to another table along with the current date. Basically what I want to do is:
DB-A: ID (N9), Name (C69), Phone (N15) {and a list of other fields I don't care about}
DB-B: Date (today's date/time), Name, Address, Phone (as above)
Would be great if this was a trigger in the DB on add or update of DB-A.
Greg
Quick and dirty using postgres_fdw
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER extern_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'foreignserver.co.uk', port '5432', dbname 'mydb');
CREATE USER MAPPING FOR myuser SERVER extern_server OPTIONS (user 'anotheruser');
-- Creating a foreign table based on table t1 at the server described above
CREATE FOREIGN TABLE foreign_t1 (
dba INT,
name VARCHAR(69),
phone VARCHAR(15)
)
SERVER extern_server OPTIONS (schema_name 'public', table_name 't1');
--Inserting data to a new table + date
INSERT INTO t2 SELECT dba,name,phone,CURRENT_DATE FROM foreign_t1;
-- Or just retrieving what you need placing the current date as a column
SELECT dba,name,phone,CURRENT_DATE FROM foreign_t1;
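To cover the "on add or update" part of the question, a row trigger on the source table can do the append automatically; t2 here could just as well be a foreign table, since postgres_fdw allows inserts into foreign tables. Table, column, and function names below follow the example above and are assumptions:

```sql
CREATE OR REPLACE FUNCTION copy_to_t2() RETURNS trigger AS $$
BEGIN
    -- append the three fields plus today's date to the history table
    INSERT INTO t2 (dba, name, phone, copied_on)
    VALUES (NEW.dba, NEW.name, NEW.phone, CURRENT_DATE);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t1_copy
AFTER INSERT OR UPDATE ON t1
FOR EACH ROW EXECUTE PROCEDURE copy_to_t2();
```

On PostgreSQL 11 and later, EXECUTE FUNCTION is the preferred spelling of EXECUTE PROCEDURE; both work.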
I have two databases that are alike, one called datastore and the other called datarestore.
datarestore is a copy of datastore which was created from a backup image. The problem is that I accidentally deleted a little too much data from datastore.
Both databases are located on different AWS instances and I typically connect to them using pgAdmin III or Python to create scripts that handle the data.
I want to get the rows that I accidentally deleted from datastore, which still exist in datarestore, back into datastore. Does anyone have any idea how this can be achieved? Both databases contain close to 1,000,000,000 rows and are on version 9.6.
I have seen some backup/import/restore options within pgAdmin III, I just don't know how they work and if they support my needs? I also thought about creating a python script, but querying my database has become pretty slow, so this seems not to be an option either.
-----------------------------------------------------
| id (serial - auto incrementing int) | - primary key
| did (varchar) |
| sensorid (int) |
| timestamp (bigint) |
| data (json) |
| db_timestamp (bigint) |
-----------------------------------------------------
If you preserved primary keys between those databases, then you could create foreign tables pointing from datarestore to datastore and check which keys are missing (for example, using select pk from old_table except select pk from new_table), then fetch those missing rows using the same foreign table. This limits the first check for missing PKs to index-only scans (plus network transfer), followed by an index scan to fetch the missing data. If you are missing only a small part of the data, it shouldn't take long.
If you require more detailed example then I'll update my answer.
EDIT:
Example of foreign table/server usage
These commands need to be executed on datarestore (or on datastore, if you choose to push data instead of pulling it).
If you don't have foreign data wrapper "installed" yet:
CREATE EXTENSION postgres_fdw;
This will create a virtual server on your datarestore host. It is just metadata pointing at the foreign server:
CREATE SERVER foreign_datastore FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'foreign_hostname', dbname 'foreign_database_name',
port '5432_or_whatever_you_have_on_datastore_host');
This will tell your datarestore host which user it should connect as when using the fdw on server foreign_datastore. It will be used only for your_local_role_name logged in on datarestore:
CREATE USER MAPPING FOR your_local_role_name SERVER foreign_datastore
OPTIONS (user 'foreign_username', password 'foreign_password');
You need to create a schema on datarestore; it is where the new foreign tables will be created.
CREATE SCHEMA schema_where_foreign_tables_will_be_created;
This will log in to the remote host and create foreign tables on datarestore pointing to the tables at datastore. Only the table structure is imported this way; no data is copied.
IMPORT FOREIGN SCHEMA foreign_datastore_schema_name_goes_here
FROM SERVER foreign_datastore INTO schema_where_foreign_tables_will_be_created;
This will return the list of ids that are missing from your datarestore database for this table:
SELECT id FROM foreign_datastore_schema_name_goes_here.table_a
EXCEPT
SELECT id FROM datarestore_schema.table_a
You can either store them in a temporary table (CREATE TABLE table_a_missing_pk AS [query from above here]) or use them right away:
INSERT INTO datarestore_schema.table_a (id, did, sensorid, timestamp, data, db_timestamp)
SELECT id, did, sensorid, timestamp, data, db_timestamp
FROM foreign_datastore_schema_name_goes_here.table_a
WHERE id = ANY((
SELECT array_agg(id)
FROM (
SELECT id FROM foreign_datastore_schema_name_goes_here.table_a
EXCEPT
SELECT id FROM datarestore_schema.table_a
) sub
)::int[])
From my tests, this should push down (meaning: send to the remote host) something like this:
Remote SQL: SELECT id, did, sensorid, timestamp, data, db_timestamp
FROM foreign_datastore_schema_name_goes_here.table_a WHERE ((id = ANY ($1::integer[])))
You can make sure it does by running EXPLAIN VERBOSE on your full query to see what plan it will execute; you should see Remote SQL in there.
In case it does not work as expected, you can instead create the temporary table mentioned earlier and make sure that it lives on the datastore host.
An alternative approach would be to create a foreign server on datastore pointing to datarestore and push data from your old database to the new one (you can insert into foreign tables). This way you won't have to worry about the list of ids not being pushed down to datastore, which would otherwise mean fetching all the data and filtering it afterwards (which would be extremely slow).
I am trying to run the following script from SQL Server Management Studio:
INSERT [Truck].[Driver] ([DriverId], [CorporationId], [DriverNumber], [Name], [PhoneNumber])
VALUES (N'b78f90a6-ed6d-4f0e-9f35-1f3e9c516ca9', N'0a48eeeb-37f6-44de-aff5-fe9107d821f5', N'12', N'Unknown', NULL)
And I'm getting this error:
Msg 229, Level 14, State 5, Line 1
The INSERT permission was denied on the object 'Driver', database 'SuburbanPortal2', schema 'Truck'.
I can manually add this in edit mode and I get no errors. I have set every permission I can think of for my users. This is a local database, logged in as a local user, that I'm testing some data on, so I couldn't care less about security.
But, here are the settings for the database for my user:
Any suggestions?
-- Use master
USE master;
GO
-- Make database
CREATE DATABASE SuburbanPortal2;
GO
-- Use the database
USE SuburbanPortal2;
GO
-- Make schema
CREATE SCHEMA Truck AUTHORIZATION dbo;
GO
-- Make table
CREATE TABLE Truck.Driver
(
[DriverId] uniqueidentifier,
[CorporationId] uniqueidentifier,
[DriverNumber] varchar(64),
[Name] varchar(128),
[PhoneNumber] varchar(12)
);
-- Add data
INSERT [Truck].[Driver] ([DriverId], [CorporationId], [DriverNumber], [Name], [PhoneNumber])
VALUES (N'b78f90a6-ed6d-4f0e-9f35-1f3e9c516ca9', N'0a48eeeb-37f6-44de-aff5-fe9107d821f5', N'12', N'Unknown', NULL);
GO
This code sets up a sample database like yours, and I have no issues with the insert.
Who is the owner of the schema?
If you want to hide tables from one database group or another, add your user to the database group and make the database group the owner of the schema. I think you might be having a schema-ownership issue.
Can you drill into Database -> Security -> Schemas -> Truck, right-click, and show me the owner of the schema? Please post an image.
Also, remove all database permissions from the user except for db_owner.
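If it turns out the login simply lacks the permission rather than there being an ownership mismatch, an explicit grant should clear the error; the user name below is a placeholder:

```sql
-- grant INSERT across the whole schema
GRANT INSERT ON SCHEMA::Truck TO [YourUser];
-- or just on the one table
GRANT INSERT ON OBJECT::Truck.Driver TO [YourUser];
```

Members of db_owner already have INSERT everywhere in the database, so if the grant is what fixes it, the user was not actually in db_owner.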