How to use a private key in PEM format obtained from a database with phpseclib RSA

I have stored some PEM files in a database, and I now wish to use one of them to load the key in order to SSH into the box in question. However, when my code reaches the $key->load( $pub ); line, it errors. The code was previously working with the keys as strings in files, but I'd prefer to have them in the database as it'll be easier to maintain as more keys are required.
I'm using, I think, phpseclib, as there is no loadKeys function in the RSA file. The code works when the key is pasted into the script. I pasted the key into the database directly using phpMyAdmin. My dev machine is Win 10, but when live it will be on an internal Linux server.
$lightsail = new lightsail();
$pub = $lightsail->getPemByName();
$pub = str_replace("\r", '', $pub ); // Noticed key returned had \r\n so corrected it but still fails
$key = new RSA();
$key->load( $pub );
The error I see is as follows:
( ! ) Fatal error: Uncaught Error: Call to a member function toBytes() on string in something\phpseclib\Crypt\RSA.php on line 724
( ! ) Error: Call to a member function toBytes() on string in something\phpseclib\Crypt\RSA.php on line 724
Call Stack
# Time Memory Function Location
1 0.2199 430880 {main}( ) ...\dequeue.php:0
2 40.8792 1169040 backup->backupDatabase( ) ...\dequeue.php:181
3 76.7275 2748016 phpseclib\Crypt\RSA->load( ???, ??? ) ...\my.class.php:986
I'm thinking that the paste of the PEM into phpMyAdmin is the issue here? I've been unable to find any examples which use $key->load() instead of $key->loadKey() with a PEM file, and even fewer that use a PEM key stored in a database.
My next approach will be to load the file contents if this approach is, after all, a dead end.

In the end I loaded it from the file ($pub = file_get_contents($path);), and this worked.
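For anyone hitting the same wall, here is a minimal sketch of how the database round-trip could be sanity-checked before handing the key to phpseclib (phpseclib 2.x behavior and the getPemByName() helper are assumptions taken from the code above):

use phpseclib\Crypt\RSA;

$pem = $lightsail->getPemByName();
$pem = str_replace("\r\n", "\n", trim($pem)); // normalize Windows line endings and stray whitespace
$key = new RSA();
// In phpseclib 2.x, load() returns false when the key cannot be parsed,
// so check the result instead of letting a later call blow up.
if (!$key->load($pem)) {
    throw new RuntimeException('PEM failed to parse; compare strlen($pem) with the on-disk file to spot truncation');
}

Comparing the string length of the database copy against the original file is a quick way to tell whether phpMyAdmin truncated or re-encoded the key on paste.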


Snowflake - Fail COPY INTO (Can't parse '0' as date with format 'YYYYMMDD')

My pipe executes a COPY INTO command every time a parquet file lands in a staged location in AWS S3, and that part (the execution) is working just fine.
This is my copy query (summarized):
copy into table_name
from (
    select
        TRY_TO_DATE(
            $1:int_field::varchar,
            'YYYYMMDD'
        ) as date_field
    from @"stage-location"/path/path2/ (FILE_FORMAT => 'c000')
) ON_ERROR = 'SKIP_FILE_1%' PATTERN = '.*part.*'
So I convert $1:int_field (type: int) to VARCHAR (::varchar) and then parse that varchar to DATE in 'YYYYMMDD' format. That works fine for int_field values that conform to the format, but when the field is 0 the load fails (only when it is executed by the pipe).
When the pipe executed the COPY command by itself, I checked COPY_HISTORY and got the following error:
Can't parse '0' as date with format 'YYYYMMDD'
And of course the load fails:
[screenshot: failed load]
Here is where things get interesting: when I execute this SAME copy command myself in Worksheets, the load goes smoothly:
[screenshot: successful load]
I tried:
VALIDATE, VALIDATION_MODE, and VALIDATE_PIPE_LOAD, but these do not support COPY INTO statements that transform data during a load, like mine.
FILE_FORMAT = (FORMAT_NAME = c000 DATE_FORMAT = 'YYYYMMDD') with ON_ERROR = 'SKIP_FILE_1%': same issue; the file is only loaded when I execute the COPY command by hand.
I thought the problem was the ON_ERROR option, but I don't think I can remove it; I need it to filter out the real errors. :(
Maybe it is some session problem; I read something about DATE_INPUT_FORMAT, but I can't pinpoint the exact cause.
Can someone help me? Thanks!
In my tests, I see that it fails all the time (even a stand-alone COPY does not work). On the other hand, querying from the staged file works as expected:
select TRY_TO_DATE(
    $1::varchar,
    'YYYYMMDD'
) as date_field
from @my_stage; -- works

copy into testing
from (
    select
        TRY_TO_DATE(
            $1::varchar,
            'YYYYMMDD'
        )
    from @my_stage
) ON_ERROR = 'SKIP_FILE_1%'; -- fails with "Date '0' is not recognized"
It seems there is an issue with TRY_TO_DATE when running as part of a COPY transformation. For what it's worth, I tested TRY_TO_NUMBER, and it works.
You should submit a case to Snowflake support so the development team can investigate the issue.
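In the meantime, a possible workaround (just a sketch, assuming 0 is the only non-conforming sentinel value in int_field) is to map the zero to NULL before the date parse, so TRY_TO_DATE never sees the unparseable '0' at all:

copy into table_name
from (
    select
        TRY_TO_DATE(
            NULLIF($1:int_field::varchar, '0'),  -- the 0 sentinel becomes NULL, which TRY_TO_DATE passes through
            'YYYYMMDD'
        ) as date_field
    from @"stage-location"/path/path2/ (FILE_FORMAT => 'c000')
) ON_ERROR = 'SKIP_FILE_1%' PATTERN = '.*part.*';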

Add a missing key to JSON in a Postgres table via Rails

I'm trying to use update_all to update any records that are missing a key in a JSON document stored in a table cell. ids holds the ids of those records, and I've tried the below...
User.where(id: ids).
  update_all(
    "preferences = jsonb_set(preferences, '{some_key}', 'true'"
  )
The error returned is...
Caused by PG::SyntaxError: ERROR: syntax error at or near "WHERE"
LINE 1: ...onb_set(preferences, '{some_key}', 'true' WHERE "user...
The key takes a string value, so I'm not sure why the query is failing.
UPDATE:
Based on what was mentioned, I added the parentheses and also added / modified the last two arguments...
User.where(id: ids).
  update_all(
    "preferences = jsonb_set(preferences, '{some_key}', 'true'::jsonb, true)"
  )
I'm still running into issues, and this time it seems related to the key I'm passing:
I know this key doesn't currently exist for the set of ids.
I added true for create_missing so that isn't an issue.
I get this error now...
Caused by PG::UndefinedFunction: ERROR: function jsonb_set(hstore, unknown, jsonb, boolean) does not exist
some_key should be a key in preferences
You're passing in raw SQL, so you are 100% responsible for ensuring that it is actually valid SQL. What you have there isn't. Check your parentheses:
User.where(id: ids).
  update_all(
    "preferences = jsonb_set(preferences, '{some_key}', 'true')"
  )
If you look more closely at the error message, it was telling you there was a problem precisely at the introduction of the WHERE clause, right after ...true', so that was a good place to look for problems.
Syntax errors like this can be really annoying, but don't forget your database will usually do its best to pin down the place where the problem occurs.
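One more thing: the second error above, function jsonb_set(hstore, unknown, jsonb, boolean) does not exist, suggests that preferences is actually an hstore column rather than jsonb. If so, a sketch of the hstore equivalent (assuming the hstore extension is enabled) would merge the key with the || operator instead of calling jsonb_set:

User.where(id: ids).
  update_all(
    "preferences = preferences || hstore('some_key', 'true')"
  )

The || operator merges the right-hand hstore into the left one, adding the key if it's missing and overwriting it if present.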

How to use Net::DNS::RR::TSIG with key files generated by ddns-confgen?

I have been using nsupdate for a long time in various scripts dealing with dynamic DNS zone updates without any issue. I have always used TSIG to authenticate the requests against the DNS server, where the keys have been generated by ddns-confgen. That means that I didn't use key pairs like those generated by dnssec-keygen; rather, the keys file format is like the following:
key "ddns-key.my.domain.example" {
algorithm hmac-sha512;
secret "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefijklmnopqrstuvwxyzabcdefghij==";
};
The respective zone configuration then contains:
update-policy {
    grant ddns-key.my.domain.example name host.my.domain.example ANY;
};
Now I have a more complicated task and am trying to solve it with a Perl script. I have studied the documentation of Net::DNS::Update, Net::DNS and Net::DNS::RR::TSIG, and have studied a lot of examples which should make the usage of those modules clear.
However, every example I saw, when it came to signing a request via TSIG, used key files in the format dnssec-keygen produces, not key files in the format I have. And indeed, something like
$o_Resolver = new Net::DNS::Resolver(nameservers => ['127.0.0.1']);
$o_Update = new Net::DNS::Update('my.domain.example', 'IN');
$o_Update -> push(update => rr_del('host A'));
$o_Update -> push(update => rr_add('host 1800 IN A 192.0.2.1'));
$o_Update -> sign_tsig('/etc/bind/ddns-key.my.domain.example.key');
$o_Reply = ($o_Resolver -> send($o_Update));
does not work, producing the following message:
TSIG: unable to sign packet at /path/to/script.pl line 240.
unknown type "ddns-key.my.domain.example" at /usr/local/share/perl/5.20.2/Net/DNS/RR.pm line 669.
file /etc/bind/ddns-key.my.domain.example.key line 1
at /usr/local/share/perl/5.20.2/Net/DNS/RR/TSIG.pm line 403.
TSIG: unable to sign packet at /path/to/script.pl line 240.
I suppose I now have two options: either use keys in the format dnssec-keygen produces, which seem to be directly usable with Net::DNS and its friends, or construct the TSIG key manually as shown in the docs:
my $key_name = 'tsig-key';
my $key = 'awwLOtRfpGE+rRKF2+DEiw==';
my $tsig = new Net::DNS::RR("$key_name TSIG $key");
$tsig->fudge(60);
my $update = new Net::DNS::Update('example.com');
$update->push( update => rr_add('foo.example.com A 10.1.2.3') );
$update->push( additional => $tsig );
[Of course, I wouldn't hard-code the key in my Perl script, but would read it from the key file instead.]
Switching to another key file format would mean changing the DNS server configuration, which is not an elegant solution. "Manually" reading the key files and then "manually" constructing the keys is not very satisfying either, hence the question:
Did I understand correctly that it is not possible to use key files in the ddns-confgen format directly with Net::DNS and its sub-modules to TSIG-sign DNS update requests?
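To make the "manual" option above concrete, here is a hedged sketch of reading the ddns-confgen-style file and building the TSIG record from it (the regexes are illustrative and assume exactly the named.conf-style layout shown above, so treat this as a starting point rather than a robust parser):

use strict;
use warnings;
use Net::DNS;

# Pull name, algorithm and secret out of the ddns-confgen key file.
open my $fh, '<', '/etc/bind/ddns-key.my.domain.example.key' or die $!;
my $conf = do { local $/; <$fh> };
close $fh;

my ($key_name)  = $conf =~ /key\s+"([^"]+)"/;
my ($algorithm) = $conf =~ /algorithm\s+([\w-]+)\s*;/;
my ($secret)    = $conf =~ /secret\s+"([^"]+)"\s*;/;

# Build the TSIG record by hand, carrying the algorithm over from the file
# (hmac-sha512 in the example above) rather than relying on the default.
my $tsig = Net::DNS::RR->new(
    name      => $key_name,
    type      => 'TSIG',
    algorithm => $algorithm,
    key       => $secret,
);

my $o_Update = Net::DNS::Update->new('my.domain.example', 'IN');
$o_Update->push(update => rr_del('host.my.domain.example. A'));
$o_Update->push(update => rr_add('host.my.domain.example. 1800 IN A 192.0.2.1'));
$o_Update->sign_tsig($tsig);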

SQLAlchemy, Psycopg2 and Postgresql COPY

It looks like Psycopg has a custom command for executing a COPY:
psycopg2 COPY using cursor.copy_from() freezes with large inputs
Is there a way to access this functionality from within SQLAlchemy?
The accepted answer is correct, but if you want more than just EoghanM's comment to go on, the following worked for me in COPYing a table out to CSV...
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

eng = create_engine("postgresql://user:pwd@host:5432/db")
ses = sessionmaker(bind=eng)

dbcopy_f = open('/tmp/some_table_copy.csv', 'wb')
copy_sql = 'COPY some_table TO STDOUT WITH CSV HEADER'
fake_conn = eng.raw_connection()
fake_cur = fake_conn.cursor()
fake_cur.copy_expert(copy_sql, dbcopy_f)
The sessionmaker isn't necessary, but if you're in the habit of creating the engine and the session at the same time, to use raw_connection you'll need to separate them (unless there is some way to access the engine through the session object that I don't know of). The SQL string provided to copy_expert is also not the only way to do it; there is a basic copy_to function that you can use with a subset of the parameters you could pass to a normal COPY TO query. Overall performance of the command seems fast to me, copying out a table of ~20000 rows.
http://initd.org/psycopg/docs/cursor.html#cursor.copy_to
http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.Engine.raw_connection
If your engine is configured with a psycopg2 connection string (which is the default, so either "postgresql://..." or "postgresql+psycopg2://..."), you can create a psycopg2 cursor from an SQLAlchemy session using
cursor = session.connection().connection.cursor()
which you can use to execute
cursor.copy_from(...)
The cursor will be active in the same transaction as your session currently is. If a commit or rollback happens, any further use of the cursor will throw a psycopg2.InterfaceError, and you will have to create a new one.
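For example, a quick sketch of that pattern (the users table and its columns are made up for illustration):

import io

# tab-separated rows, matching copy_from's default format
buf = io.StringIO("1\tAlice\n2\tBob\n")

cursor = session.connection().connection.cursor()
cursor.copy_from(buf, 'users', columns=('id', 'name'))
session.commit()  # the COPY ran inside the session's transaction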
You can use:
import cStringIO  # Python 2; on Python 3, use io.StringIO instead

def to_sql(engine, df, table, if_exists='fail', sep='\t', encoding='utf8'):
    # Create the (empty) table from the DataFrame's schema
    df[:0].to_sql(table, engine, if_exists=if_exists)
    # Prepare the data as an in-memory CSV
    output = cStringIO.StringIO()
    df.to_csv(output, sep=sep, header=False, encoding=encoding)
    output.seek(0)
    # Insert the data via COPY
    connection = engine.raw_connection()
    cursor = connection.cursor()
    cursor.copy_from(output, table, sep=sep, null='')
    connection.commit()
    cursor.close()
I insert 200,000 lines in 5 seconds instead of 4 minutes.
It doesn't look like it.
You may have to just use psycopg2 to expose this functionality and forego the ORM capabilities. I guess I don't really see the benefit of an ORM in such an operation anyway, since it's a straight bulk insert, and dealing with individual objects à la ORM wouldn't really make a whole lot of sense.
If you're starting from SQLAlchemy, you need to first get to the connection engine (also known by the property name bind on some SQLAlchemy objects):
engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
# or
engine = session.get_bind()  # sessions expose the engine via get_bind(), not a .engine attribute
# or any other way you know to get to the engine
From the engine you can isolate a psycopg2 connection:
# get a psycopg2 connection
connection = engine.connect().connection
# get a cursor on that connection
cursor = connection.cursor()
Here are some templates for the COPY statement to use with cursor.copy_expert(), a more complete and flexible option than copy_from() or copy_to() as it is indicated here: https://www.psycopg.org/docs/cursor.html#cursor.copy_expert.
# to dump to a file
dump_to = """
COPY mytable
TO STDOUT
WITH (
    FORMAT CSV,
    DELIMITER ',',
    HEADER
);
"""
# to copy from a file:
copy_from = """
COPY mytable
FROM STDIN
WITH (
    FORMAT CSV,
    DELIMITER ',',
    HEADER
);
"""
Check out what the options above mean, and others that may be of interest to your specific situation, at https://www.postgresql.org/docs/current/static/sql-copy.html.
IMPORTANT NOTE: The documentation of cursor.copy_expert() indicates using STDOUT to write out to a file and STDIN to copy from a file. But if you look at the syntax in the PostgreSQL manual, you'll notice that you can also specify the file to write to or read from directly in the COPY statement. You don't want to do that: a file named in the statement is opened by the server process, not by your script, so unless you're running with the right server-side permissions (and who runs Python as root during development?) you're likely just wasting your time. Just do what's indicated in psycopg2's docs and specify STDIN or STDOUT in your statement with cursor.copy_expert(), and it should be fine.
# running the copy statement
with open('/path/to/your/data/file.csv') as f:
    cursor.copy_expert(copy_from, file=f)
# don't forget to commit the changes.
connection.commit()
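And the mirror image for the dump_to template above (the output path is illustrative; binary mode mirrors the earlier answer's example):

# writing the table out to a local file
with open('/path/to/your/dump.csv', 'wb') as f:
    cursor.copy_expert(dump_to, file=f)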
You don't need to drop down to psycopg2, use raw_connection, or a cursor.
Just execute the SQL as usual; you can even use bind parameters with text():
from sqlalchemy import text

engine.execute(
    text("copy some_table from :csv delimiter ',' csv")
        .execution_options(autocommit=True),
    csv='/tmp/a.csv',
)
You can drop the execution_options(autocommit=True) if this PR gets accepted.

After querying the DB I can't print data or text to the browser anymore

I'm in a web scripting class, and honestly and unfortunately it has come second to my networking and design-and-analysis classes. Because of this I encounter problems that may be mundane but whose solutions I can't easily find.
I am writing a CGI form that is supposed to work with a MySQL DB. I can insert into and delete from the DB just fine. My problem comes when querying the DB.
My code compiles fine and I don't get errors when trying to "display" the info from the DB in the browser, but the data and text don't in fact display. The code in question is here:
print br, 'test';
my $dbh = DBI->connect("DBI:mysql:austinc4", "*******", "*******", {RaiseError => 1});
my $usersstatement = "select * from users";
my $projstatement  = "select * from projects";
# Get the handles
my $userinfo = $dbh->query($usersstatement);
my $projinfo = $dbh->query($projstatement);
# Fetch rows
while (@userrow = $userinfo->fetchrow()) {
    print $userrow[0], br;
}
print 'end';
This code is in an if statement that is surrounded by the print header, start_html, form, /form, and end_html calls. I was just trying to debug and find out what was happening, so I printed the strings test and end. It prints out test but doesn't print out end. It also doesn't print out the data in my DB, which comes before I print end.
What I believe I am doing is:
Connecting to my DB
Forming a string that contains the command/request to the DB
Getting a handle for my query I perform on the DB
Fetching a row from my handle
Printing the first field in the row I fetched from my table
But I don't see why my data doesn't print out, nor the end text. I looked, and the DB does in fact contain data in the table that I am trying to get data from.
This one has got me stumped, so I appreciate any help. Thanks again. =)
Solution:
I was using a function that wasn't supported by the modules I was including. This leads me to another question: how can I detect errors like this? My program does in fact compile correctly and the webpage doesn't "break". Aside from double-checking that all the methods I use are valid, do I just see something like text not being displayed and assume that an error like this occurred?
Upon reading the comments: the reason your program is broken is that query() does not execute an SQL query. You are probably calling an undefined subroutine, unless this is a wrapper you have defined elsewhere.
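For reference, the standard DBI pattern is prepare/execute/fetch; a minimal sketch of what the query section could look like (keeping the variable names from the question):

# Prepare and execute the statement, then fetch rows one at a time.
my $userinfo = $dbh->prepare($usersstatement);
$userinfo->execute();
while (my @userrow = $userinfo->fetchrow_array()) {
    print $userrow[0], br;
}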
Here is my original posting of helpful hints, which still apply:
1. I hope you have use CGI, use DBI, etc., as well as use CGI::Carp and use strict;.
2. Look in /var/log/apache2/access.log or error.log for the bugs.
3. Realize that the first thing a CGI script prints MUST be a valid header, or the web server and browser become unhappy and often nothing else displays.
4. Because of #3, print the header first, BEFORE you do anything else, especially before you connect to the database, where the script may die or print something else; otherwise the errors or other messages will be emitted before the header.
5. If you still don't see an error, go back to #2.
6. CGIs that use CGI.pm can be run from a command line in a terminal session without going through the webserver. This is also a good way to debug.