unable to connect to multiple databases using Dancer::Plugin::Database - perl

I am using Dancer::Plugin::Database to connect to a database from my Dancer application. It works fine for a single connection, but when I tried to configure multiple connections I got an error. How can I add multiple connections?
I added the following code in my config.yml file:
plugins:
    Database:
        connections:
            one:
                driver: 'mysql'
                database: 'employeedetails'
                host: 'localhost'
                port: 3306
                username: 'remya'
                password: 'remy#'
                connection_check_threshold: 10
                dbi_params:
                    RaiseError: 1
                    AutoCommit: 1
                on_connect_do: ["SET NAMES 'utf8'", "SET CHARACTER SET 'utf8'" ]
                log_queries: 1
                two:
                    driver: 'mysql'
                    database: 'employeetree'
                    host: 'localhost'
                    port: 3306
                    username: 'remya'
                    password: 'remy#'
                    connection_check_threshold: 10
                    dbi_params:
                        RaiseError: 1
                        AutoCommit: 1
                    on_connect_do: ["SET NAMES 'utf8'", "SET CHARACTER SET 'utf8'" ]
                    log_queries: 1
Then I tried to connect to the database using the following code:
my $dbh = database('one');
my $sth = $dbh->prepare("select * from table_name where id = ?");
$sth->execute(1);
I got a compilation error: "Unable to parse Configuration file".
Please suggest a solution.
Thanks in advance

YAML requires consistent indentation for the keys of a hash. Remove four spaces from before "two:" and it should parse.
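For reference, here is a corrected sketch of the connections block (values copied from the question), with "one:" and "two:" at the same indentation level:
plugins:
    Database:
        connections:
            one:
                driver: 'mysql'
                database: 'employeedetails'
                host: 'localhost'
                port: 3306
                username: 'remya'
                password: 'remy#'
                connection_check_threshold: 10
                dbi_params:
                    RaiseError: 1
                    AutoCommit: 1
                on_connect_do: ["SET NAMES 'utf8'", "SET CHARACTER SET 'utf8'" ]
                log_queries: 1
            two:
                driver: 'mysql'
                database: 'employeetree'
                host: 'localhost'
                port: 3306
                username: 'remya'
                password: 'remy#'
                connection_check_threshold: 10
                dbi_params:
                    RaiseError: 1
                    AutoCommit: 1
                on_connect_do: ["SET NAMES 'utf8'", "SET CHARACTER SET 'utf8'" ]
                log_queries: 1
With that in place, database('one') and database('two') should each return the corresponding handle.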
Update: I see there's been some editing of the indentation; going back to the original question produces a parsing error in a different place and shows a mixture of tabs and spaces in use. Try to use only tabs or only spaces, consistently. You can test your file and find which line produces the first error like so:
$ perl -we'use YAML::Syck; LoadFile "config.yml"'
Syck parser (line 19, column 16): syntax error at -e line 1, <> chunk 1.
Also make sure that your keys all end up in the right hash (the mixture of tabs and spaces seems to allow this to come out wrong while still parsing successfully) with:
perl -we'use YAML::Syck; use Data::Dumper; $Data::Dumper::Sortkeys=$Data::Dumper::Useqq=1; print Dumper LoadFile "config.yml"'

Related

Problems encountered in recovering three tables from a dump file

I'm trying to restore tables from a dump file. A footnote in the paper "VCCFinder: Finding Potential Vulnerabilities in Open-Source Projects to Assist Code Audits" indicates that the dump file the team created with pg_dump can be read with pg_restore (the footnote is emphasized in the paper with a red underline). That's where I started.
1. Use the pg_restore command
I typed the command mentioned in the paper "VCCFinder: Finding Potential Vulnerabilities in Open-Source Projects to Assist Code Audits":
pg_restore -f vcc_base I:\OneDrive\PractiseProject\x_prjs\m_firmware_scan\m_firmware_scan.ref\vcc-database\vccfinder-database.dump
Windows CMD returned an error message:
pg_restore: error: input file appears to be a text format dump. Please use psql.
I tried the operation with different versions, including v14.4, v9.6, v9.4 and v9.3; the outcome was the same error message.
2. Use the psql command
Then I turned in another direction and used psql. After typing the command
psql -v ON_ERROR_STOP=1 -U postgres < I:\OneDrive\PractiseProject\x_prjs\m_firmware_scan\m_firmware_scan.ref\vcc-database\vccfinder-database.dump
in every environment except PostgreSQL 14.4, the returned error message was:
psql: SCRAM authentication requires libpq version 10 or above
Under PostgreSQL 14.4, the returned message became:
SET
SET
SET
SET
SET
SET
ERROR: schema "export" already exists
If I removed the -v ON_ERROR_STOP=1 option, the returned messages looked like this:
SET
SET
SET
SET
SET
SET
ERROR: schema "export" already exists
SET
SET
SET
ERROR: type "public.hstore" does not exist
LINE 27: patch_keywords public.hstore
^
ERROR: relation "cves" already exists
ERROR: relation "repositories" already exists
ERROR: relation "commits" does not exist
invalid command \n
invalid command \N
invalid command \N
...
(Solved) I tried to solve the unreadable-output problem shown in the above error messages by typing chcp 65001, chcp 437, etc. to change the character set to UTF-8 or American English in Windows CMD, but it didn't help. After viewing the source of the dump file in Visual Studio, however, it's not difficult to infer that those messages were caused by psql commands embedded in the dump file.
After the error messages became understandable, I focused on one particular error message:
ERROR: type "public.hstore" does not exist
LINE 27: patch_keywords public.hstore
So I manually created an "hstore" type under the "public" schema; after that, the error messages turned into these:
SET
SET
SET
SET
SET
SET
SET
ERROR: schema "export" already exists
SET
SET
SET
ERROR: relation "commits" already exists
ERROR: relation "cves" already exists
ERROR: relation "repositories" already exists
ERROR: malformed record literal: ""do"=>"1", "if"=>"0", "asm"=>"41", "for"=>"5", "int"=>"13", "new"=>"0", "try"=>"0", "auto"=>"0", "bool"=>"0", "case"=>"0", "char"=>"1", "else"=>"0", "enum"=>"0", "free"=>"0", "goto"=>"0", "long"=>"15", "this"=>"0", "true"=>"0", "void"=>"49", "alloc"=>"0", "break"=>"0", "catch"=>"0", "class"=>"0", "const"=>"0", "false"=>"0", "float"=>"0", "short"=>"0", "throw"=>"0", "union"=>"0", "using"=>"0", "while"=>"1", "alloca"=>"0", "calloc"=>"0", "delete"=>"0", "double"=>"0", "extern"=>"4", "friend"=>"0", "inline"=>"18", "malloc"=>"0", "public"=>"0", "return"=>"4", "signed"=>"1", "sizeof"=>"0", "static"=>"32", "struct"=>"4", "switch"=>"0", "typeid"=>"0", "default"=>"0", "mutable"=>"0", "private"=>"0", "realloc"=>"0", "typedef"=>"0", "virtual"=>"0", "wchar_t"=>"0", "continue"=>"0", "explicit"=>"0", "operator"=>"0", "register"=>"0", "template"=>"0", "typename"=>"0", "unsigned"=>"23", "volatile"=>"23", "namespace"=>"0", "protected"=>"0", "const_cast"=>"0", "static_cast"=>"0", "dynamic_cast"=>"0", "reinterpret_cast"=>"0""
DETAIL: Missing left parenthesis.
CONTEXT: COPY commits, line 1, column patch_keywords: ""do"=>"1", "if"=>"0", "asm"=>"41", "for"=>"5", "int"=>"13", "new"=>"0", "try"=>"0", "auto"=>"0", "bo..."
ERROR: syntax error at or near "l022_save"
LINE 1: l022_save, pl022_load, s);
^
invalid command \n
invalid command \N
invalid command \N
...
Now the three tables have been created, but there is no content in them.
3. Install hstore
After searching for "hstore type does not exist with hstore installed postgresql", I realized that "hstore" should be installed, not manually created. So I typed this at the psql command line:
postgres=# CREATE EXTENSION hstore;
And there were new error messages:
SET
SET
SET
SET
SET
SET
SET
ERROR: schema "export" already exists
SET
SET
SET
CREATE TABLE
ERROR: relation "cves" already exists
ERROR: relation "repositories" already exists
ERROR: missing data for column "hunk_count"
CONTEXT: COPY commits, line 23201: "11388700 178 \N other_commit 1d6198c3b01619151f3227c6461b3d53eeb711e5\N blueswir1#c046a42c-6fe2-441..."
ERROR: syntax error at or near "l022_save"
LINE 1: l022_save, pl022_load, s);
^
invalid command \n
invalid command \N
invalid command \N
...
And still there was no content in those three tables.
4. Generate and view tables
After looking into the source of the dump file and trying, unsuccessfully, to fix the "hunk_count" problem, it occurred to me that the above error messages were each caused by one particular row. So I deleted that row; the old error messages were gone, but new ones appeared, caused by another row. Eventually I deleted 10 rows in total; compared to the total row count of 351,409, the deleted parts are negligible. And the three tables weren't empty anymore, as shown in pgAdmin 4.
However, pgAdmin only showed the structure of those tables; I still didn't know how to view their content. Referring to "2 Ways to View the Structure of a Table in PostgreSQL", I typed
SELECT
    *
FROM
    export.repositories / export.cves / export.commits
WHERE
    TRUE
to generate and view the corresponding tables in pgAdmin 4, for example the final cves table.
5. In the end
Looking back, these are all easy steps, but for someone unfamiliar with the tools or operations, they could cost several days of searching and typing, step by step, for one simple purpose. I hope this post can be useful to someone like me.
However, I am not very familiar with psql commands or with PostgreSQL in general; as a matter of fact, I had never used them before. So I'm wondering if someone could point out mistakes I may have made in those attempts, or offer suggestions for my dilemma.
First, determine the format of your dump.
Try to read the header (the first 5 characters) of the dump file.
If it begins with PGDMP, it is a binary/custom-format dump; otherwise it is SQL (a human-readable format).
- use pg_restore to import a binary dump:
$ pg_restore -U postgres -d <dbname> file.dump
- use psql to import a plain-text SQL dump:
$ psql -U postgres -d <dbname> < file.dump
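For example, a quick way to check the header (a sketch; head is available on Unix and in Git Bash on Windows, and the file name is taken from the question):
$ head -c 5 vccfinder-database.dump
If this prints PGDMP, use pg_restore; otherwise (as in this thread, where pg_restore itself reported a text-format dump) use psql.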
Solved, as I've demonstrated above.

q - cannot load log4q

I would like to use log4q. I downloaded the log4q.q file to my %QHOME% directory. When I try to load the script
C:\Dev\q\w32\q.exe -p 5000
q) \l log4q.q
I get
'
[0] (<load>)
)
When I try the same in QPad after connecting to the localhost server I get
'.log4.q
(attempt to use variable .log4.q without defining/assigning first (or user-defined signal))
which I find strange because I can switch to non-existing namespaces in the console without any issues.
Thanks for the help!
It looks like a typo in the first line, stemming from a recent change of namespace from .l to .log4q.
I think the first line should be:
\d .log4q
not
\d .log4.q
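A quick way to confirm the fix after editing the file is to reload the script and list the namespace contents (a sketch; applying key to a namespace symbol returns the names defined in it):
q)\l log4q.q
q)key `.log4q
The load should now succeed, and the second command should list the functions the script defines.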

ansible - how to check success of importing sql file into postgres

I'm trying to crash-course myself on Ansible... and I've run into a scenario where I need to import a file into Postgres. The postgres module for Ansible doesn't have all the commands that the mysql module does... so I've had to find an alternative way to run SQL commands against the db.
I'm using the shell command. However, I don't know how to check whether the shell command was successful or not.
Here's what my playbook looks like so far:
- hosts: webservers
  tasks:
    - block:
        - debug: msg='Start sql insert play...'
        - copy: src=file.dmp dest=/tmp/file.dmp
        - debug: msg='executing sql file...'
        - shell: psql -U widgets widgets < /tmp/file.dmp
        - debug: msg='all is well'
          when: result|succeeded
      rescue:
        - debug: msg='Error'
      always:
        - debug: msg='End of Play'

# - block:
#     - name: restart secondarywebservers
#     - debug: msg='Attempting to restart secondary servers'
# - hosts: websecondaries
What I ultimately want to do is start the second block of code only when the first block has been successful. For now, just to learn how conditionals work, I'm trying to see if I can print a message to the screen when I know for sure the SQL file executed.
It fails with the following error message:
TASK [debug] *******************************************************************
fatal: [10.1.1.109]: FAILED! => {"failed": true, "msg": "The conditional check 'result|succeeded' failed. The error was: |failed expects a dictionary\n\nThe error appears to have been in '/etc/ansible/playbooks/dbupdate.yml': line 9, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - shell: psql -U openser openser < /tmp/file.dmp\n - debug: msg='all is well'\n ^ here\n"}
I'm still doing some research on how conditionals work... so it could be that my syntax is just wrong. But the bigger question is this: since there is no native Ansible/PostgreSQL import command, I realize that there's no way for Ansible to know that the commands in "file.dmp" really created my database records...
I could add a SELECT statement to file.dmp to try to select the record I just created... but how do I pass that back to Ansible?
Just wondering if someone has some ideas on how I could accomplish something like this.
Thanks.
EDIT 1
Here's what the contents of the test "file.dmp" contains:
INSERT INTO mytable VALUES (DEFAULT, 64, 1, 1, 'test', 0, '^(.*)$', '\1');
EDIT 2
I'm going to try to do something like this:
Copy (Select * From mytable where ppid = 64) To '/tmp/test.csv' With CSV;
after the insert statements... and then have ansible check for this file (possibly using the lineinfile command) as a way to prove to myself that the insert worked.
You just need to register that result variable.
Changing your play to look like this should work:
- hosts: webservers
  tasks:
    - block:
        - debug: msg='Start sql insert play...'
        - copy: src=file.dmp dest=/tmp/file.dmp
        - debug: msg='executing sql file...'
        - shell: psql -U widgets widgets < /tmp/file.dmp
          register: result
        - debug: msg='all is well'
          when: result|succeeded
      rescue:
        - debug: msg='Error'
      always:
        - debug: msg='End of Play'
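Regarding EDIT 2: rather than writing a CSV file and checking for it, you could run the SELECT through psql and inspect its output directly. A sketch, reusing the table and ppid value from the question (-tAc just makes psql print bare tuples):
- shell: psql -U widgets -tAc "SELECT count(*) FROM mytable WHERE ppid = 64" widgets
  register: rowcheck
- debug: msg='insert verified'
  when: rowcheck.stdout|int > 0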

Using ftp-deploy with load-grunt-tasks causes errors

I've split my Gruntfile into multiple files using load-grunt-tasks, but I seem to get an error when using ftp-deploy. I've tried a few different things, and I suspect that the hyphen (-) in "ftp-deploy" might be causing problems.
I'm getting the following error:
Running "ftp-deploy:theme" (ftp-deploy) task
Verifying property ftp-deploy.theme exists in config...ERROR
>> Unable to process task.
Warning: Required config property "ftp-deploy.theme" missing. Use --force to continue.
When running:
grunt "ftp-deploy:theme" --verbose
My ftp-deploy script looks as follows:
# FTP DEPLOY
# -------
module.exports =
'ftp-deploy':
theme:
auth:
host: 'host.com'
port: 21
authKey: 'key'
src: 'drupal/sites/all/themes/theme_name'
dest: 'www/host.com/sites/all/themes/theme_name'
I've tried running it without encapsulating it inside the "theme:" object, which works, but that's essentially not what I want, as I have different folders to transfer.
Any ideas as to what a solution might be?
I found the answer myself. The top-level key
'ftp-deploy':
should of course not be included in the file.
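For reference, this is presumably what the file should look like (a sketch based on the config from the question; the task name comes from the config file itself rather than from a key inside it):
# FTP DEPLOY
# -------
module.exports =
  theme:
    auth:
      host: 'host.com'
      port: 21
      authKey: 'key'
    src: 'drupal/sites/all/themes/theme_name'
    dest: 'www/host.com/sites/all/themes/theme_name'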

unicode characters in a Smalltalk FFI call to OpenDBX

I need to insert some strings containing non-ASCII characters into the database (Postgres). Here is a minimal example. I get "Could not coerce arguments" on <cdecl: long 'odbx_query' (ulong char* ulong) module: 'opendbx'>. From what I understand it is an FFI error and the call didn't even reach the database backend, but I'm not sure.
| conn settings sql |
settings := DBXConnectionSettings
    host: 'host.com'
    port: '5432'
    database: 'grss'
    userName: 'username'
    userPassword: 'password'.
conn := DBXConnection platform: DBXPostgresPlatform new settings: settings.
conn connectAndOpen.
sql := 'select ''', (WideString fromPacked: 269), ''' from dual'.
conn execute: sql.
conn close.
conn disconnect.
I think I had the same problem. One should encode the data using the same encoding as the server. Currently you should be able to specify the encoding in the following way:
settings := DBXConnectionSettings
    host: 'host.com'
    port: '5432'
    database: 'grss'
    userName: 'username'
    userPassword: 'password';
    encodingStrategy: (DBXStaticEncoding newForEncoding: #utf8).
If the encoding is not known, one could use DBXAutomaticEncoding instead of DBXStaticEncoding.
This should work with a PostgreSQL database.
The problem seems to be the WideString: it seems that FFI cannot convert WideString instances to a C char*.
Can you use a normal ByteString instead of the wide one? Maybe FFI could be fixed so that it can do the conversion?
I still don't know how to reply to someone here on Stack Overflow. Anyway, what Panu says should work:
settings := DBXConnectionSettings
    host: 'host.com'
    port: '5432'
    database: 'grss'
    userName: 'username'
    userPassword: 'password';
    encodingStrategy: (DBXStaticEncoding newForEncoding: #utf8).
without needing to use UTF8TextConverter directly. That's the way to do it with SqueakDBX, and it has nothing to do with GlorpDBX; it is just plain SqueakDBX. If the latest ConfigurationOfSqueakDBX is not updated, just update to the latest versions using the Monticello Browser.
FFI's char* wants a ByteString. Maybe Postgres can accept UTF-8 directly? If so, you would just have to say squeakToUtf8.
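A minimal sketch of that idea, reusing the query from the question (squeakToUtf8 is the standard Squeak conversion to a UTF-8 encoded ByteString; whether the server accepts it depends on the connection encoding):
"build the query as before, then encode it as a UTF-8 ByteString so FFI gets a char*-compatible argument"
sql := ('select ''', (WideString fromPacked: 269), ''' from dual') squeakToUtf8.
conn execute: sql.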
Fixed by using
UTF8TextConverter >> convertToSystemString
and
UTF8TextConverter >> convertFromSystemString