Perl MySQL Cruddy! Assistance

I am fairly new to database programming and am trying to get a basic CRUD app going. Using Cruddy! I have a very limited application that reads the data dictionary and creates forms based on each table.
As several tables have extensive foreign key entries, I want my app to perform the join operations that would be necessary for each foreign key column to be displayed as the entries to which the key refers. Cruddy! claims to have this ability - it uses CGI::AutoForm for the form creation. To get a form up and running, you have to provide entries on a column-by-column basis to a reference table ui_table_column.
Rather than writing SQL statements for all of my tables and their affiliated columns, I'm trying to get the process right for a single column.
From my DDL for this table:
CONSTRAINT `fk_Holder_Sample1`
FOREIGN KEY (`sample_id`)
REFERENCES `sample` (`sample_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
And my attempts at setting up the AutoForm SQL entries:
INSERT INTO ui_table_column (
table_name, field_name, appear_order, heading, searchable, updatable, insertable, input_control_type, multi_insert_delimiter,
search_control_type, search_mult_select, use_data_dict, datatype, default_value, required, input_size, input_maxlength, brief_heading,
alt_mask_field, mask_table_name, mask_field_name, id_field_name, no_cache, radio_checkbox_cols, field_group, element_attrs, help_summary)
VALUES (
'SAMPLE', 'SAMPLE_ID', 10, 'ID', 'Y', 'N', 'N', 'TEXT', NULL,
'SELECT', 4, 'Y', NULL, NULL, NULL, NULL, NULL, NULL,
NULL, 'sample', 'name', 'sample_id', 'Y', NULL, NULL, NULL, NULL);
INSERT INTO ui_table_column (
table_name, field_name, appear_order, heading, searchable, updatable, insertable, input_control_type, multi_insert_delimiter,
search_control_type, search_mult_select, use_data_dict, datatype, default_value, required, input_size, input_maxlength, brief_heading,
alt_mask_field, mask_table_name, mask_field_name, id_field_name, no_cache, radio_checkbox_cols, field_group, element_attrs, help_summary)
VALUES (
'SAMPLE', 'SAMPLE_NAME', 20, 'Name', 'Y', 'Y', 'Y', 'TEXT', NULL,
'MATCH TEXT', NULL, 'Y', NULL, NULL, NULL, NULL, NULL, 'Name',
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO ui_table_column (
table_name, field_name, appear_order, heading, searchable, updatable, insertable, input_control_type, multi_insert_delimiter,
search_control_type, search_mult_select, use_data_dict, datatype, default_value, required, input_size, input_maxlength, brief_heading,
alt_mask_field, mask_table_name, mask_field_name, id_field_name, no_cache, radio_checkbox_cols, field_group, element_attrs, help_summary)
VALUES (
'HOLDER', 'SAMPLE_ID', 30, 'sample', 'Y', 'Y', 'Y', 'SELECT', NULL,
'SELECT', 4, 'Y', NULL, NULL, NULL, NULL, NULL, 'Sample',
NULL, 'sample', 'NAME', 'SAMPLE_ID', 'Y', NULL, NULL, NULL, NULL);
When I refresh my app page (both just refreshing the browser and restarting via apachectl) there is no change - that is, I still see Sample ID as a field on the Holder page.
Has anyone had success with this or can advise me on what I'm doing wrong?
EDIT: I take the silence from SO as an indication that this particular framework has not seen widespread use. I would like to open my question up a little, then, and ask: what solutions have you used? I am actually experimenting with Catalyst::Plugin::AutoCRUD.

Answered after the asker had already moved on to another framework, but for future reference: these fields must be in UPPER CASE.
For the example above, the first and third insert statements would have:
(alt_mask_field, mask_table_name, mask_field_name, id_field_name) = (NULL,'SAMPLE','NAME','SAMPLE_ID').
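Sketched against the third insert statement above, only the four mask-related values change:
INSERT INTO ui_table_column (
table_name, field_name, appear_order, heading, searchable, updatable, insertable, input_control_type, multi_insert_delimiter,
search_control_type, search_mult_select, use_data_dict, datatype, default_value, required, input_size, input_maxlength, brief_heading,
alt_mask_field, mask_table_name, mask_field_name, id_field_name, no_cache, radio_checkbox_cols, field_group, element_attrs, help_summary)
VALUES (
'HOLDER', 'SAMPLE_ID', 30, 'sample', 'Y', 'Y', 'Y', 'SELECT', NULL,
'SELECT', 4, 'Y', NULL, NULL, NULL, NULL, NULL, 'Sample',
NULL, 'SAMPLE', 'NAME', 'SAMPLE_ID', 'Y', NULL, NULL, NULL, NULL);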

I wound up using the module in my edit. I will flag this as closed tomorrow.

Related

TSQL - Select values with same ID

I have a view like this:
[screenshot of the view]
The "NDocumento" column is populated only in the first row of a transaction, by design. These rows are grouped by the column "NMov", which is the ID.
Since this is a view, I would like to populate each empty "NDocumento" record, through a SELECT statement, with the corresponding value from the first row of its transaction.
As you can see from the screenshot, this is MS SQL Server 2008, so the lack of LAG() makes the game harder.
I would immensely appreciate any help,
thanks
Try this:
SELECT
T1.NDocumento
, T2.NMov
, T2.NRiga
-- , T2. Rest of the fields
FROM NDocumentoTable T1
JOIN NDocumentoTable T2 ON T2.NMov = T1.NMov
WHERE T1.NRiga = 1
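If you need to keep any non-empty NDocumento values untouched, a variant of the same self-join (a sketch, assuming NRiga = 1 always marks the first row of a transaction) would be:
SELECT
CASE WHEN ISNULL(T2.NDocumento, '') = ''
THEN T1.NDocumento
ELSE T2.NDocumento END AS NDocumento
, T2.NMov
, T2.NRiga
FROM NDocumentoTable T1
JOIN NDocumentoTable T2 ON T2.NMov = T1.NMov
WHERE T1.NRiga = 1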
I used LAG() over a partition of NMov, Causale based on your data; you can change the partition to match your requirements. The logic is that you take the previous row's value whenever NDocument is empty within the given partition. (Note that LAG() requires SQL Server 2012 or later, so this will not work on 2008.)
CREATE TABLE myTable_1
(
NMov int
,NRiga int
,CodiceAngrafica varchar(100)
,Causale varchar(100)
,DateRegistration date
,DateDocumented date
,NDocument varchar(100)
)
INSERT INTO myTable_1 VALUES (5133, 1, '', 'V05', '01/14/2021', '01/14/2021', 'VI-2100001')
,(5133, 2, '', 'V05', null, null, '')
,(5134, 1, '', 'V05', '01/14/2021', '01/14/2021', 'VI-2100002')
,(5134, 2, '', 'V05', null, null, '')
SELECT
NMov
,NRiga
,CASE WHEN ISNULL(NDocument,'') = ''
THEN LAG(NDocument) OVER (PARTITION BY NMov, Causale ORDER BY NRiga)
ELSE NDocument END AS [NDocument]
FROM myTable_1
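With the sample rows above, the query then fills the empty second row of each NMov group, giving:
NMov  NRiga  NDocument
5133  1      VI-2100001
5133  2      VI-2100001
5134  1      VI-2100002
5134  2      VI-2100002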

Postgres query error: syntax error at or near ","

I've seen and solved this error before but I'm really lost here as to what's wrong with my query - it seems fine to me on the surface so I'm wondering if anyone has any ideas here?
Here's the query:
INSERT INTO processing_queue (
source,
description_index,
charge_index,
charge_code_index,
charge_code_label,
cpt_code_index,
hcpcs_code_index,
ms_drg_code_index,
svccd_code_index,
ndc_code_index,
no_of_drg_discharges_col_index,
revenue_code_col_index,
department_col_index,
skip_rows,
ignore_on,
sheet_index
) VALUES (
76,
NULL,
1,
0,
'Mnemonic',
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
'{0, 1}',
'Standard Price',
0
) RETURNING *;
I'm using the node-postgres library here and just entering the above as a raw query with the client.query method.
Confirmed that the query is not the problem; it works when I try it in psql. The error is somewhere in the thin wrapper code that I have written around node-postgres. I will try to close this, as the answer is probably not useful to anyone except me.

Possible to use pandas/sqlalchemy to insert arrays into sql database? (postgres)

With the following:
engine = sqlalchemy.create_engine(url)
df = pd.DataFrame({
"eid": [1,2],
"f_i": [123, 1231],
"f_i_arr": [[123], [0]],
"f_53": ["2013/12/1","2013/12/1",],
"f_53a": [["2013/12/1"], ["2013/12/1"],],
})
with engine.connect() as con:
con.execute("""
DROP TABLE IF EXISTS public.test;
CREATE TABLE public.test
(
eid integer NOT NULL,
f_i INTEGER NULL,
f_i_arr INTEGER NULL,
f_53 DATE NULL,
f_53a DATE[] NULL,
PRIMARY KEY(eid)
);
""")
df.to_sql("test", con, if_exists='append')
If I try to insert only column "f_53" (a date) it succeeds.
If I try to add column "f_53a" (a date[]) it fails with:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) column "f_53a" is of type date[] but expression is of type text[]
LINE 1: ..._53, f_53a, f_i, f_i_arr) VALUES (1, '2013/12/1', ARRAY['201...
^
HINT: You will need to rewrite or cast the expression.
[SQL: 'INSERT INTO test (eid, f_53, f_53a, f_i, f_i_arr) VALUES (%(eid)s, %(f_53)s, %(f_53a)s, %(f_i)s, %(f_i_arr)s)'] [parameters: ({'f_53': '2013/12/1', 'f_53a': ['2013/12/1', '2013/12/1'], 'f_i_arr': [123], 'eid': 1, 'f_i': 123}, {'f_53': '2013/12/1', 'f_53a': ['2013/12/1', '2013/12/1'], 'f_i_arr': [0], 'eid': 2, 'f_i': 1231})]
I specified the dtypes explicitly and it worked for me with Postgres:
# sample code
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.dialects import postgresql

pgConn = create_engine(url)  # Postgres engine; url is defined elsewhere
df.to_sql('mytable', pgConn, if_exists='append', index=False,
          dtype={'datetime': sqlalchemy.TIMESTAMP(),
                 'cur_c': postgresql.ARRAY(sqlalchemy.types.REAL),
                 'volt_c': postgresql.ARRAY(sqlalchemy.types.REAL)})
Yes, it is possible to insert [] and [][] types into Postgres from a dataframe. Unlike flat DATE values, which may be parsed correctly by SQL, DATE[] and DATE[][] values need to be converted to datetime objects first. Like so:
with engine.connect() as con:
con.execute("""
DROP TABLE IF EXISTS public.test;
CREATE TABLE public.test
(
eid integer NOT NULL,
f_i INTEGER NULL,
f_ia INTEGER[] NULL,
f_iaa INTEGER[][] NULL,
f_d DATE NULL,
f_da DATE[] NULL,
f_daa DATE[][] NULL,
PRIMARY KEY(eid)
);
""")
d = pd.to_datetime("2013/12/1")
i = 99
df = pd.DataFrame({
"eid": [1,2],
"f_i": [i,i],
"f_ia": [None, [i,i]],
"f_iaa": [[[i,i],[i,i]], None],
"f_d": [d,d],
"f_da": [[d,d],None],
"f_daa": [[[d,d],[d,d]],None],
})
df.to_sql("test", con, if_exists='append', index=False)

PostgreSQL ERROR: INSERT has more target columns than expressions, when it doesn't

So I'm starting with this...
SELECT * FROM parts_finishing;
...I get this...
id, id_part, id_finish, id_metal, id_description, date,
inside_hours_k, inside_rate, outside_material
(0 rows)
...so everything looks fine so far so I do this...
INSERT INTO parts_finishing
(
id_part, id_finish, id_metal, id_description,
date, inside_hours_k, inside_rate, outside_material
) VALUES (
('1013', '6', '30', '1', NOW(), '0', '0', '22.43'),
('1013', '6', '30', '2', NOW(), '0', '0', '32.45'));
...and I get...
ERROR: INSERT has more target columns than expressions
Now I've done a few things, like ensuring numbers aren't in quotes or are in quotes (I would love a reference guide for that with respect to integers, numeric types, etc.), after obviously counting the number of column names and values being inserted. I also tried making sure that all the commas are really commas. I'm really at a loss here. There are no other columns except for id, which is the bigserial primary key.
Remove the extra () :
INSERT INTO parts_finishing
(
id_part, id_finish, id_metal, id_description,
date, inside_hours_k, inside_rate, outside_material
) VALUES
('1013', '6', '30', '1', NOW(), '0', '0', '22.43')
, ('1013', '6', '30', '2', NOW(), '0', '0', '32.45')
;
(..., ...) in Postgres is the syntax for a row (tuple) literal; the extra set of ( ) creates a tuple of tuples, which makes no sense here.
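A minimal way to see this, using a hypothetical two-column table t(a, b):
-- one row-valued expression for two target columns: fails
INSERT INTO t (a, b) VALUES (('x', 'y'));  -- ERROR: INSERT has more target columns than expressions
-- two scalar expressions: works
INSERT INTO t (a, b) VALUES ('x', 'y');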
Also: for numeric literals you don't want the quotes:
(1013, 6, 30, 1, NOW(), 0, 0, 22.43)
, ...
(assuming all these columns have numeric types).
I had a similar problem when using SQL string composition with psycopg2 in Python, but the problem was slightly different. I was missing a comma after one of the fields.
INSERT INTO parts_finishing
(id_part, id_finish, id_metal)
VALUES (
%(id_part)s <-------------------- missing comma
%(id_finish)s,
%(id_metal)s
);
This caused psycopg2 to yield this error:
ERROR: INSERT has more target columns than expressions.
This happened to me in a large insert. Everything was OK comma-wise; it took me a while to notice I was inserting into the wrong table. Of course the DB does not know your intentions.
Copy-paste is the root of all evil ... :-)
I faced the same issue as well. It is raised when the number of target columns and the number of values given do not match.
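For example, with a hypothetical table t(a, b, c):
INSERT INTO t (a, b, c) VALUES (1, 2);  -- ERROR: INSERT has more target columns than expressions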
I had the same error on Express.js with PostgreSQL and solved it. The error fired while inserting a record and was caused by a mismatch between the column names and the values being passed. Here is the error object:
ERROR : error: INSERT has more target columns than expressions
name: 'error',
length: 116,
severity: 'ERROR',
code: '42601',
detail: undefined,
hint: undefined,
position: '294',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'analyze.c',
line: '945',
And here is my code:
INSERT INTO student (
first_name, last_name, email, phone
)
VALUES
($1, $2, $3, $4)
with the parameter array:
values: [ first_name, last_name, email, phone ]
In my case there was a syntax error in a subquery.

DBIx::class find function returns hash when value expected

Database schema:
create table requests(
rid integer primary key autoincrement,
oid integer references orders (oid),
command varchar(5),
account varchar(50),
txn_id varchar(12),
md5 varchar(30),
txn_date varchar(14),
sum float,
regdt timestamp default current_timestamp,
constraint u1 unique (command,txn_id)
);
create table orders (
oid integer primary key autoincrement,
cid integer references customers (cid),
pid integer references providers (pid),
account varchar(50),
amount float
);
Mapped to code by DBIx::Class::Schema::Loader.
My controller code :
my $req= $schema->resultset('Request');
my $order= $schema->resultset('Order');
my $r= $req->find(
{ command => 'check',
txn_id => $txn_id,
},
{ key => 'command_txn_id_unique' }
);
my $oid=$r->oid;
$req->create(
{ command => $command,
account => $account,
txn_id => $txn_id,
md5 => $md5,
txn_date => $txn_date,
sum => $sum,
oid => $oid
}
);
my $o = $order->find($oid);
$o->sum($sum);
$o->update;
My traced SQL with DBIC_TRACE=1:
SELECT me.rid, me.oid, me.command, me.account, me.txn_id, me.md5, me.txn_date, me.sum, me.regdt FROM requests me
WHERE ( ( me.command = ? AND me.txn_id = ? ) ): 'check', '1358505665'
SELECT me.oid, me.cid, me.pid, me.account, me.amount FROM orders me
WHERE ( me.oid = ? ): '18'
INSERT INTO requests ( account, command, md5, oid, sum, txn_date, txn_id) VALUES ( ?, ?, ?, ?, ?, ?, ? ): '1', 'pay', '44F4BC73D17E3FA906F658BB5916B7DC', '18', '500', '20130118104122', '1358505665'
DBIx::Class::Storage::DBI::SQLite::_dbi_attrs_for_bind(): Non-integer value supplied for column 'me.oid' despite the integer datatype at /home/.../lib/TestPrj/Test.pm line 128
SELECT me.oid, me.cid, me.pid, me.account, me.amount FROM orders me WHERE ( me.oid = ? ): 'TestPrj::Model::Result::Order=HASH(0x3dea448)'
datatype mismatch: bind param (0) =HASH(0x3dea448) as integer at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 1765.
I don't understand: why in the first select query is $oid = 18, as expected (it really is 18), but in the second select query $oid is a HASH? It is not redefined anywhere in my code.
UPD:
Using Data::Dumper on @akawhy's advice, I saw that $oid is a blessed HASH.
So it looks like there is a different context:
scalar in { ... oid => $oid ... }
and hash in find(...).
I don't know why it is not a hash in the first case.
But when I changed it to $oid = $r->get_column('oid'), everything works fine.
$resultset->find returns a Result object.
You didn't paste your ResultSource classes but as you wrote that you generated them with Schema::Loader I assume the 'oid' column accessor method is the same as the 'oid' relationship accessor. In that case it returns different things depending on scalar vs. list context.
I suggest renaming your relationship so you have two different accessors, one for the column value and one for the relationship.
I prefix all relationships with 'rel_' but you might prefer a different naming standard.
I think in my $oid=$r->oid; the $oid is a ref. You can use ref() or Data::Dumper to see its type.
Maybe you should use $oid->oid.
And pay attention to this error:
DBIx::Class::Storage::DBI::SQLite::_dbi_attrs_for_bind(): Non-......