Stop Zend_Db from quoting Sybase BIT datatype field values

I'm using the Pdo_Mssql adapter against a Sybase database and working around the issues I've encountered. One pesky issue remaining is Zend_Db's insistence on quoting BIT field values. When running the following for an insert:
$row = $this->createRow();
...
$row->MyBitField = $data['MyBitField'];
...
$row->save();
FreeTDS log output shows:
dbutil.c:87:msgno 257: "Implicit conversion from datatype 'VARCHAR' to 'BIT' is not allowed. Use the CONVERT function to run this query."
I've tried casting values as int and bool, but this seems to be a table metadata problem, not a data type problem with input.
Fortunately, Zend_Db_Expr works nicely. The following works, but I'd like to be database server agnostic.
$row->MyBitField = new Zend_Db_Expr("CONVERT(BIT, {$data['MyBitField']})");
I've verified that describeTable() is returning BIT for the field. Any ideas on how to get ZF to stop quoting MS SQL/Sybase BIT fields?

You can simply try this (it works for the MySQL BIT type):
$row->MyBitField = new Zend_Db_Expr($data['MyBitField']);
Since Zend_Db_Expr is inserted into the generated SQL verbatim, make sure the value is a sanitized integer first, or you open the door to SQL injection.
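For comparison, here is a loose analogy in Python/SQLAlchemy (hypothetical table and column names, not Zend Framework): literal_column plays the same role as Zend_Db_Expr, embedding its text in the SQL unquoted so the adapter's quoting never touches it.
# Loose analogy only: SQLAlchemy's literal_column, like Zend_Db_Expr,
# renders its text into the SQL statement verbatim, without quoting.
from sqlalchemy import column, insert, literal_column, table

flags = table("flags", column("my_bit_field"))  # hypothetical table
stmt = insert(flags).values(my_bit_field=literal_column("1"))
print(stmt)  # INSERT INTO flags (my_bit_field) VALUES (1)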

Related

operator does not exist: # timestamp without time zone

In a parameterized query issued from C# code to PostgreSQL 10.14 via the dotConnect 7.7.832 .NET connector, I select either a parameter value or, if the parameter is NULL, the local timestamp:
using (var cmd = new PgSqlCommand("select COALESCE(#eventTime, LOCALTIMESTAMP)", connection)
When executed, this statement throws the error in subject. If I comment out the corresponding parameter
cmd.Parameters.Add("#eventTime", PgSqlType.TimeStamp).Value = DateTime.Now;
and hardcode
using (var cmd = new PgSqlCommand("select COALESCE('11/6/2020 2:36:58 PM', LOCALTIMESTAMP)", connection)
or if I cast the parameter
using (var cmd = new PgSqlCommand("select COALESCE(cast(#eventTime as timestamp without time zone), LOCALTIMESTAMP)", connection)
then it works. Can anyone explain what the # operator in the error refers to, and why the error occurs?
In the case that doesn't work, your .NET connection library seems to be passing an SQL command containing a literal # to the database, rather than substituting it. The database assumes you are trying to use # as a user-defined operator, as it doesn't know what else it could possibly be. But no such operator has been defined.
Why is it doing that? I have no idea. That is a question about your .NET connection library, not about PostgreSQL itself, so you might want to add a tag for it.
The error message you get from the database should include the text of the query it received (as opposed to the text you think was sent), and it is often useful to see that in situations like this. If that text is not present in the client's error message (some connection libraries do not faithfully pass this info along), you should be able to pull it directly from the PostgreSQL server's log file.
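For what it's worth, here is a minimal sketch of the same query in a stack that does substitute the parameter (Python/psycopg2, not dotConnect; the connection string is an assumption). The cast gives the server a concrete type even when the bound value is NULL:
# Sketch in Python/psycopg2: the driver substitutes %(event_time)s itself,
# and the CAST lets the server infer a type when the parameter is NULL.
import datetime
import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed connection string
with conn.cursor() as cur:
    event_time = None  # or e.g. datetime.datetime.now()
    cur.execute(
        "SELECT COALESCE(CAST(%(event_time)s AS timestamp), LOCALTIMESTAMP)",
        {"event_time": event_time},
    )
    print(cur.fetchone()[0])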

Input string was not in correct format while inserting data to PostgreSQL through Entity Framework in .NET Core using a dynamic query

I am getting an error while inserting data into PostgreSQL with .NET Core Entity Framework.
The error is: Input string was not in correct format.
This is the query I am executing:
INSERT INTO public."MedQuantityVerification"("Id","MedId","ActivityBy","ActivityOn","Quantity","ActivityType","SupposedOn","Note") Values(7773866,248953,8887,'7/14/2018 10:43:43 PM','42.5 qty',5,NULL,'I counted forty two {point} five.')
However, when I run that query directly against PostgreSQL it works fine, so it looks like an issue on the C# side, though I don't know what. The problem also seems to be with {point}.
This is how I execute the dynamic query:
db.Database.ExecuteSqlRaw(query);
You have to escape the curly brackets:
{point} should be {{point}}
ExecuteSqlRaw utilizes curly braces to parameterize the raw query, so if your query naturally includes them, as OP's does, the function is going to try to parse them. Doubling up the braces as in Koen Schepens' answer acts as an escape sequence and tells the function not to parse them as a parameter.
The documentation for the function uses the following example to illustrate why it behaves this way:
var userSuppliedSearchTerm = ".NET";
context.Database.ExecuteSqlRaw("UPDATE Blogs SET Rank = 50 WHERE Name = {0}", userSuppliedSearchTerm);
Note that you'll want to use this to your advantage any time you're accepting user input and passing it to ExecuteSqlRaw. If the curly brace is in a parameter instead of the main string, it doesn't need to be escaped.
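The brace-doubling rule is the same convention Python format strings use, which makes for a quick way to see the mechanics outside EF Core (plain Python, analogy only):
# Analogy only, not EF Core: composite-format-style placeholders treat
# doubled braces as an escape that collapses to a literal brace.
template = "UPDATE Blogs SET Rank = 50 WHERE Name = {0}"
print(template.format("'.NET'"))  # the {0} placeholder is substituted

note = "I counted forty two {{point}} five."
print(note.format())  # prints: I counted forty two {point} five.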

How to insert the same value into multiple locations of a psycopg2 query statement using a dict? [duplicate]

I have a Python script that runs a pgSQL file through SQLAlchemy's connection.execute function. Here's the block of code in Python:
results = pg_conn.execute(sql_cmd, beg_date = datetime.date(2015,4,1), end_date = datetime.date(2015,4,30))
And here's one of the areas where the variable gets inputted in my SQL:
WHERE
( dv.date >= %(beg_date)s AND
dv.date <= %(end_date)s)
When I run this, I get a cryptic Python error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) argument formats can't be mixed
…followed by a huge dump of the offending SQL query. I've run this exact code with the same variable convention before. Why isn't it working this time?
I encountered a similar issue as Nikhil. I have a query with LIKE clauses which worked until I modified it to include a bind variable, at which point I received the following error:
DatabaseError: Execution failed on sql '...': argument formats can't be mixed
The solution is not to give up on the LIKE clause; it would be pretty crazy if psycopg2 simply didn't permit LIKE clauses. Rather, we can escape the literal % with %%. For example, the following query:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%';
would need to be modified to:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%%';
More details in the psycopg2 docs: http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
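Putting the named parameter and the escaped wildcard together, a minimal runnable sketch (table name and DSN are assumptions):
# Minimal sketch: a bound named parameter and an escaped literal %
# wildcard in the same psycopg2 query.
import datetime
import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed DSN
with conn.cursor() as cur:
    cur.execute(
        "SELECT * FROM people "
        "WHERE start_date > %(beg_date)s AND name LIKE 'John%%'",
        {"beg_date": datetime.date(2015, 4, 1)},
    )
    rows = cur.fetchall()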
As it turned out, I had used a SQL LIKE operator in the new SQL query, and the % wildcard was interfering with Python's parameter escaping. For instance:
dv.device LIKE 'iPhone%' or
dv.device LIKE '%Phone'
Another answer offered a way to un-escape and re-escape, which I felt would add unnecessary complexity to otherwise simple code. Instead, I used pgSQL's ability to handle regex to modify the SQL query itself. This changed the above portion of the query to:
dv.device ~ E'^iPhone.*' or
dv.device ~ E'.*Phone$'
So for others: you may need to change your LIKE operators to regex '~' to get it to work. Just remember that it'll be WAY slower for large queries. (More info here.)
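One thing to watch when translating: LIKE 'iPhone%' is anchored at the start of the string, while ~ matches anywhere, so the regex needs an explicit ^. A quick sanity check with Python's re module (approximating PostgreSQL's ~):
# Python's re.search, like PostgreSQL's ~, looks for a match anywhere
# in the string, so anchors must be spelled out.
import re

assert re.search(r"^iPhone", "iPhone 5")      # LIKE 'iPhone%' (starts with)
assert not re.search(r"^iPhone", "my iPhone")
assert re.search(r"Phone$", "my iPhone")      # LIKE '%Phone' (ends with)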
In my case, it turned out I had a % in a SQL comment:
/* Any future change in the testing size will not require
a change here... even if we do a 100% test
*/
This works fine:
/* Any future change in the testing size will not require
a change here... even if we do a 100pct test
*/

How to save an IP address as binary using Eloquent and PostgreSQL?

First off, here's the SO question+answer where I got my information - laravel 4 saving ip address to model.
So my table will potentially have millions of rows; to keep storage low I opted for option 2 - using the Schema builder's binary() column and converting/storing IPs as binary with the help of Eloquent's accessors/mutators.
Here's my table:
Schema::create('logs', function (Blueprint $table) {
    $table->increments('id');
    $table->binary('ip_address'); // PostgreSQL reports this column as BYTEA
    $table->text('route');
    $table->text('user_agent');
    $table->timestamp('created_at');
});
The first problem I ran into was saving the IP address. I set an accessor/mutator on my model to convert the IP string into binary using inet_pton() and inet_ntop(). Example:
public function getIpAddressAttribute($ip)
{
    return inet_ntop($ip);
}

public function setIpAddressAttribute($ip)
{
    $this->attributes['ip_address'] = inet_pton($ip);
}
Trying to save an IP address resulted in the whole request failing - nginx would just return a 502 bad gateway error.
OK. So I figured it had to be something with Eloquent/PostgreSQL not playing well together while passing the binary data.
I did some searching and found the pg_escape_bytea() and pg_unescape_bytea() functions. I updated my model as follows:
public function getIpAddressAttribute($ip)
{
    return inet_ntop(pg_unescape_bytea($ip));
}

public function setIpAddressAttribute($ip)
{
    $this->attributes['ip_address'] = pg_escape_bytea(inet_pton($ip));
}
Now, I'm able to save an IP address without a hitch (at least, it doesn't throw any errors).
The new problem I'm experiencing is when I try to retrieve and display the IP. pg_unescape_bytea() fails with pg_unescape_bytea() expects parameter 1 to be string, resource given.
Odd. So I dd()'d $ip in the accessor; the result is resource(4, stream). Is that expected? Or is Eloquent having trouble working with the column type?
I did some more searching and found it's possible that pg_unescape_bytea() is not properly unescaping the data - https://bugs.php.net/bug.php?id=45964.
After much headbanging and hairpulling, it became apparent that I might be approaching this problem from the wrong direction, and need some fresh perspective.
So, what am I doing wrong? Should I be using Postgres' BIT VARYING instead of BYTEA by altering the column type --
DB::statement("ALTER TABLE logs ALTER COLUMN ip_address TYPE BIT VARYING(16) USING CAST(ip_address AS BIT VARYING(16))");`
-- Or am I merely misusing pg_escape_bytea / pg_unescape_bytea?
All help is appreciated!
As already said in the comments on your question: in your specific case you should use the corresponding PostgreSQL data type, and handling will be much easier. Compared to MySQL you will have a lot of other types in PostgreSQL (like JSON); there is a PostgreSQL data type overview page for further reference.
That said, other people could stumble upon a similar problem with bytea fields. The reason you got a resource instead of a string is that PHP's Postgres driver hands bytea fields back as streams. A very naïve approach would be to first read the stream and then return the data:
public function getDataAttribute($value)
{
    // This will kill your server under high load with large values.
    $data = fgets($value);
    return pg_unescape_bytea($data);
}
You can imagine that this could be a problem when multiple people try to fetch big files (hundreds of MiB or a couple of GiB), since large data objects would need a lot of memory on the server (this could even become a problem on mobile devices without swap). In such cases you should work with streams on both the server and the client, and fetch only the data you really need.
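If you do switch to the native inet type as suggested above, the round trip becomes trivial. A hedged sketch in Python/psycopg2 rather than Laravel (table name and DSN are assumptions):
# Sketch of the native-type approach: an inet column accepts and returns
# the text form directly, with no bytea escaping or stream handling.
import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute(
        "CREATE TABLE IF NOT EXISTS logs (id serial PRIMARY KEY, ip_address inet)"
    )
    cur.execute("INSERT INTO logs (ip_address) VALUES (%s)", ("2001:db8::1",))
    cur.execute("SELECT ip_address FROM logs LIMIT 1")
    print(cur.fetchone()[0])  # '2001:db8::1'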

How do I map a fixed-length string to PostgreSQL using GORM?

I have a 9-character string I want to store in PostgreSQL (9.3) as character(9). Just as important, I want validation to pass when the database is already correct, which is not happening. I am generating the schema using the Database Migration plugin, which uses Hibernate tools. Here's what I've tried so far:
static mapping = {
    poleID column: "pole_id", sqlType: "character", length: 9
}
That is exactly how it is created in the DB, however when I do a dbm-gorm-diff, it tries to modify the column thusly:
changeSet(author: "phil (generated)", id: "1401652394765-3") {
    modifyDataType(columnName: "pole_id", newDataType: "character", tableName: "unit")
}
You can see length is ignored. I also tried specifying it with:
static mapping = {
    poleID column: "pole_id", sqlType: "character(9)"
}
Again, it tries to modify that column, which is already correct, to character(9). How do I specify the mapping so it sees that the DB is already correct?
I just had a similar problem with MySQL and Grails 2.3.7. I tried to use
sqlType: 'char', length: 128,
but the length was ignored; schema-export showed it as 'char'. In MySQL, char vs. varchar matters, so I worked around it by specifying
sqlType: 'char(128)'
I think it's simply a bug in Grails, unless someone expressly wanted this NOT to work (but then it should be better documented).