operator does not exist: @ timestamp without time zone - postgresql

In a parameterized query issued from C# code to PostgreSQL 10.14 via the dotConnect 7.7.832 .NET connector, I select either a parameter value or, if the parameter is NULL, the local timestamp:
using (var cmd = new PgSqlCommand("select COALESCE(@eventTime, LOCALTIMESTAMP)", connection))
When executed, this statement throws the error in the subject line. If I comment out the corresponding parameter
cmd.Parameters.Add("@eventTime", PgSqlType.TimeStamp).Value = DateTime.Now;
and hardcode
using (var cmd = new PgSqlCommand("select COALESCE('11/6/2020 2:36:58 PM', LOCALTIMESTAMP)", connection))
or if I cast the parameter
using (var cmd = new PgSqlCommand("select COALESCE(cast(@eventTime as timestamp without time zone), LOCALTIMESTAMP)", connection))
then it works. Can anyone explain what the @ operator in the error refers to, and why the error occurs?

In the case that doesn't work, your .NET connection library seems to be passing an SQL command containing a literal @ to the database, rather than substituting it. The database assumes you are trying to use @ as a user-defined operator, as it doesn't know what else it could possibly be. But no such operator has been defined.
Why is it doing that? I have no idea. That is a question about your .NET connection library, not about PostgreSQL itself, so you might want to add a tag for it.
The error message you get from the database should include the text of the query it received (as opposed to the text you think was sent), and it is often useful to see that in situations like this. If that text is not present in the client's error message (some connection libraries do not faithfully pass this info along), you should be able to pull it directly from the PostgreSQL server's log file.
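For reference, a minimal sketch of the cast workaround from the question, wired up end to end; connection is assumed to be an open PgSqlConnection, and PgSqlCommand/PgSqlType come from dotConnect, as in the question:

DateTime? eventTime = null; // a null here falls through to LOCALTIMESTAMP server-side
using (var cmd = new PgSqlCommand(
    "select COALESCE(cast(@eventTime as timestamp without time zone), LOCALTIMESTAMP)",
    connection))
{
    // DBNull.Value, not a C# null, is what maps to SQL NULL in a parameter
    cmd.Parameters.Add("@eventTime", PgSqlType.TimeStamp).Value = (object)eventTime ?? DBNull.Value;
    var result = cmd.ExecuteScalar();
}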

Related

Why are identical SQL calls behaving differently?

I'm working on a web app in Rust. I'm using Tokio Postgres, Rocket and Tera (this may be relevant).
I'm using the following to connect to my DB, which doesn't fail in either case.
let (sql_cli, connection) = match tokio_postgres::connect("postgresql://postgres:*my_password*@localhost:8127/*AppName*", NoTls).await {
Ok((sql_cli, connection)) => (sql_cli, connection),
Err(e) => return Err(Redirect::to(uri!(error_display(MyError::new("Failed to make SQLClient").details)))),
};
My query is as follows. I keep my queries in a separate file (I'm self-taught and find that easier).
let query= sql_cli.query(mycharactersquery::get_characters(user_id).as_str(), &[]).await.unwrap();
The get_characters function is as follows. It takes a user ID and should return the characters that they have made in the past.
pub fn get_characters(user_id: i16) -> String {
format!("SELECT * FROM player_characters WHERE user_id = {} ORDER BY char_id ASC;", user_id)
}
In my main file, I have one GET which is /mycharacters/<user_id>, which works. This GET returns an HTML file. I have another GET which is /<user_id>, which returns a Tera template. The first works fine and loads the characters; the second doesn't: it just loads indefinitely. I initially thought this was to do with my lack of familiarity with Tera.
After some troubleshooting, I put some printouts in my code: the ones before and after the SQL call both fire in /mycharacters/<user_id>, but only the one before writes to the terminal in /<user_id>. This makes me think that Tera isn't the issue, as the code isn't making it past the SQL call.
I've found exactly where it is going wrong, but I don't know why as it isn't giving an error.
Could someone please let me know if there is something obvious that I am missing or provide some assistance?
P.S. The database only has 3 columns, so an actual timeout isn't the cause.
I expected both of these SQL calls to function as I am connected to my database properly and the call is copied from the working call.
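For what it's worth, the pattern in the tokio_postgres documentation spawns the connection half of the returned pair onto the runtime; the Connection object performs the actual communication with the server, so if it is never awaited, queries issued on the client never complete, which would match the symptom above. A minimal sketch (connection string and error handling are illustrative):

let (sql_cli, connection) = tokio_postgres::connect(
    "postgresql://postgres:*my_password*@localhost:8127/*AppName*", NoTls).await?;
// The Connection drives the socket I/O for the Client; per the tokio_postgres
// docs it must be polled to completion, typically as its own task.
tokio::spawn(async move {
    if let Err(e) = connection.await {
        eprintln!("connection error: {}", e);
    }
});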

Setting application_name on Postgres/SQLAlchemy

Looking at the output of select * from pg_stat_activity;, I see a column called application_name, described here.
I see psql sets this value correctly (to psql...), but my application code (psycopg2/SQLAlchemy) leaves it blank.
I'd like to set this to something useful, like web.1, web.2, etc, so I could later on correlate what I see in pg_stat_activity with what I see in my application logs.
I couldn't find how to set this field using SQLAlchemy (and if push comes to shove - even with raw SQL; I'm using PostgreSQL 9.1.7 on Heroku, if that matters).
Am I missing something obvious?
The answer to this is a combination of:
http://initd.org/psycopg/docs/module.html#psycopg2.connect
Any other connection parameter supported by the client library/server can be passed either in the connection string or as keywords. The PostgreSQL documentation contains the complete list of the supported parameters. Also note that the same parameters can be passed to the client library using environment variables.
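So with psycopg2 alone, either of these should work (the dbname and user values are placeholders):

import psycopg2

# application_name is an ordinary libpq connection parameter, so it can ride
# along in the DSN string...
conn = psycopg2.connect("dbname=test user=scott application_name=myapp")
# ...or be passed as a keyword argument
conn = psycopg2.connect(dbname="test", user="scott", application_name="myapp")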
where the variable we need is:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-APPLICATION-NAME
The application_name can be any string of less than NAMEDATALEN characters (64 characters in a standard build). It is typically set by an application upon connection to the server. The name will be displayed in the pg_stat_activity view and included in CSV log entries. It can also be included in regular log entries via the log_line_prefix parameter. Only printable ASCII characters may be used in the application_name value. Other characters will be replaced with question marks (?).
combined with :
http://docs.sqlalchemy.org/en/rel_0_8/core/engines.html#custom-dbapi-args
String-based arguments can be passed directly from the URL string as query arguments: (example...) create_engine() also takes an argument connect_args which is an additional dictionary that will be passed to connect(). This can be used when arguments of a type other than string are required, and SQLAlchemy’s database connector has no type conversion logic present for that parameter.
from that we get:
e = create_engine("postgresql://scott:tiger#localhost/test?application_name=myapp")
or:
e = create_engine("postgresql://scott:tiger#localhost/test",
connect_args={"application_name":"myapp"})
If you're using the asyncpg driver, you should use
conn = await asyncpg.connect(server_settings={'application_name': 'foo'})
src - https://github.com/MagicStack/asyncpg/issues/204#issuecomment-333917251

Special character handling when fetching data from MS SQL Server using Perl DBD

I have an MS SQL Server 2008 Database, from which I am fetching data using perl DBD::Sybase module. But there are some special characters in the DB, like the Copyright symbol, Trademark symbol etc., which are not getting imported properly. Perl seems to change all of these special characters to a Question mark character. Is there a way to fix this?
I have tried specifying charset=utf8 in the connection string. The doc mentions a syb_enable_utf8 (bool) setting, but whenever I try that, I get an error:
Can't locate object method "syb_enable_utf8" via package "DBI::db"
One solution I found was this:
use Encode qw(encode_utf8);
Then, wherever you are writing data to a file or anywhere else, use Encode::encode_utf8($data);
where $data is the column/value which you have fetched from MSSQL.
I don't use DBD::Sybase but a) I use a lot of other DBDs and b) I am currently collecting information about unicode support in DBDs. According to the pod you need at least OpenClient 15.x when using syb_enable_utf8. Are you using 15.x or later? Perhaps syb_enable_utf8 is not defined if your client is less than 15.x or perhaps you have too old a version of DBD::Sybase. Unfortunately I cannot see from the Changes file when syb_enable_utf8 was added.
However, when you say "can't locate method" I think that is a clue as syb_enable_utf8 is not a method, it is an attribute (it is under Sybase Specific Attributes) in the pod. So you need to add it to your connect call or set it via a connection handle like this:
my $h = DBI->connect("dbi:Sybase:something","user","password", {syb_enable_utf8 => 1});
or
$h->{syb_enable_utf8} = 1;
You should also read the bits in the pod describing what happens when syb_enable_utf8 is set, as it appears from the documentation that it only applies to UNIVARCHAR, UNICHAR, and UNITEXT columns.
Lastly, you need to ensure you insert the data correctly in the first place. I'd guess that if it was not inserted from Perl with syb_enable_utf8 and charset=utf8 set, or if your data was not proper Unicode characters in Perl before you inserted it, you'll get garbage back.
The comment Raze2dust made had nothing to do with your issue but is worth heeding if you are going to write the data retrieved from your database elsewhere. Just remember to decode any data input to your script and encode any data output.
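In other words, the usual discipline looks something like this ($raw_bytes and $fh are placeholders for your actual input and output):

use Encode qw(decode encode);

# bytes arriving from outside the script -> Perl characters
my $text = decode('UTF-8', $raw_bytes);
# Perl characters -> bytes before they leave the script
print {$fh} encode('UTF-8', $text);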

Stop Zend_Db from quoting Sybase BIT datatype field values

I'm using the Pdo_Mssql adapter against a Sybase database and working around issues encountered. One pesky issue remaining is Zend_Db's insistence on quoting BIT field values. When running the following for an insert:
$row = $this->createRow();
...
$row->MyBitField = $data['MyBitField'];
...
$row->save();
FreeTDS log output shows:
dbutil.c:87:msgno 257: "Implicit conversion from datatype 'VARCHAR' to 'BIT' is not allowed. Use the CONVERT function to run this query."
I've tried casting values as int and bool, but this seems to be a table metadata problem, not a data type problem with input.
Fortunately, Zend_Db_Expr works nicely. The following works, but I'd like to be database server agnostic.
$row->MyBitField = new Zend_Db_Expr("CONVERT(BIT, {$data['MyBitField']})");
I've verified that the describeTable() is returning BIT for the field. Any ideas on how to get ZF to stop quoting MS SQL/Sybase BIT fields?
You can simply try this (works for the MySQL BIT type):
$row->MyBitField = new Zend_Db_Expr($data['MyBitField']);
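Since whatever you hand to Zend_Db_Expr is spliced into the SQL verbatim, it may be safer to coerce the value to an integer first if it can ever originate from user input (a sketch, not tested against Sybase):

// force 0/1 so nothing else can reach the raw SQL
$row->MyBitField = new Zend_Db_Expr((string)(int) $data['MyBitField']);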

Can the Sequence of RecordSets in a Multiple RecordSet ADO.Net resultset be determined, controlled?

I am using code similar to this Support / KB article to return multiple recordsets to my C# program.
But I don't want the C# code to be dependent on the physical sequence of the recordsets returned in order to do its job.
So my question is, "Is there a way to determine which set of records from a multiple-recordset resultset I am currently processing?"
I know I could probably decipher this indirectly by looking for a unique column name or something per resultset, but I think/hope there is a better way.
P.S. I am using Visual Studio 2008 Pro & SQL Server 2008 Express Edition.
No, because the SqlDataReader is forward only. As far as I know, the best you can do is open the reader with KeyInfo and inspect the schema data table created with the reader's GetSchemaTable method (or just inspect the fields, which is easier, but less reliable).
I spent a couple of days on this. I ended up just living with the physical order dependency. I heavily commented both the code method and the stored procedure with !!!IMPORTANT!!!, and included an #If...#End If to output the result sets when needed to validate the stored procedure output.
The following code snippet may help you.
Helpful Code
Dim fContainsNextResult As Boolean
Dim oReader As DbDataReader = Nothing
oReader = Me.SelectCommand.ExecuteReader(CommandBehavior.CloseConnection Or CommandBehavior.KeyInfo)
#If DEBUG_ignore Then
'load method of data table internally advances to the next result set
'therefore, must check to see if reader is closed instead of calling next result
Do
Dim oTable As New DataTable("Table")
oTable.Load(oReader)
oTable.WriteXml("C:\" + Environment.TickCount.ToString + ".xml")
oTable.Dispose()
Loop While oReader.IsClosed = False
'must re-open the connection
Me.SelectCommand.Connection.Open()
'reload data reader
oReader = Me.SelectCommand.ExecuteReader(CommandBehavior.CloseConnection Or CommandBehavior.KeyInfo)
#End If
Do
Dim oSchemaTable As DataTable = oReader.GetSchemaTable
'!!!IMPORTANT!!! PopulateTable expects the result sets in a specific order
' Therefore, if you suddenly start getting exceptions that only a novice would make
' the stored procedure has been changed!
PopulateTable(oReader, oDatabaseTable, _includeHiddenFields)
fContainsNextResult = oReader.NextResult
Loop While fContainsNextResult
Because you're explicitly stating the order in which to execute the SQL statements, the results will appear in that same order. In any case, if you want to programmatically determine which recordset you're processing, you still have to identify some columns in the result.
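One way to make that identification explicit, rather than relying on physical order, is to give the first column of each SELECT in the stored procedure a distinctive alias and dispatch on it; DbDataReader.GetName(0) is available before any rows have been read. A sketch in C# to match the question (the aliases and the LoadCustomers/LoadOrders handlers are hypothetical):

using (var reader = cmd.ExecuteReader())
{
    do
    {
        // GetName(0) returns the first column's alias without consuming a row,
        // so each result set can identify itself regardless of its position.
        switch (reader.GetName(0))
        {
            case "Customers_Id": LoadCustomers(reader); break;
            case "Orders_Id": LoadOrders(reader); break;
        }
    } while (reader.NextResult());
}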