I am using C++ Builder 10.2.3 (RAD Studio 10.2.3 Tokyo) with InterBase 2017.
I need to create users at runtime for my user registration.
If I build the query at runtime, so that it has no parameters, it works, but this creates problems with MBCS characters that I will explain later.
If I create the query at design time with parameters and try to set the parameters at runtime, I get the error message below:
[Application: ]
[Error] -104 335544569 Dynamic SQL Error
SQL error code = -104
Token unknown - line 2, char 14
?
The query I am using is below:
CREATE USER myuser
SET PASSWORD :mypass,
FIRST NAME :myfirstname,
LAST NAME :myname;
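For reference, a minimal sketch of how the parameters are bound at runtime with TIBQuery (the component name IBQuery1 and the edit controls are placeholders, not actual names from my form):

// Bind the named parameters of the design-time query, then execute it.
String password  = EditPassword->Text;     // assumed edit controls
String firstName = EditFirstName->Text;
String lastName  = EditLastName->Text;

IBQuery1->Close();
IBQuery1->ParamByName("mypass")->AsString      = password;
IBQuery1->ParamByName("myfirstname")->AsString = firstName;
IBQuery1->ParamByName("myname")->AsString      = lastName;
IBQuery1->ExecSQL();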
I replace the first line of the query at runtime, so no special character is involved there; in any case, InterBase cannot handle MBCS characters in the user name.
I need to use a query with parameters because my application handles multi-byte characters (MBCS), such as Chinese and Japanese, and this is the only way to be sure of a proper conversion to UTF8 in InterBase. If the MBCS characters are not converted properly, I cannot back up and restore my database: when I try to restore with MBCS characters in the first and last name, I get an error message that InterBase cannot transliterate between character sets.
Based on the error message, it appears that the query parameters are not recognized.
I tried both "TIBQuery" and "TIBSQL", with the same issue. It is also impossible to use a stored procedure: the CREATE keyword is not recognized.
So, how can I fix this?
In a parameterized query issued from C# code to PostgreSQL 10.14 via the dotConnect 7.7.832 .NET connector, I select either a parameter value or the local timestamp if the parameter is NULL:
using (var cmd = new PgSqlCommand("select COALESCE(#eventTime, LOCALTIMESTAMP)", connection))
When executed, this statement throws the error in the subject. If I comment out the corresponding parameter
cmd.Parameters.Add("#eventTime", PgSqlType.TimeStamp).Value = DateTime.Now;
and hardcode
using (var cmd = new PgSqlCommand("select COALESCE('11/6/2020 2:36:58 PM', LOCALTIMESTAMP)", connection))
or if I cast the parameter
using (var cmd = new PgSqlCommand("select COALESCE(cast(#eventTime as timestamp without time zone), LOCALTIMESTAMP)", connection))
then it works. Can anyone explain what the # operator in the error refers to, and why the error occurs?
In the case that doesn't work, your .NET connection library seems to be passing an SQL command containing a literal # to the database, rather than substituting it. The database assumes you are trying to use # as a user-defined operator, as it doesn't know what else it could possibly be, but no such operator has been defined.
Why is it doing that? I have no idea. That is a question about your .NET connection library, not about PostgreSQL itself, so you might want to add the corresponding tag.
The error message you get from the database should include the text of the query it received (as opposed to the text you think was sent), and it is often useful to see that in situations like this. If that text is not present in the client's error message (some connection libraries do not faithfully pass this information along), you should be able to pull it directly from the PostgreSQL server's log file.
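With the stock configuration, the server already logs the failing statement next to the error; a sketch of the relevant postgresql.conf setting (the value shown is the default):

# postgresql.conf -- log_min_error_statement controls which failing
# statements are written to the server log together with the error
# message; the default ('error') already covers ordinary SQL errors
# like the one above.
log_min_error_statement = error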
I program with libpq.so. I want to get the error code, which is called SQLSTATE in the SQL standard. How should I get this in my C code?
The obvious Google search for libpq get sqlstate finds the libpq-exec documentation. Searching that for SQLSTATE finds PG_DIAG_SQLSTATE in the PQresultErrorField section.
Thus, you can see that you can call PQresultErrorField(thePgResult, PG_DIAG_SQLSTATE) to get the SQLSTATE.
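For example, a minimal sketch (assuming an already-established PGconn *conn and ignoring result rows):

/* Run a statement and print the SQLSTATE if it fails. */
#include <stdio.h>
#include <libpq-fe.h>

static void run_and_report(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    ExecStatusType st = PQresultStatus(res);
    if (st != PGRES_TUPLES_OK && st != PGRES_COMMAND_OK) {
        /* PG_DIAG_SQLSTATE yields the five-character SQLSTATE code */
        const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
        fprintf(stderr, "SQLSTATE: %s\n", sqlstate ? sqlstate : "(none)");
        fprintf(stderr, "message:  %s", PQresultErrorMessage(res));
    }
    PQclear(res);
}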
This is just an addition that I cannot leave as a comment due to reputation requirements.
Note that you cannot portably get the SQLSTATE for errors that occur during PQconnectdb. In theory, you can read the last_sqlstate field of the internal pg_conn struct, which contains the correct value.
For example, if you try to connect with an invalid login/password, it will give you 28P01. For a wrong database name it will contain 3D000.
I wish a publicly available getter were defined for this field.
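For the connection case, the portable part is therefore limited to the human-readable message; a minimal sketch (the connection string values are placeholders):

/* Report a failed connection attempt; no portable SQLSTATE here. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("host=localhost dbname=wrongdb user=foo");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    }
    PQfinish(conn);
    return 0;
}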
You can check this one as well:
libpq: How to get the error code after a failed PGconn connection
Looking at the output of select * from pg_stat_activity;, I see a column called application_name, described here.
I see psql sets this value correctly (to psql...), but my application code (psycopg2/SQLAlchemy) leaves it blank.
I'd like to set this to something useful, like web.1, web.2, etc., so I could later correlate what I see in pg_stat_activity with what I see in my application logs.
I couldn't find a way to set this field using SQLAlchemy (or, if push comes to shove, even with raw SQL). I'm using PostgreSQL 9.1.7 on Heroku, if that matters.
Am I missing something obvious?
The answer to this is a combination of:
http://initd.org/psycopg/docs/module.html#psycopg2.connect
Any other connection parameter supported by the client library/server can be passed either in the connection string or as keywords. The PostgreSQL documentation contains the complete list of the supported parameters. Also note that the same parameters can be passed to the client library using environment variables.
where the variable we need is:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-APPLICATION-NAME
The application_name can be any string of less than NAMEDATALEN characters (64 characters in a standard build). It is typically set by an application upon connection to the server. The name will be displayed in the pg_stat_activity view and included in CSV log entries. It can also be included in regular log entries via the log_line_prefix parameter. Only printable ASCII characters may be used in the application_name value. Other characters will be replaced with question marks (?).
combined with:
http://docs.sqlalchemy.org/en/rel_0_8/core/engines.html#custom-dbapi-args
String-based arguments can be passed directly from the URL string as query arguments: (example...) create_engine() also takes an argument connect_args which is an additional dictionary that will be passed to connect(). This can be used when arguments of a type other than string are required, and SQLAlchemy’s database connector has no type conversion logic present for that parameter
from that we get:
e = create_engine("postgresql://scott:tiger@localhost/test?application_name=myapp")
or:
e = create_engine("postgresql://scott:tiger@localhost/test",
                  connect_args={"application_name": "myapp"})
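Since SQLAlchemy simply hands these arguments to psycopg2, the same parameter also works when connecting with psycopg2 directly (a sketch; the DSN values are placeholders):

import psycopg2

# application_name is passed straight through to libpq, either inside
# the DSN string or as a keyword argument.
conn = psycopg2.connect("dbname=test user=scott application_name=web.1")
# or, equivalently:
conn = psycopg2.connect(dbname="test", user="scott", application_name="web.1")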
If you're using the asyncpg driver, you should use
conn = await asyncpg.connect(server_settings={'application_name': 'foo'})
src - https://github.com/MagicStack/asyncpg/issues/204#issuecomment-333917251
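Whichever driver you use, a quick way to confirm that the setting took effect is to read it back from the session, since the same value is what pg_stat_activity reports for that backend (a sketch using an SQLAlchemy engine configured as above):

from sqlalchemy import create_engine, text

# Connect with application_name set, then read the session setting back.
e = create_engine("postgresql://scott:tiger@localhost/test",
                  connect_args={"application_name": "web.1"})
with e.connect() as conn:
    print(conn.execute(text("SELECT current_setting('application_name')")).scalar())
    # expected output: web.1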
I have an MS SQL Server 2008 database, from which I am fetching data using the Perl DBD::Sybase module. But there are some special characters in the DB, like the copyright symbol, trademark symbol, etc., which are not getting imported properly. Perl seems to change all of these special characters to a question mark. Is there a way to fix this?
I have tried specifying charset=utf8 in the connection string. The doc mentions a syb_enable_utf8 (bool) setting, but whenever I try that, I get an error:
Can't locate object method "syb_enable_utf8" via package "DBI::db"
One solution I found was this:
use Encode qw(encode_utf8);
Then, wherever you are writing data to a file or anywhere else, use Encode::encode_utf8($data);
where $data is the column/value which you have fetched from MSSQL.
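In other words, something along these lines (a sketch; the statement handle $sth and the column name are placeholders):

use Encode qw(encode_utf8);

# Encode each fetched value as UTF-8 bytes before writing it out.
open my $out, '>', 'export.txt' or die "cannot open export.txt: $!";
while (my $row = $sth->fetchrow_hashref) {          # $sth: an executed DBI statement handle
    print {$out} encode_utf8($row->{description}), "\n";   # 'description' is a placeholder column
}
close $out;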
I don't use DBD::Sybase, but a) I use a lot of other DBDs, and b) I am currently collecting information about Unicode support in DBDs. According to the pod, you need at least OpenClient 15.x when using syb_enable_utf8. Are you using 15.x or later? Perhaps syb_enable_utf8 is not defined if your client is older than 15.x, or perhaps you have too old a version of DBD::Sybase. Unfortunately, I cannot see from the Changes file when syb_enable_utf8 was added.
However, when you say "can't locate method", I think that is a clue, as syb_enable_utf8 is not a method; it is an attribute (listed under Sybase Specific Attributes in the pod). So you need to add it to your connect call or set it via the connection handle like this:
my $h = DBI->connect("dbi:Sybase:something","user","password", {syb_enable_utf8 => 1});
or
$h->{syb_enable_utf8} = 1;
You should also read the parts of the pod describing what happens when syb_enable_utf8 is set, as the documentation suggests it only applies to UNIVARCHAR, UNICHAR, and UNITEXT columns.
Lastly, you need to ensure you insert the data correctly in the first place. I'd guess that if it is not inserted from Perl with syb_enable_utf8 and charset=utf8, and your data is not made up of proper Unicode characters in Perl before you insert it, you'll get garbage back.
The comment Raze2dust made had nothing to do with your issue but is worth heeding if you are going to write the data retrieved from your database elsewhere. Just remember to decode any data input to your script and encode any data output.
I am using the following instructions to get some records from my database.
Create the needed models (from the params module):
$obj_paramtype_model = new Params_Model_DbTable_Paramtype();
$obj_param_model = new Params_Model_DbTable_Param();
Getting the available locales from the database
// This returns a Zend_Db_Table_Row_Abstract class object
$obj_paramtype = $obj_paramtype_model->getParamtypeByValue('available_locales');
// This select adds conditions to the next statement. It is executed from the Params_Model_DbTable_Param instance, which depends on the Params_Model_DbTable_Paramtype class (the reference map and dependentTables arrays are fine in both classes)
$obj_select = $this->select()->where('deleted_at IS NULL')->order('name');
// Execute the next query, applying the select restrictions. This returns a Zend_Db_Table_Rowset_Abstract object. This means "find Params by Paramtype"
$obj_params_rowset = $obj_paramtype->findDependentRowset('Params_Model_DbTable_Param', 'Paramtype', $obj_select);
// Here the firebug log displays the queries....
Zend_Registry::get('log')->debug($obj_params_rowset);
I have a profiler for all my DB executions from Zend. At this point, the log and profiler objects (which include Firebug writers) show the executed SQL queries, and the last line displays the resulting Zend_Db_Table_Rowset_Abstract object. If I execute the SQL queries in a MySQL client, the results are as expected, but the Zend Firebug log writer displays the column values that contain Latin characters (ñ) as NULL.
In other words, the external SQL client shows
es_CO | Español de Colombia and en_US | English of United States, but the query results from Zend display (and return) es_CO | null and en_US | English of United States.
I've deleted the ñ character from Español de Colombia, and the query results are then just fine in my Zend Firebug log screen and in the final Zend Form element.
The MySQL database, tables and columns use UTF-8 with the utf8_unicode_ci collation. All my Zend Framework pages use the UTF-8 charset. I'm using XAMPP 1.7.1 (PHP 5.2.9, Apache on port 90, and MySQL 5.1.33-community) running on Windows 7 Ultimate, with Zend Framework 1.10.1.
I'm sorry if this is a lot of information, but I don't really know why this could happen, so I tried to provide as much related information as I could to help find an answer.
How do you make the connection to the database, and did you define the charset in that connection? That was something I overlooked, and since I can't find it in your question, that might be the problem.
So, since I use the .ini file to set up the database, adding
resources.db.params.charset = utf8
solved all my UTF8 query issues.
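For context, that line sits in the standard database resource block of application.ini (a sketch; the adapter and credential values are placeholders):

; application/configs/application.ini
resources.db.adapter          = PDO_MYSQL
resources.db.params.host      = localhost
resources.db.params.username  = myuser
resources.db.params.password  = secret
resources.db.params.dbname    = mydb
resources.db.params.charset   = utf8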
Hope this helps.