The project I am working on now is upgrading a database from MySQL to PostgreSQL in a Zend Framework application. I migrated the database to PostgreSQL with the "ESF Database Migration Toolkit". However, field names like "Emp_FirstName" and "Emp_LastName" were stored in PostgreSQL as "emp_firstname" and "emp_lastname", which caused errors in my code. When I renamed the field in PostgreSQL back to Emp_FirstName, queries started failing with this error:
********** Error **********
ERROR: column "Emp_FirstName" does not exist
SQL state: 42703
Character: 8
Is it possible to keep the field names exactly as they were in MySQL?
The migration tool isn't double-quoting identifiers, so PostgreSQL is case-folding them to lower case. Your code, meanwhile, must be quoting the identifiers, so they're being compared case-sensitively. PostgreSQL preserves case only in quoted identifiers and folds unquoted identifiers to lower case, whereas MySQL compares column names case-insensitively (its table names are case-insensitive on Windows and macOS and case-sensitive on most *nix filesystems).
See the PostgreSQL manual section on identifiers and keywords for details on PostgreSQL's behaviour. You should probably read that anyway, so you understand how string quoting works among other things.
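A minimal psql illustration of the case-folding behaviour (table names here are invented for the example):

```sql
-- Unquoted identifiers are folded to lower case:
CREATE TABLE employees (Emp_FirstName text);
SELECT emp_firstname FROM employees;        -- works
SELECT "Emp_FirstName" FROM employees;      -- ERROR: column "Emp_FirstName" does not exist

-- Quoted identifiers keep their case, but must then be quoted in every query:
CREATE TABLE "Employees" ("Emp_FirstName" text);
SELECT "Emp_FirstName" FROM "Employees";    -- works
```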
You need to pick one of these options:
Change your code not to quote identifiers;
Change your migration tool to quote identifiers when creating the schema;
Hand migrate the schema instead of using a migration tool;
Fix the quoting of identifiers in the tool-generated SQL by hand; or
Lower-case all identifiers so it doesn't matter for Pg.
The last option won't help you if you later add Oracle support and discover that Oracle upper-cases all identifiers, so I'd recommend one of the first four options. A quick 30-second Google search didn't turn up a way to make the migration tool quote identifiers, but I didn't spend much time on it. I'd look for an option to control quoting in the migration tool first.
PostgreSQL does not have a configuration option to always treat identifiers as quoted or to use case-insensitive identifier comparisons.
This is far from the only incompatibility you will encounter, so be prepared to change queries and application code. In some cases you might even need one query for MySQL and another for PostgreSQL if you plan to continue to support MySQL.
If you weren't running MySQL with sql_mode = ANSI and STRICT mode, the port will be considerably harder than if you were, since both options bring MySQL closer to SQL-standard behaviour.
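For reference, those MySQL modes can be enabled per session (or in my.cnf) like so:

```sql
-- MySQL: bring the dialect closer to the SQL standard
SET sql_mode = 'ANSI,STRICT_TRANS_TABLES';
```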
Related
PostgreSQL has excellent support for evaluating JSONPath expressions against JSON data.
For example, this query returns true because the value of the nested field is indeed "foo".
select '{"header": {"nested": "foo"}}'::jsonb @? '$.header ? (@.nested == "foo")'
Notably this query does not reference any schemas or tables. Ideally, I would like to use this functionality of PostgreSQL without creating or connecting to a full database instance. Is it possible to run PostgreSQL in such a way that it doesn't have schemas or tables, but is still able to evaluate "standalone" queries?
Some other context on the project, we need to evaluate JSONPath expressions against JSON data in both a Postgres database and Python application. Unfortunately, Python does not have any JSONPath libraries that support enough of the spec to be useful to us.
Ideally, I would like to use this functionality of PostgreSQL without creating or connecting to a full database instance.
Well, it is open source. You could always pull out the source code for the functionality you want and adapt it to compile on its own, but that seems like a large and annoying undertaking, and I probably wouldn't do it. Short of that, no.
Why do you need this? Are you worried about scalability or ease of installation or performance or what? If you are already using PostgreSQL anyway, firing up a dummy connection to just fire some queries at the JSONB engine doesn't seem too hard.
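If the operator syntax is awkward to generate, the same check is also available as a plain function call (PostgreSQL 12+), still with no tables involved, so a dummy connection really is all you need:

```sql
SELECT jsonb_path_exists(
    '{"header": {"nested": "foo"}}'::jsonb,
    '$.header ? (@.nested == "foo")'
);  -- returns true
```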
Working with PostgreSQL, DataGrip converts all names of created tables/columns to lowercase. How do I disable this and keep the original formatting? I prefer PascalCase.
It happens even if I run a SQL command manually in DataGrip console:
create table FooBar();
so the table foobar is created in the db. I searched the web and found nothing. I suppose it is not a PostgreSQL problem, because pgAdmin3 doesn't change anything when doing the same thing.
My environment:
Windows 7 Pro
DataGrip 2016.3.4
PostgreSQL 9.4
This is a Postgres feature; DataGrip has nothing to do with it.
If you want "PascalCaseIdentifiers" you have to use double quotes.
Identifiers without quotes are case-insensitive. They are automatically converted to lower case.
Read about details in the documentation.
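For example, in any Postgres console (DataGrip included):

```sql
create table FooBar();      -- stored as foobar
create table "FooBar"();    -- stored as FooBar; both tables can coexist
select * from FooBar;       -- queries foobar
select * from "FooBar";     -- queries FooBar; quotes now required everywhere
```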
pgAdmin3 doesn't change anything when it's doing the same things.
Quite the opposite: pgAdmin3 adds double quotes when, e.g., an identifier contains capital letters (see the SQL tab, the last tab in the New table... dialog).
In my (and not only my) honest opinion, using quoted identifiers is in general a very bad idea. They simply create more problems than they are worth.
I have read several web pages, all of which discuss upgrading from a pre-9.0 Postgres release to 9.1 or later.
www.peterbe.com/plog/postgres-collation-citext-9.1
servoytipsfromsovan.wordpress.com/2014/08/20/migrating-postgres-sql-from-v9-0-to-latest-version/
nandovieira.com/using-insensitive-case-columns-in-postgresql-with-citext
stackoverflow.com/questions/15981197/postgresql-error-type-citext-does-not-exist
databasecm.blogspot.sg/2015/03/where-do-you-find-citext-module-in.html
dba.stackexchange.com/questions/17609/how-do-i-resolve-postgresql-error-no-collation-was-derived-for-column-foo-w
In my case, I upgraded from 9.4 to 9.5. The issue is that some databases throw the error stated in the title whenever I run a SELECT query, yet some don't. I set up a separate test server with 9.4, and the query runs fine there.
I do not need case-insensitive comparison. I did not load citext in Postgres 9.4 either; in fact, all my string comparisons should be case-sensitive unless I use ILIKE. I have also loaded citext individually into ALL the databases on the server.
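For reference, loading citext into a database is done with the standard extension mechanism (it must be run in every database that needs the type, not once per server):

```sql
-- citext is a contrib extension, installed per database:
CREATE EXTENSION IF NOT EXISTS citext;
```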
What information do I need to provide so we can find out why some databases work and some don't? And how do I solve the problem I encountered?
I'm trying to understand PostgreSQL and Npgsql in regards to "Full Text Search". Is there something in the Npgsql project that helps doing those searches on a database?
I found the NpgsqlTsVector.cs/NpgsqlTsQuery.cs classes in the Npgsql source code project. Can they be used for "Full Text Search", and, if so, how?
Yes, since 3.0.0 Npgsql has special support for PostgreSQL's full text search types (tsvector and tsquery).
Make sure to read the PostgreSQL docs and understand the two types and how they work.
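As a quick refresher on the SQL side (an illustrative query, not Npgsql-specific): tsvector normalises a document into lexemes, tsquery expresses the search, and @@ tests whether the query matches the vector.

```sql
SELECT to_tsvector('english', 'a fat cat sat on a mat')
       @@ to_tsquery('english', 'cat & mat');  -- true
```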
Npgsql's support for these types means that it allows you to seamlessly send and receive tsvector and tsquery from PostgreSQL. In other words, you can create an instance of NpgsqlTsVector, populate it with the lexemes you want, and then set it as a parameter in an NpgsqlCommand just like any other parameter type (the same goes for reading a tsvector or tsquery).
For more generic help on using Npgsql to interact with PostgreSQL you can read the Npgsql docs.
This is a hybrid question & public service announcement. The basic question is whether there are convenient and/or efficient workarounds to the limitations listed below. I spent half the morning discovering that one cannot just transplant SQL code from Access to Matlab. I've boiled it down to 3 points so far.
Can't use double quotes in SQL statements, because they collide with Matlab's string delimiter. The Matlab code for the SQL strings can become quite complicated, especially if the SQL already uses repeated single quotes to represent a single quote within a string constant.
Must always specify a source table from which to query. "SELECT #2015-07-28#" won't work; one basically needs to create a 1-row dummy table.
Must always select at least one named field in the table being queried. An asterisk does not seem to suffice.
The above limitations do not exist when submitting SQL code using the Access Query Designer (either in SQL 92 mode or not), nor do these limitations exist when submitting SQL code using VBA via CurrentProject.Connection.Execute.
Hopefully this saves someone else some time in learning about these differences, and if anyone has found a workaround, that would be appreciated. Note that the above is in the context of the JDBC/ODBC bridge (the 3rd of 3 configurations illustrated in the drivers documentation). The Database Toolbox documentation for connecting to an Access file directly in code (rather than setting up a data source through the GUI) only describes a code pattern that uses the JDBC/ODBC bridge; this is described in the example "Connect to Microsoft Access Using a File DSN" on the "Connect to database" page. I'd like to stick with this approach because I want to specify the source *.accdb file directly without jumping through the GUI hoops of setting up a data source.
I've posted this to:
Stack overflow
Usenet
While I have not connected Matlab to Access, I have connected Access databases to PHP, Python, SAS, R, even Excel, without any issues with quotes or asterisks. As for the required source table in queries, that is simply the Jet/ACE SQL dialect, as each dialect has its own particular rules; your query above would simply return a one-row, one-column result containing that date. SQL Server does not require a source table, but Oracle does (FROM dual). Access dates are in MM/DD/YYYY format and require # delimiters in conditional statements; MySQL uses dates in YYYY-MM-DD format with single quotes.
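To illustrate the date-literal difference between the two dialects (the Orders table and OrderDate column are made up for the example):

```sql
-- Access (Jet/ACE): # delimiters, MM/DD/YYYY
SELECT * FROM Orders WHERE OrderDate = #07/28/2015#;

-- MySQL / ANSI SQL: single quotes, YYYY-MM-DD
SELECT * FROM Orders WHERE OrderDate = '2015-07-28';
```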
Are you using the Windows-installed Jet/ACE ODBC drivers? Note that these drivers are software external to any application (not part of Access) and can be used by any client interfacing with any data source. As an aside, many people are not aware that Access' backend database engine, Jet/ACE, is actually a Windows technology and not restricted to Access; PC users without Access installed can still use the engine. Hence connection strings are standard across ODBC calls, and you can apply the same principles to Matlab database connections.
With that said, my suggestions/workarounds:
properly learn the SQL dialect for connecting database and check if query runs; even optimize or re-write as needed
try escaping punctuation that is awkward in Matlab strings, or build it from ASCII codes with Matlab's char() function: char(34) for a double quote; char(39) for a single quote; char(42) for an asterisk.
if SELECT statements are limited in Matlab strings, create a temp table from the query (run a make-table query, SELECT * INTO newtable FROM query, outside of Matlab), then connect to this new table
use an intermediary coding language (VBA, Python, C#), launched by a user trigger or the command line, to connect to the database and export the needed dataset, then automate Matlab to import it
export query to flatfile format (csv, txt, etc.) for Matlab import
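Sketches of the dummy-table and make-table suggestions above in Jet/ACE SQL (all object names here are hypothetical):

```sql
-- One-row dummy table to satisfy the mandatory FROM clause:
SELECT #2015-07-28# AS TheDate FROM tblOneRow;

-- Make-table query, run inside Access, to materialise a query for Matlab:
SELECT * INTO tblExportForMatlab FROM qrySource;
```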