PHP 5.6 ODBC returns special characters for string values from the database?

I'm accessing an InterSystems Caché database via unixODBC and displaying the details on a website (PHP). Recently I upgraded PHP to version 5.6. Since then I get non-displayable characters (�) for string values only; number and date fields display correctly. The website runs on Apache 2.4 on a Debian 8 machine, and the InterSystems Caché database runs on a Debian 6 machine.
Note: the characters change randomly each time.
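For reference, a minimal way to see what the driver actually hands back is to dump the raw bytes of a fetched string; a diagnostic sketch, where the DSN, credentials and query are placeholders:

<?php
// Diagnostic sketch: inspect the raw bytes PHP receives from the ODBC layer.
// "cachedsn", the credentials and the query are placeholders.
$conn = odbc_connect("cachedsn", "user", "pass");
$res  = odbc_exec($conn, "SELECT Name FROM Sample.Person");
while (odbc_fetch_row($res)) {
    $value = odbc_result($res, 1);
    // bin2hex() shows whether the driver returns UTF-8, wide characters
    // (interleaved 00 bytes) or some 8-bit encoding.
    echo bin2hex($value), "\n";
}
odbc_close($conn);

If the dump shows valid bytes that merely render as � in the browser, the problem is the page encoding rather than the ODBC layer; if the bytes themselves changed with the PHP upgrade, it points at the driver/PHP conversion.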

Related

PostgreSQL Bytea column as varchar results

I have a client for whom we installed a PostgreSQL server on his machine (PostgreSQL 13 on port 5432).
There was already a PostgreSQL server installed (8.3, on port 15432). No problems there.
Our application has a few bytea columns where we store some encrypted data. Strange things happen when I run a query against such a column:
exactly the same code gives different results on different computers.
For example: "Select ByteAColumn from RandomTable"
Results on my computer:
"E712F7671E929600C926003C61C4C696C27E89C8D836F85898737799DFAB1CC4"
Result on server where the Postgresql is installed:
"\x45373132463736373145393239363030433932363030334336314334433639364332374538394338443833364638353839383733373739394446414231434334"
Result on other computer in the network of the client:
"E712F7671E929600C926003C61C4C696C27E89C8D836F85898737799DFAB1CC4"
Result on "Select ByteAColumn::varchar from RandomTable" in PGAdmin 4:
\x45373132463736373145393239363030433932363030334336314334433639364332374538394338443833364638353839383733373739394446414231434334
My application expects the "E712F7671E929600C926003C61C4C696C27E89C8D836F85898737799DFAB1CC4" form and throws an error on the '\x4...' result.
I can avoid this by using "Select encode(ByteAColumn,'escape') from RandomTable", which gives the right results, but modifying our entire application is near impossible.
In other words: the application runs fine on my computer and on the client's computers / terminal server, but not on the server where the PostgreSQL database is hosted.
I've been working with this application for over a year now and have done many installations of it without ever seeing this issue. I've tested the query on other installations where everything is fine, and there I also get the '\x45...' result, yet the query issued from my application works fine.
What causes this?
Is it the PostgreSQL 8.3 server interfering somehow? I've been testing and searching for a whole day now and I can't seem to find the answer.
Help would be greatly appreciated.
Thanks!
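One thing worth checking: PostgreSQL 9.0 changed the default of the server-side bytea_output setting from 'escape' to 'hex', which matches the two representations above (the 8.3 instance predates that change, while the new server is 13). A quick session-level test against the PostgreSQL 13 server, using the names from the question:

-- Compare the two output formats for the same value.
SHOW bytea_output;                    -- 'hex' by default on PostgreSQL 9.0+
SELECT ByteAColumn FROM RandomTable;  -- '\x4537...' under 'hex'
SET bytea_output = 'escape';          -- per-session override
SELECT ByteAColumn FROM RandomTable;  -- 'E712F767...' again

If that setting is the cause, it is worth comparing the client library/driver versions on each machine, since newer drivers understand the hex form and older ones pass it through untranslated.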

Connect CentOS PostgreSQL database to Apache Tomcat

I've installed both PostgreSQL and Tomcat on my CentOS 7 VM, and I've populated a table in my Postgres database (a single column with entries 1-1000). My goal is to connect the database to Tomcat and to display a webpage that randomly picks a number from the database and displays it. I'm not sure where to start. I also have pgAdmin III installed to use as a GUI instead of the command line for Postgres.
You need to set up a Datasource for PostgreSQL in Tomcat. The PostgreSQL JDBC Driver has instructions on how to do this.
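A minimal sketch of the moving parts (the resource name, database name, credentials and table are placeholders, not from the question): put the PostgreSQL JDBC jar in Tomcat's lib directory, declare a pooled DataSource in conf/context.xml, then look it up over JNDI from a servlet.

<!-- conf/context.xml: pool definition; name, url and credentials are placeholders -->
<Resource name="jdbc/numbers" auth="Container" type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/numbersdb"
          username="tomcat" password="secret"/>

// Inside a servlet's doGet (imports: java.sql.*, javax.sql.DataSource,
// javax.naming.InitialContext; exception handling omitted for brevity).
DataSource ds = (DataSource) new InitialContext()
        .lookup("java:/comp/env/jdbc/numbers");
try (Connection con = ds.getConnection();
     Statement st = con.createStatement();
     ResultSet rs = st.executeQuery(
         "SELECT num FROM numbers ORDER BY random() LIMIT 1")) {
    if (rs.next()) {
        response.getWriter().println(rs.getInt("num")); // the randomly picked number
    }
}

ORDER BY random() LIMIT 1 pushes the random pick into PostgreSQL itself, which is fine for a 1000-row table.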

Oracle 9i client connecting to Oracle 12c server

Can an Oracle 9.2 client connect to an Oracle 12c server?
We have an old server running 11g, and so far there is no problem connecting from the client.
Thank you.
That isn't a supported combination, no. As it says in the documentation, you can see the supported client/server combinations at My Oracle Support note 207303.1.
The 9i client used to be supported with an 11g server, but is not with a 12c server. There is a specific note about it:
Attempting to connect from 9.2 to 12.1 will fail with an "ORA-28040: No matching authentication protocol" error.
We are currently using the 9i 32-bit client to connect to an Oracle 12c 64-bit server in test, and so far it works, but with caution.
For example, whenever you query certain data types that are not available in 9i but are available in 12c, the system may crash. We therefore had to build views that convert, for example, the TIMESTAMP data type to DATE; then it works fine. Even the use of indexes seems to work fine.
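A wrapper view of that kind might look like this (table and column names are made up for illustration):

-- On the 12c side: shield the 9i client from the TIMESTAMP type.
-- ORDERS and ORDER_TS are illustrative names.
CREATE OR REPLACE VIEW orders_for_9i AS
SELECT order_id,
       CAST(order_ts AS DATE) AS order_dt  -- fractional seconds are dropped
FROM   orders;

The old client then only ever sees DATE, a type it has always understood.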

Elasticsearch shows umlauts as "??"

Setup:
Ubuntu 12.04 Server installed via VMWare quick install
PostgreSQL 9.1
ElasticSearch 0.90
Mono 3.2.1
Rails 4
Nginx 1.4.2 + Passenger 4.0.16
I have a C# program that, on start, writes a new ElasticSearch index and points the alias used by the Rails applications to it; the program then keeps running and watches a Redis instance for things to update.
Another C# program scrapes data from web pages; once scraped, the pages are stored in PostgreSQL and the index writer above is notified via Redis. The pages have varying encodings and are converted to UTF-8.
The bug first appeared when I made a mistake and encoded data that was already UTF-8 as UTF-8 again.
Investigation
Now, I initially thought that I obviously had some data corruption going on, but the weird thing is: the umlauts are only corrupted when I start the indexing Mono process from Rails via nohup. If I kill that process and start it manually from the command line, everything works perfectly fine.
When I do a backup/restore of the database it works again from the web interface, but once the server is rebooted the umlauts are again replaced with ?? whenever the Mono process is started from the web interface.
The first thing I did was purge the affected rows from the database and scrape the data again (without encoding it twice); that didn't help. Since the error only appears when running non-interactively via nohup from the Rails application, I assumed it was down to the locale settings, so I changed them in both /etc/default/locale and /etc/environment to en_US.UTF-8 and en_US:en, but that didn't help either.
I really have no idea what else I can do or what exactly causes this error; any help would be appreciated.
Edit: I forgot to clarify the most important part: when umlauts are replaced with ??, ALL umlauts in every single document in the index are replaced.
Put this in the script that you use to start your process:
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
The reason your script only picks up UTF-8 when you start things manually is that these settings are not system-wide. I've run into this before with JRuby and init.d scripts, and the solution is to not rely on the defaults for this.
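If the process is launched directly via nohup rather than through a script you control, the same effect can be had inline (the binary path here is a placeholder):

# Prefix the environment onto the one-off invocation.
env LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 nohup mono /opt/indexer/indexer.exe &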

Retrieve multilingual data (Chinese, Japanese...) from SQL Server 2008 R2 and display in Java webapp

I have Chinese data in my DB and I need to display it in my Java web app, but I am getting ??? as output.
Database used: SQL Server 2008 R2 (the nvarchar datatype is used in order to support Unicode data, the DB was created with the default collation, SQL_Latin1_General_CP1_CI_AS, and there is no problem storing the data in the DB).
Development environment: Windows 7.
A treegrid is used to display the data.
I have already:
1. Set charset and pageEncoding to UTF-8 in my HTML, JSP and Java pages.
2. Updated my JDBC connection with useUnicode=true;characterEncoding=UTF-8;.
3. Configured Tomcat's server.xml connector to use UTF-8 (URIEncoding="UTF-8").
I also tried setting the collation name to Latin1_General_CI_AI, but it still doesn't work.
Latin1_General_CI_AI --> There's part of your problem. Latin1 has nothing to do with Unicode. Getting "???" means there's an encoding problem somewhere in your toolchain, where your UTF-8 data gets scrambled into another encoding.
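On the Java side, the usual way to keep nvarchar data intact end to end is to go through parameterised statements and the JDBC 4 N-accessors, and to set the response encoding explicitly; a sketch, where the connection string and table/column names are placeholders:

import java.sql.*;

public class ReadNvarchar {
    public static void main(String[] args) throws SQLException {
        // Connection details and names are placeholders, not from the question.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=mydb;user=app;password=secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT title FROM docs WHERE id = ?")) {
            ps.setInt(1, 1);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // getNString is the JDBC 4 accessor for nvarchar/Unicode data
                    System.out.println(rs.getNString("title"));
                }
            }
        }
    }
}

When writing, the same applies in reverse: pass values through setNString()/parameters rather than concatenated literals, since a plain '...' literal is converted through the collation's code page, which is exactly where Chinese characters turn into '?'.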