Yes, I know there is a Stack Overflow question with the exact same name: "Queries in pg_stat_activity are truncated?"
I set this value to 7168, rebooted the server, and verified it with show track_activity_query_size. It actually shortened the amount of text shown; it's now truncating at 256 characters.
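For context, track_activity_query_size is a server setting in postgresql.conf that only takes effect after a restart; spelled out, the verification step is:

-- Should report 7168 after the restart; if it does, the server-side
-- setting is fine and any truncation is happening elsewhere.
SHOW track_activity_query_size;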
What am I missing?
Edit: My database is an AWS RDS instance (db.t2.small) running PostgreSQL 9.3.6
I was using pgAdmin. The query was actually returning a larger value, but pgAdmin was truncating it when I copied/pasted it out of the results column. The solution (for anybody else having this problem) is to use Query -> Execute to file. This writes the full results, which you can then inspect in the CSV file it produces.
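Before blaming the server, a check like this distinguishes server-side from client-side truncation:

-- If query_len is larger than the text you see in the grid, the
-- truncation is happening in the client, not in pg_stat_activity.
SELECT pid, length(query) AS query_len, query
FROM pg_stat_activity;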
Background of the issue (could be irrelevant, but it's the only connection to these issues that makes sense to me)
In our production environment, disk space ran out. (We do have monitoring and notifications for this, but no one read them - the classic story.)
Anyway, after fixing the issue, PostgreSQL (PostgreSQL 9.4.17 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit) has shown a couple of weird behaviors.
1. Unique indexes
I have a couple of (multi-column) unique indexes specified for the database, but they do not appear to be functioning: I can find duplicate rows in the database.
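A sketch of how one might confirm this (the table, columns, and index name here are hypothetical, since the question doesn't give them):

-- List duplicates that the unique index should have prevented.
SELECT col_a, col_b, count(*)
FROM   some_table
GROUP  BY col_a, col_b
HAVING count(*) > 1;

-- Rebuilding the index will fail with "could not create unique index"
-- until the duplicate rows are deleted.
REINDEX INDEX some_table_col_a_col_b_key;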
2. Sorting based on date
We have one table which basically just logs some JSON data. It has three columns: id, json, and insertedAt DEFAULT NOW(). If I do a simple query sorting on the insertedAt column, the sorting doesn't work around the time of the disk overflow. All of the data is valid and readable, but the order is wrong.
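One way to pin down the affected rows, assuming id comes from a sequence so insertion order is roughly monotonic (table name hypothetical):

-- Flag rows whose timestamp goes backwards relative to insertion order.
SELECT id, insertedAt
FROM (
    SELECT id, insertedAt,
           lag(insertedAt) OVER (ORDER BY id) AS prev_ts
    FROM   log_table
) t
WHERE insertedAt < prev_ts;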
3. DB dumps / backups have some corruption.
Again, when I was browsing this logging data and tried to restore a backup to my local machine for closer inspection, it gave an error around some random row. When I examined the SQL file with a text editor, I found the data was otherwise valid except that it was missing some semicolons on some rows. I'll shortly try a newer backup to see if it still has the same error, or whether it was a random issue with the one backup I tried playing with.
I've tried the basic things: restarting the machine and the PG process.
So I have a decent-sized database (roughly 8 million rows) that I need to pull data from. It needs to be output as a CSV that can be opened in Excel. I've tried virtually every solution I found, to no avail.
\copy - puts all values in a single column, separated by ','.
copy ... to ... with csv header - same result as above.
copy ... into outfile - refuses to work; it claims there's something wrong with my path, even though I used the same path as before (a working form is sketched below).
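For reference, a psql form that usually produces a genuine comma-separated file (the table and path here are placeholders):

\copy (SELECT * FROM big_table) TO '/tmp/output.csv' WITH CSV HEADER

Because \copy runs client-side, the path is resolved on the machine running psql, which sidesteps the server-side path errors that COPY ... TO raises.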
I'm not the most experienced with SQL, to say the least, but I'll do my best to provide any information necessary.
Have you tried mysqldump? I have had a similar experience backing up / uploading 11 million rows, and succeeded with mysqldump. Maybe you can search for a mysqldump equivalent for PostgreSQL.
I can run a query on views in SQL Developer 3.1.07 and it returns the results I expect. My co-worker, who is in Mexico using the same user, can connect to the same database, sees the same views, runs the same query, and gets no results, even from a simple "select * from VIEWNAME" query. The column headers display, but no data. If he selects a view from the connections window and opens the Data tab, no data displays. This user does not have access to any tables on this specific database.
I'm not certain he is running the same version of SQL Developer, but it's not far off. I have checked as many settings in SQL Developer as I think could be the issue, but see no significant difference between his settings and mine.
Connecting to another database allows him to access data in both tables and views.
Any thoughts on what we're missing?
I know I'm a few years late, but check whether the underlying view filters on something that is based on localisation! I just had this issue, and it turned out to be a statement like this that was causing it:
SELECT *
FROM sometable
WHERE language = userenv('LANG')
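If that is the cause, comparing what the function returns on each machine makes it obvious; a minimal check:

-- Returns the session's language abbreviation; if this differs between
-- the two machines, the view's WHERE clause filters everything out.
SELECT userenv('LANG') FROM dual;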
Copy the JDBC folder from your Oracle home over to your co-worker's machine. We had the same issue, and replacing the JDBC folder fixed it.
I faced the same issue, which got resolved when I checked the 'skip NLS settings' box. My query was returning zero results earlier, but when I ran the same query again, I could see the table rows.
Since your co-worker is in a different country, most probably the NLS settings (related to the language) are the culprit here.
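A quick way to see what those settings actually are in each session, for comparison between the two machines:

-- A differing NLS_LANGUAGE or NLS_TERRITORY here is a strong hint
-- that localisation is what changes the query's results.
SELECT parameter, value FROM nls_session_parameters;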
I was facing the same issue. It turned out that an update to the database from my SQL Developer had not been committed to the main database; that's why I was getting results for the query in my SQL Developer, but from AWS it was returning empty results. When I chatted with the DBA, he found stale data. After I committed from my SQL Developer, the database was actually updated.
In a recent update of the run-time engine and SQL Server version (2008 R2 to 2012), I have begun experiencing an issue where largish queries through ODBC come back with blank fields where there should not be any. The same query run directly in SQL Server worked fine.
I started to delete fields from the query and found that it was the five TEXT datatype fields in the query that were giving me trouble. The first TEXT field listed in the SELECT statement would show up fine, and subsequent TEXT fields would not show up. If I deleted all but two fields from the query, the remaining two would come through.
Since the problem is clearly occurring in the ODBC layer, my first thought was to switch my Windows 8 ODBC driver from "SQL Server Native Client 11.0" to "SQL Server". This did not help.
Since TEXT is on the way out of support, I thought it might be the culprit. I converted all the TEXT fields to NVARCHAR(MAX) (I am also looking for unicode support). This did not fix anything. Next I tried converting the out-of-page datatypes to an in-page format NVARCHAR(4000). This fixed the problem, but it does not work across the board, because I have some fields that are longer than 4000 characters.
My questions:
What is the limitation of ODBC related to out-of-page data that is causing this issue? My understanding is that nvarchar(max) data is only stored out-of-page if it is sufficiently long (am I wrong about this?). In the example table I'm working with, none of the text fields is longer than 255 characters, yet the problem still occurs.
I could probably get by if I could figure out which fields need the extra length and leave only those fields in an out-of-page representation. However, the size of the application makes figuring out the exact (and possible) use of every field prohibitively time-consuming. I hope I don't have to go this route.
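One workaround that may be worth trying before converting any column types (table and column names below are hypothetical): some ODBC drivers retrieve long, out-of-row columns by streaming them with SQLGetData, which only works for columns that come after all the bound, fixed-width columns, so listing every TEXT/NVARCHAR(MAX) column at the end of the SELECT can let all of them come through.

-- Fixed-width columns first, long (out-of-row) columns last.
SELECT id,
       created_at,
       status,
       notes,        -- NVARCHAR(MAX)
       description   -- NVARCHAR(MAX)
FROM   dbo.example_table;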
What the title says -
Msg 9002, Level 17, State 4, Line 1
The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
The query in question first pulls some rows from a database on the linked server (matching strings in the server I'm querying from), stores them in a table, then uses this table to pull more matching records from the linked server. The error is what I get on the second part.
The basic question is: is there something else hiding in this error message? Which tempdb is it, my tempdb or the linked server's? I don't think mine can be the problem, as increasing its size doesn't help, the recovery model is simple, autogrowth is on, etc.
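The error message itself names the first thing to check, and it can be run against both sides of the linked-server pair (LinkedServerName is a placeholder):

-- Locally:
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'tempdb';
-- On the remote side, via the linked server:
SELECT name, log_reuse_wait_desc FROM [LinkedServerName].master.sys.databases WHERE name = 'tempdb';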
You first need to check your own SQL Server's tempdb: does the drive that tempdb and its log sit on have plenty of free disk space? They might be on two different drives. I would expect such an error to write a message to the SQL Server error log - have you checked there as well, around the time of the problem? Then do the same on the remote server, if you have access.
Whether it's tempDB or a user/application database, just because the recovery model is simple doesn't mean that the transaction log won't grow - and take all the disk space! It does make it less likely, but long transactions can cause it to "blow".
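A quick way to watch the log actually filling up, on whichever server the earlier query implicates:

-- Reports log size and percent-in-use for every database on the instance.
DBCC SQLPERF(LOGSPACE);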