Can we decrease CLOB/BLOB size after increasing it? - DB2

I am getting an SQL0302N error when updating a document, so I got a suggestion to increase the BLOB/CLOB size. I want to see what the current size is in my DB2 table. How can I do that?
http://www.ibm.com/support/knowledgecenter/SSAHQR_8.4.3/com.ibm.administeringcm.doc/trs20002.htm
This link explains the steps I am trying to follow. I plan to run the procedure described in the link, but before that I want to check the current size.
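In case it helps, here is a minimal sketch of one way to check the defined size, assuming DB2 for Linux/UNIX/Windows and placeholder names MYSCHEMA and MYTABLE for your own schema and table:

-- list the defined maximum length of each BLOB/CLOB column on the table
SELECT colname, typename, length
FROM syscat.columns
WHERE tabschema = 'MYSCHEMA'
  AND tabname   = 'MYTABLE'
  AND typename IN ('BLOB', 'CLOB');

The LENGTH value is the maximum size each LOB column was defined with.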

Related

Disappearance of dataset rows when a recipe is built

I uploaded the dataset into Google Cloud AI storage. Next, I opened the flow in Dataprep and added the dataset to it. When I created the first recipe (without any steps yet), the dataset had approximately half of its original rows, that is, 36 234 instead of 62 948.
I would like to know what could be causing this problem. Is there some missing configuration?
Thank you very much in advance
Here are a couple thoughts . . .
Data Sampling
Keep in mind that what's shown in the Dataprep editor is typically a sample of the data, not the full data (unless it's very small). If the full file was small enough to load, you should see the "Full Data" label where the sample size is normally shown.
In other cases, what you're actually looking at is a sample, and that will also be indicated there.
If you haven't reviewed the documentation already, it's very beneficial to have an idea of how Dataprep's sampling works:
https://cloud.google.com/dataprep/docs/html/Overview-of-Sampling_90112099
Compressed Sources
Another issue I've noticed occasionally is when loading compressed CSVs. In this case, I've had the interface tell me that I'm looking at the "Full Data", yet the number of rows shown is incorrect. However, any time this has happened, the job has still processed the full number of rows.

Can you increase the maximum automatic open size?

I'm using SQL Developer to export just the DDL of an Oracle database with 3 schemas in it.
The export ran for approximately 12 hours, then a message popped up stating:
File export.sql was not opened because it exceeds the maximum automatic open size
I've got 2 questions really
Has the export finished at this point?
If it hasn't, is there a way to increase the maximum automatic open size?
I haven't used SQL Developer to export DDL before, so I'm not sure if this is just the tool trying to open the file after a successful export.
Any tips or help greatly appreciated.
Yes, the file is there on your disk, where you told us to put it.
There's no way to increase this limit.
You can open the file if you want to, but I'd caution against this if the file is very large. If you want to execute it, use the @file.sql notation.
If you want to browse it, use tail or head.
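For example, in a SQL Developer worksheet you could run the generated file without opening it in the editor; the path below is only a placeholder for wherever you saved the export:

-- run the exported script directly instead of opening it
@/path/to/export.sql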

Full export doesn't function

I have an ag-grid with hundreds of rows, but since the visible area of the screen is limited, we show only about 20 rows at a time. As you scroll down, the remaining records are loaded asynchronously.
According to the ag-grid documentation, "exportDataAsCsv(params): Does the full export." See https://www.ag-grid.com/javascript-grid-export/
But the problem is that it doesn't export all of the records unless they have been fully loaded into the grid.
Is there any way to download all the records without scrolling to the end?
I expect all rows to be exported without having to reach the end of the grid.
Thanks for reading

Does writing custom SQL increase the size of a TDE?

I am working in Tableau. I have observed that when I used custom SQL instead of including the table directly, the size of the TDE increased drastically.
Is it because of the custom SQL, or could there be some other reason?
Though I have not faced this issue, I think you can have a look at this link, which covers things to consider while creating an extract:
http://kb.tableau.com/articles/knowledgebase/tips-working-with-extracts

DB Trigger to limit maximum table size in Postgres

Is it possible, perhaps using DB-triggers to set a maximum table-size in a postgres DB?
For example, say I have a table called: Comments.
From the user's perspective, comments can be added as frequently as they like, but say I only want to store the 100 most recent comments in the DB. So what I want to do is have a trigger that automatically maintains this, i.e. when there are more than 100 comments, it deletes the oldest one, and so on.
Could someone help me with writing such a trigger?
I think a trigger is the wrong tool for the job, although it is possible to implement this. Something about spawning a delete from an executing insert makes the hair on my neck stand up. You will generate a lot of locking and possibly contention that way, and inserts should generally not generate locks.
To me this says "stored procedure" all the way.
But I also think you should ask yourself, "why delete old comments?" Deletes are anathema. Better to just limit them when you display them. If you are really worried about the size of the table, use a TEXT column: Postgres will keep large values in a shadow (TOAST) table, and full scans of the original table will blaze along just fine.
Limiting to 100 comments per user is rather simple, e.g. (assuming the table has an id primary key):
delete from comments where id in (
  select id from comments where user_id = new.user_id
  order by comment_date desc offset 100);
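Wrapped in a trigger, it might look something like the sketch below; the function and trigger names are just placeholders, and it still assumes the id, user_id and comment_date columns used above:

create or replace function trim_comments() returns trigger as $$
begin
  delete from comments where id in (
    select id from comments where user_id = new.user_id
    order by comment_date desc offset 100);
  return new;
end;
$$ language plpgsql;

create trigger comments_trim
after insert on comments
for each row execute procedure trim_comments();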
Limiting the byte size is trickier. You'd need to calculate the relevant row sizes and that won't account for index sizes, dead rows, etc. At best you'd use the admin functions to get the table size but these won't yield the size per user, only the total size.
We could in theory create a table of 100 dummy records and then simply overwrite them with the actual comments; once we pass the 100th we would overwrite the 1st one, and so on.
This way we are supposed to keep the table at the same size, but that is not possible, because an update is equivalent to a delete plus an insert in PostgreSQL, so the size of the table will continue to grow.
So if the objective is not to overflow the disk, then once the disk is about 80% full a VACUUM FULL should be performed to free up disk space. VACUUM FULL itself requires additional disk space. If you kept the records to a fixed number, the vacuum would have an effect. Also, there seem to be cases where vacuum can fail.
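As a sketch of the admin functions and vacuum mentioned above, using the comments table from the question (note that VACUUM FULL rewrites the table and takes an exclusive lock while it runs):

-- current on-disk size of the table, including indexes and TOAST data
select pg_size_pretty(pg_total_relation_size('comments'));

-- rewrite the table to reclaim space left by deleted and updated rows
vacuum full comments;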