Is there a way to set the max width of a column when displaying JSONB results in psql? - postgresql

I have a problem that is somewhat similar to this question: Is there a way to set the max width of a column when displaying query results in psql?. I have a number of tables in Postgres with large JSONB documents. When I use psql from Emacs, it grinds to a halt trying to display fields with these documents. Ideally, I just want to see the first X characters of a document when I select * from a given table. I tried:
\pset columns 20
To no avail. Is there some permutation of columns and format that could achieve this? At present I am manually casting columns like so:
cast("PAYLOAD" as varchar(50))
This works, but it does mean I need to remember to cast before selecting, which is a major pain.
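For what it's worth, left() does the same truncation a little more directly (a sketch; my_table and the 50-character limit are placeholders):
select left("PAYLOAD"::text, 50) as payload_preview
from my_table;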

Related

How to multiply a variable with each element of a column in a database

I am trying to add a column to a collection by multiplying 0.9 with the existing database column recycling, but I get a runtime error.
I tried multiplying by 0.9 directly in the function, but it shows an error, so I created a class and multiplied it there, yet to no avail. What could be the problem?
Your error message is telling you what the problem is: your database query is using GROUP BY in an invalid way.
It doesn't make sense to group by one column and then select other columns (in your case, all of them): since only one row is returned per group and you haven't grouped by those columns, what values would they contain? You either have to group by all the columns you're selecting, or use aggregates such as SUM for the non-grouped columns.
Perhaps you meant to ORDER BY that column (orderBy(dt.recycling.asc()) in QueryDSL syntax, for ascending order), or to select all rows with a particular value of that column (where(dt.recycling.eq(55)), for example)?
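For illustration, the two valid shapes in plain SQL (dt and weight are hypothetical names):
-- group by every selected non-aggregate column
SELECT recycling, COUNT(*) AS cnt FROM dt GROUP BY recycling;
-- or aggregate the columns you do not group by
SELECT recycling, SUM(weight) AS total_weight FROM dt GROUP BY recycling;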

In PostgreSQL, is it possible to have a default format for real columns?

In PostgreSQL, I have a column with people's height in meters. If the height is, say, 1.75 m, it shows properly, but if the height is 1.70 m, it shows as 1.7. I would like to have this already formatted to two decimal places, showing as 1.70, without formatting in each and every SQL call. Can I specify this in the table creation? Or a stored procedure, or something? I've seen a few things about timestamps, but not for real fields. Knowing how to format the decimal separator as a comma (1,70) would be a plus.
Basically, presentation and "cosmetics" are the job of the application, not the database.
Having a default number of decimal places for floats would also create a problem, because the data returned by the database would no longer be the actual data in the column. If a SELECT returned a value of 1.75 and you then searched for that value, you might not find it, because the actual stored value was not 1.75 but 1.7499999999 and it was only rounded for display.
Potential solutions:
If you want to store a specified number of digits, use NUMERIC; this solves the 1.7499999999 problem above, and a SELECT returns the actual contents of the column (see the sketch after this list).
In your app, if you use an ORM, use a Decimal (or similar) type for the column with the appropriate settings so it displays the way you want.
Or create a view with the format applied to the column, but in this case if you want the trailing zero, the type will be text and not float, and it will not be searchable unless you create an extra index on it.
A generated column with the number formatted as you want may be easier than a view.
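A minimal sketch of the NUMERIC approach (people and height are hypothetical names):
CREATE TABLE people (
    id     serial PRIMARY KEY,
    height numeric(4,2)  -- scale of 2 keeps trailing zeros
);
INSERT INTO people (height) VALUES (1.7);
SELECT height FROM people;  -- returns 1.70, not 1.7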

UPDATE SQL Command not saving the results

I looked through the forum but I couldn't find an issue like mine.
Essentially I have a table called [p005_MMAT].[dbo].[Storage_Max]. It has three columns: Date, HistValue, and Tag_ID. I want all the values in the HistValue column to have two decimal places. For example, if a number is 1.1, I want it to be 1.10, and if it is 1, I want it to look like 1.00.
Here is the SQL update statement I am using:
update [p005_MMAT].[dbo].[Storage_Max]
set [HistValue] = cast([HistValue] as decimal(10,2))
where [Tag_ID] = 94
After executing the query it says 3339 rows affected, but when I perform a simple select statement the column appears unchanged. When I use that cast in a select statement, it does add two decimal places.
Please advise.
The problem is the datatype: float and real in SQL Server do not keep trailing zeros. You either have to change the datatype of the column, or just deal with it and handle the formatting in your queries or application.
You could run something like the following:
select
cast([HistValue] as decimal(10,2))
from [p005_MMAT].[dbo].[Storage_Max]
where [Tag_ID] = 94
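If changing the column type is an option, a sketch of that route (assumes no constraints or dependent objects block the change):
alter table [p005_MMAT].[dbo].[Storage_Max]
alter column [HistValue] decimal(10,2)
After that, the stored values themselves carry two decimal places and the UPDATE becomes unnecessary.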

How do I change the max column width in PostgreSQL?

I have a simple SQL query that selects a few rows from one table. One of the columns contains very long strings. I would like to set a maximum column width so that the output is easier to read. I don't have access to environment variables through \pset.

Get size of all columns on a DB2 table

I have been asked to determine how much data our application uses and how fast it is growing. The problem is that many applications share the same database and tables, with a column used to determine which application the data belongs to. It is a DB2 database.
Is there any way to find the size in bytes of all the columns a table uses for a given row? It is important that I select only those rows that belong to my application.
If a column is not nullable, I do not include it in the SQL; I just multiply its size by the row count. I am primarily trying to determine the average size of nullable and variable-size columns (we use VARCHAR and BLOB).
At the moment what I am doing looks something like this:
SELECT VALUE(LENGTH(COLUMN_1), 0) AS LEN_COL_1  -- repeated for each variable-size column
FROM TABLE T
WHERE T.APP_ID = my app
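Since the goal is an average, the per-row lengths can be wrapped in AVG directly; a sketch that keeps the placeholder names above:
SELECT AVG(VALUE(LENGTH(COLUMN_1), 0)) AS AVG_LEN_COL_1
FROM TABLE T
WHERE T.APP_ID = my app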
The most accurate way to determine size would be to look at the sizes of the files that make up the DB2 tables.
Apportion the file sizes according to the percentage of rows that belong to your application.
This way, you count most of DB2's overhead, including indexes.
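On DB2 for LUW, the ADMINTABINFO administrative view exposes comparable size information without inspecting files directly; a sketch (assumes the SYSIBMADM views are available; MYSCHEMA and MY_TABLE are placeholders):
SELECT TABNAME,
       DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE
         + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE AS TOTAL_KB
FROM SYSIBMADM.ADMINTABINFO
WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MY_TABLE'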