I have a simple SQL query that selects a few rows from one table. One of the columns contains very long strings. I would like to set a maximum column width so that the output is easier to read. I don't have access to environment variables, only to \pset.
I am trying to add a column to a collection by multiplying the existing database column recycling by 0.9, but I get a runtime error.
I tried to multiply by 0.9 directly in the function, but it showed an error, so I created a class and multiplied it there, yet to no avail. What could be the problem?
Your error message is telling you what the problem is: your database query is using GROUP BY in an invalid way.
It doesn't make sense to group by one column and then select other columns (you've selected all columns in your case): you get one row back per group, so what values would the non-grouped columns contain? You either have to group by all the columns you're selecting, or use aggregates such as SUM for the non-grouped columns.
Perhaps you meant to ORDER BY that column (orderBy(dt.recycling.asc()) in QueryDSL), or to select all rows with a particular value of that column (where(dt.recycling.eq(55)), for example)?
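To make the two valid shapes concrete, here is a minimal SQL sketch; the table and column names are assumed for illustration, not taken from your code:

-- Either group by every column you select...
SELECT location, recycling
FROM collection_table
GROUP BY location, recycling;

-- ...or aggregate the columns you don't group by:
SELECT location, SUM(recycling * 0.9) AS adjusted_recycling
FROM collection_table
GROUP BY location;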
I have a table in Grafana that has several columns and uses the gauge display mode. Setting the min/max values for these columns is troublesome. If a column holds a percentage, the max is always known and can be hard-coded as 100 or 1. But for a column displaying, say, database sizes, the max is not known and will change.
I am running Grafana 8.1.2, so I tried the new 'config from query results' transform for the first time. This works fine for altering the values of a single column, but not for more than one.
[Screenshot: Grafana table]
As you can see in the attached picture, I have set the max for the database size column using the new transform, but I also need to be able to set the max for the log size column.
The dashboard has 2 queries in it, both for MSSQLServer. Query A returns the results in a table format and Query B returns the config settings: [Screenshot: query result]
I've then got the transform set up as follows: [Screenshot: transform setup]
Is there a way to set the min/max settings for multiple columns using this new transform that I'm missing, or some other technique to do it? Unfortunately (for me), Grafana seems to favour time-series data, so it isn't as configurable for table data.
I have a table whose columns are location and credit. The location column contains string rows, mainly location_name and npl_of_location_name; the credit column contains integer rows, mainly credit_of_location_name and credit_npl_of_location_name. I need to make a column that calculates ((odd rows of credit - even rows of credit) * 0.1). How do I do this?
When you say "odd rows" and "even rows", are you referring to row numbers? Because, unless your query sorts the data, you have no control over row order; the database server returns rows however they are physically stored.
Once you are sure that your rows are properly sorted, you can use a technique such as Mod(@INROWNUM, 2) = 1 to identify "odd" rows (zero means even). This works best if the Transformer is executing in sequential mode; if it is executed in parallel mode, you need a partitioning algorithm that ensures the odd and even rows for a particular location end up on the same node.
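If you can do the pairing in plain SQL instead, a rough sketch with window functions follows; the table name and the ORDER BY are assumptions, and the sort must put each location row immediately before its matching npl_ row:

WITH numbered AS (
  SELECT location, credit,
         ROW_NUMBER() OVER (ORDER BY location) AS rn
  FROM credit_table
)
SELECT o.location,
       (o.credit - e.credit) * 0.1 AS calculated_credit
FROM numbered o
JOIN numbered e ON e.rn = o.rn + 1   -- pair each odd row with the even row after it
WHERE MOD(o.rn, 2) = 1;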
I have a problem that is somewhat similar to this question: Is there a way to set the max width of a column when displaying query results in psql? I have a number of tables in Postgres with large JSONB documents. When I use psql from Emacs, it grinds to a halt trying to display fields containing these documents. Ideally, I just want to see the first X characters of a document when I select * from a given table. I tried:
\pset columns 20
To no avail. Is there some permutation of columns and format that could achieve this? At present I am manually casting columns like so:
cast("PAYLOAD" as varchar(50))
This works, but it does mean I need to remember to cast before selecting, which is a major pain.
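For illustration, the workaround in a full statement looks like this; the table and ID column names are made up:

SELECT "ID",
       cast("PAYLOAD" as varchar(50)) AS payload_preview  -- only the first 50 characters
FROM "EVENTS"
LIMIT 10;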
I have been asked to determine how much data our application uses and how fast it is growing. The problem is that many applications share the same database and tables, with a column used to determine which application the data belongs to. It is a DB2 database.
Is there any way to find the size in bytes of all the columns a table uses for a given row? It is important that I select only those rows that belong to my application.
If a column is not nullable, I do not include it in the SQL; I just multiply its size by the row count. I am primarily trying to determine the average size of nullable and variable-size columns (we use VARCHAR and BLOB).
At the moment what I am doing looks something like this:
SELECT
    VALUE(LENGTH(COLUMN_1), 0) AS LEN_COL_1
    -- ...repeated for each variable-size column...
FROM TABLE T
WHERE T.APP_ID = my app
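Since the goal is an average, you can also let DB2 do the aggregation; a sketch along the same lines, where 'MY_APP' is a placeholder for your application's identifier:

SELECT
    AVG(VALUE(LENGTH(COLUMN_1), 0)) AS AVG_LEN_COL_1,  -- average byte length, NULLs counted as 0
    COUNT(*) AS ROW_COUNT
FROM TABLE T
WHERE T.APP_ID = 'MY_APP'  -- placeholder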
The most accurate way to determine size would be to look at the sizes of the files that make up the DB2 tables.
Multiply the file sizes by the fraction of rows that belong to your application.
This way, you count most of DB2's overhead size, including indexes.
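A quick sketch for computing that fraction; 'MY_APP' is again a placeholder for your application's identifier:

SELECT
    CAST(SUM(CASE WHEN T.APP_ID = 'MY_APP' THEN 1 ELSE 0 END) AS DOUBLE)
        / COUNT(*) AS APP_FRACTION  -- share of rows belonging to your application
FROM TABLE T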