How to select database columns using a list of integers in AnyLogic?

I have imported a large database into AnyLogic with various columns. Rows can be selected using the table's unique primary key. Similarly, how can I move through columns using integer indexes?
The attached picture shows the selection query for the encircled cell; to get to another cell I need to change the column in the query again, which is surely not efficient.

Related

Turning cells from row into a column and marking them as a primary key? (Postgresql)

So my following table is like this:
Tower_ID|...|Insulator_ID_01|Insulator_ID_02|...|Insulator_ID_12|
Tower_01|...|01_Unique_ID_01|01_Unique_ID_02|...|01_Unique_ID_12|
Tower_02|...|02_Unique_ID_01|02_Unique_ID_02|...|02_Unique_ID_12|
Then the idea is to have a single table for every insulator that belongs to the towers in this specific line (the towers in this line are the table). But the only way I know is to have a table for each column of insulators. Is it possible to create a single table with relationships that would store Insulator_ID_01 to Insulator_ID_12 in a column before going on to the next row of the table and doing the same?
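What the question describes is an "unpivot" from a wide layout (one column per insulator) to a long layout (one row per tower/insulator pair). A minimal Python sketch of that transformation, with hypothetical data matching the layout shown above:

```python
# Sketch: unpivot a wide "one column per insulator" table into long
# (tower_id, slot, insulator_id) rows. Names are hypothetical, based
# on the table layout in the question.
wide_rows = [
    {"Tower_ID": "Tower_01", "Insulator_ID_01": "01_Unique_ID_01",
     "Insulator_ID_02": "01_Unique_ID_02"},
    {"Tower_ID": "Tower_02", "Insulator_ID_01": "02_Unique_ID_01",
     "Insulator_ID_02": "02_Unique_ID_02"},
]

long_rows = [
    (row["Tower_ID"], slot, row[col])
    for row in wide_rows
    for slot, col in enumerate(
        sorted(c for c in row if c.startswith("Insulator_ID_")), start=1)
]
print(long_rows)
```

The resulting long table is what you would store in a single insulators table with a foreign key back to the tower, rather than one table per insulator column.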

Update large number of rows in postgres

The table contains around 80k existing rows.
I want to add a new column and want to update its value with the existing column's value.
What would be the better approach?
Batch update
A cursor that iterates through the rows and applies a separate update to each one
HOT (heap-only tuple) updates in Postgres
Please help.
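One common pattern for this kind of backfill is to update in key-range batches with a commit per batch, rather than a row-by-row cursor loop. A sketch using sqlite3 for the demo (the table and column names are hypothetical); the batching idea carries over to Postgres:

```python
import sqlite3

# Sketch: add a new column and backfill it from an existing column in
# batches keyed on the primary key, committing once per batch so no
# single transaction touches all 80k rows at once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, old_col INTEGER)")
conn.executemany("INSERT INTO t (id, old_col) VALUES (?, ?)",
                 [(i, i * 10) for i in range(1, 101)])
conn.execute("ALTER TABLE t ADD COLUMN new_col INTEGER")

batch_size = 25
for start in range(1, 101, batch_size):
    conn.execute(
        "UPDATE t SET new_col = old_col WHERE id BETWEEN ? AND ?",
        (start, start + batch_size - 1),
    )
    conn.commit()  # one commit per batch keeps each transaction small

done = conn.execute(
    "SELECT COUNT(*) FROM t WHERE new_col = old_col").fetchone()[0]
print(done)
```

For only ~80k rows a single `UPDATE t SET new_col = old_col` may well be fine; batching matters more as tables grow or when long-running transactions must be avoided.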

select all columns except two in q kdb historical database

In output I want to select all columns except two columns from a table in q/kdb historical database.
I tried running the query below, but it does not work on the HDB.
delete colid,coltime from table where date=.z.d-1
but it fails with the error below:
ERROR: 'par
(trying to update a physically partitioned table)
I referred to https://code.kx.com/wiki/Cookbook/ProgrammingIdioms#How_do_I_select_all_the_columns_of_a_table_except_one.3F but it did not help.
How can we display all columns except for two in kdb historical database?
The reason you are getting the 'par error is that it is a partitioned table.
The error is documented here:
trying to update a physically partitioned table
You cannot directly update or delete anything in a partitioned table (there is a separate database-maintenance script for that).
The query you used as a fix works because it first selects the data into memory (temporarily) and then deletes the columns:
delete colid,coltime from select from table where date=.z.d-1
You can try the following functional form:
c:cols[t] except `p
?[t;enlist(=;`date;2015.01.01);0b;c!c]
Could try a functional select:
?[table;enlist(=;`date;.z.d);0b;{x!x}cols[table]except`colid`coltime]
Here the last argument is a dictionary mapping column names to column expressions, which tells the query what to extract. Instead of deleting the columns you specified, this selects all but those two, which is more or less the same query.
To see what the functional form of a query is you can run something like:
parse"select colid,coltime from table where date=.z.d"
And it will output the arguments to the functional select.
You can read more on functional selects at code.kx.com.
Only select queries work on partitioned tables, which you worked around by first selecting the table into memory and then deleting the columns you did not want.
If you have a large number of columns and don't want to create a bulky select query you could use a functional select.
?[table;();0b;{x!x}((cols table) except `colid`coltime)]
This shows all columns except a subset. The column clause expects a dictionary, hence the function {x!x} is used to convert the list of column names to a dictionary. See more information here:
https://code.kx.com/q/ref/funsql/
As nyi mentioned, if you want to permanently delete columns from a historical database you can use the deleteCol function in the dbmaint tools: https://github.com/KxSystems/kdb/blob/master/utils/dbmaint.md
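The "all columns except some" idiom above is not specific to q; the same pattern, reading the table's column list and building the select dynamically, works in other databases too. A sketch of the analog in Python/SQLite, with hypothetical table and column names echoing the question:

```python
import sqlite3

# Analog of the kdb functional select: list the table's columns,
# drop the unwanted names, and build the SELECT from what remains.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (colid INTEGER, coltime TEXT, sym TEXT, price REAL)")
conn.execute("INSERT INTO trades VALUES (1, '09:30', 'AAPL', 101.5)")

exclude = {"colid", "coltime"}
all_cols = [r[1] for r in conn.execute("PRAGMA table_info(trades)")]
keep = [c for c in all_cols if c not in exclude]

query = f"SELECT {', '.join(keep)} FROM trades"
result = conn.execute(query).fetchall()
print(result)
```

As in the q version, the column list is computed once and the query is assembled from it, so adding columns to the table does not require editing the query.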

SQL Server DB Design - Single table with 150 Columns in one table or dynamic Pivot

I'm recreating a DB, and I have a table with 150 columns and currently 700 rows (a small dataset); it will likely take 10 more years to reach 1000 rows.
My question:
Most of my data is normalized. About 125 fields contain a single numeric value (hours, currency, decimals, and integers). There are 10 or so columns that can have multiple values.
Do I continue to use the single table with 150 columns?
Or
Do I create cross-reference tables and use a pivot query to turn my rows into columns? Something like this:
**c_FieldNames**
id int identity (PK)
FieldName nvarchar
Decimals nvarchar(2)

**cx_FieldValues**
id int identity(1,1)
fkProjectID int
FieldNameID int (FK to id from c_FieldNames)
FieldValue numeric(16,2)

**Project**
ProjID int (PK)
ProjectName
The decimals would tell me how many decimal places a given field would need - I'd like to incorporate that into my query... Not sure if that's possible.
For each of my 125 fields with numbers, I would create a row in the cx_FieldNames table which would get an ID. That ID would be used in the FieldNameID as a foreign key.
I would then create a view with a pivot query that would dynamically turn the 125 rows into columns, alongside my standard columns, so it looks like the table with 150 columns.
I'm pretty sure I will be able to use a pivot table to turn my rows into columns. (Dynamically display rows as columns)
Benefits:
I could create a table for reports that would have all the "columns" I need for that report and then filter to them and just pull those fields dynamically.
Reports
ReportID int
FieldID int
The fieldID's would be based on the c_FieldName id's and I could turn all required field names (that are in the rows) into headers and run a vast majority of reports based on dynamic sql generated based on the field names. Same applies to all data structured... [Edit from Author] The more I think about this, I could do this with either table structure, which negates the benefits I saw here, as I am adding complexity for no good reason, as pointed out in the comments.
My thought is that it will save me much development time, as I can use a pivot table to generate reports and pull data on the fly without much trouble. Updating data will be a bit of a chore, but not that much more than normal. I am creating a C#.NET website with Visual Studio (hosted on Azure) to allow users to view, update, and run reports on the data. Any major drawbacks in this structure? Is this a good idea? Are 125 columns in a pivot too many? Thanks in advance!
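The pivot step the question relies on is mechanically simple: group the (project, field name, value) rows from the cx_FieldValues-style table by project and turn each field name into a key. A minimal Python sketch with hypothetical field names:

```python
# Sketch of the proposed pivot: turn entity-attribute-value rows
# back into one wide record per project. Field names are hypothetical,
# matching the tables sketched above.
rows = [
    (1, "Hours", 40.0),
    (1, "Budget", 1200.50),
    (2, "Hours", 32.0),
]

wide = {}
for proj_id, field_name, value in rows:
    wide.setdefault(proj_id, {})[field_name] = value

print(wide)
```

In SQL Server the same reshaping needs a dynamic PIVOT (the column list must be built from c_FieldNames at query time), which is the main source of complexity the comments warn about.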

Set Postgres Column to equal query result

If I had a table of products and another table of manufacturers, and I wanted that table to have a count of products, is there a way in postgres to say "this column equals the number of rows in this other table that meet this condition"?
EDIT: I mean that the column value should be calculated automatically. So if I have a column for the number of products that are red, I want it to always equal the number of rows returned by select * from products where color='red';, without having to run that query myself each time.
You should not store calculated values in an operational database. If it's a data warehouse, go ahead.
You can use a view to do the calculation for you.
http://sqlfiddle.com/#!15/0b744/1
You can use a materialized view to increase performance, and refresh it with a trigger on the products table.
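The view approach can be sketched end to end: instead of storing the count, define a view that computes it on every read, so it can never go stale. A demo using sqlite3 (the same CREATE VIEW statement works in Postgres; table and column names are hypothetical):

```python
import sqlite3

# Sketch: a view that always reflects the current count of red
# products, rather than a stored column that must be kept in sync.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, color TEXT);
INSERT INTO products (color) VALUES ('red'), ('red'), ('blue');
CREATE VIEW red_product_count AS
    SELECT COUNT(*) AS n FROM products WHERE color = 'red';
""")
before = conn.execute("SELECT n FROM red_product_count").fetchone()[0]
print(before)

conn.execute("INSERT INTO products (color) VALUES ('red')")
# The view recomputes on read, so the new row is reflected automatically:
after = conn.execute("SELECT n FROM red_product_count").fetchone()[0]
print(after)
```

If the underlying query becomes expensive, that is the point where a materialized view plus a refresh trigger, as suggested above, trades freshness-by-construction for cached results.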