Setting Database Type in AnyLogic

I have a product in my model that goes through a series of processing steps. I assign the processing times for each product type through an Excel database that is loaded into the model, with each processing time in a separate column (e.g. Product 1: 2.3, 4.8, 9, meaning that it takes 2.3 time units for process 1, 4.8 time units at process 2, and so on).
Currently, I store all the processing times in a List(Double) within my product (e.g. v_ProcessTime = [2.3, 4.8, 9]). However, I face an error when some columns contain purely integers instead of double values: the column's value type is then recognised as integer, and AnyLogic raises an error that it can't write an integer to a double list. The only workaround I have found is to manually change the column type in the database to Double.
Is it possible to change the value type of database columns with Java code, or is there any other way to bypass this issue?

Unfortunately, how the database recognises the type is out of your control. And yes, if you can't change the source itself so that each column has at least one value that is not an integer, then your only choice is to change the database value type manually.
Nevertheless, to use the values in a list, you can just convert an integer into a double like this:
(double) intVariable
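For instance, here is a minimal sketch of that cast in context, as it might appear in an AnyLogic code field when filling the product's list from a purely-integer column (the variable names and sample values are illustrative, not from your model):

List<Double> v_ProcessTime = new ArrayList<Double>();
int[] columnValues = { 2, 5, 9 }; // a column the database read as integer
for (int v : columnValues) {
    v_ProcessTime.add((double) v); // widen each int to double before adding
}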

Related

DECIMAL types in T-SQL

I have a table with about 200 million rows and multiple columns of data type DECIMAL(p,s) with varying precision and scale.
As far as I understand, DECIMAL(p,s) is a fixed-size column, with the size depending on the precision; see:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-ver16
Now, when altering the table and changing a column from DECIMAL(15,2) to DECIMAL(19,6), for example, I would have expected almost no work to be done on the side of the SQL Server, since the bytes required to store the value are the same. Yet the ALTER itself takes a long time, so what exactly does the server do when I execute the ALTER statement?
Also, is there any benefit (other than having a constraint on the column) to storing a DECIMAL(15,2) instead of, say, a DECIMAL(19,2)? It seems to me the storage requirements would be the same, but I could store larger values in the latter.
Thanks in advance!
The precision and scale of a decimal / numeric type matter considerably.
As far as SQL Server is concerned, decimal(15,2) is a different data type from decimal(19,6), and it is stored differently. You therefore cannot assume that because the overall storage requirement does not change, nothing else does.
SQL Server stores decimal data types in byte-reversed (little-endian) format, with the scale being the first incrementing value, so changing the definition can require the data to be rewritten. SQL Server will use an internal worktable to safely convert the data and update the values on every page.
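For illustration, here is a hedged JDBC sketch of issuing such an ALTER from Java; the connection string, table, and column names are assumptions for the example, not from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class WidenDecimalColumn {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=MyDb;integratedSecurity=true";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement()) {
            // A size-of-data operation: SQL Server may rewrite every page of a
            // 200-million-row table via a worktable, so expect a long runtime.
            st.execute("ALTER TABLE dbo.Measurements ALTER COLUMN Amount DECIMAL(19,6)");
        }
    }
}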

Setting the column data type in databases

Error for written code: my current model imports certain parameters from an Excel file. I'm hoping to allow users to override the existing values in the database through an edit box. However, I'm faced with the error shown in the attached image: the imported data column is of integer type, while the set function requires a double. I've tried placing (double) db_parameters.duration_sec and it fails too. Is there any way to convert imported data to the required type? I don't want to manually change the data type under the database fields, as I may need to re-import the Excel sheet from time to time, which auto-resets the columns back to integer type. Thanks!
Your query should look like this:
update(db_parameters)
.where(db_parameters.tasking.eq("Receive Lot"))
.set(db_parameters.duration_sec, (int)v_ReceiveLot)
.execute();
The (int) cast has to be applied to the parameter, not to the column.

JVM: Is it safe to store a BigDecimal as a double in a database?

I am working on an application that requires monetary calculations, so we're using BigDecimal to process such numbers.
I currently store the BigDecimals as strings in a PostgreSQL database. That made the most sense to me because I can be sure the numbers will not lose precision, as opposed to when they are stored as doubles in the database.
The problem is that I cannot really run many queries against that (e.g. 'smaller than X' is impossible on a number stored as text).
For numbers I do have to perform complex queries on, I create a new column called indexedY (where Y is the name of the original column), i.e. I have amount (string) and indexedAmount (double). I convert amount to indexedAmount by calling toDouble() on the BigDecimal instance.
I then run the query, and when a row is found, I convert the string version of the same number back into a BigDecimal and perform the comparison once again (this time on the fetched object), just to make sure there were no rounding errors while the double was in transit (from the application to the DB and back).
I was wondering if I can avoid this extra step of creating the indexedY columns.
So my question comes down to this: is it safe to just store the outcome of a BigDecimal as a double in a (PostgreSQL) table without losing precision?
If BigDecimal is required, I would use a NUMERIC type with as much precision as you need, e.g. NUMERIC(20, 20).
However, if you only need 15 digits of precision, using a double in the database might be fine, in which case it should be fine in Java too.
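As a quick illustration of where the 15-digit limit bites, here is a small Java check of a BigDecimal -> double -> BigDecimal round trip (the sample value is arbitrary):

import java.math.BigDecimal;

public class DoubleRoundTrip {
    public static void main(String[] args) {
        BigDecimal exact = new BigDecimal("1234567890.1234567890"); // 20 significant digits

        // Simulate storing the value in a double column and reading it back.
        double stored = exact.doubleValue();
        BigDecimal restored = BigDecimal.valueOf(stored);

        System.out.println(exact);                          // 1234567890.1234567890
        System.out.println(restored);                       // the last digits are lost
        System.out.println(exact.compareTo(restored) == 0); // false
    }
}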

Is there any way for Access 2016 to sort the numbers that are part of a "text" data type formatted field as though they are numeric values?

I am working on a database that will (hopefully) end up using a primary key containing both numbers and letters to track lots of agricultural product. Due to the way the product is weighed at more than one facility, I have no option but to keep the same base number and add letters to it to denote split portions of each lot. The problem is that after I create record number 99, the number 100 suddenly sorts up underneath 10. This makes it difficult to maintain consistency and forces me to replace the alphanumeric lot ID with a strictly numeric value (using AutoNumber as the data type) just to keep it sorted. Either way, I need the alphanumeric lot ID, and having two IDs for the same lot can be confusing for anyone entering values into the form. Is there a way around this that I am just not seeing?
If you're using a query as the data source, you can try sorting by the string converted to a number, something like:
SELECT id, field1, field2, ..
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try the Val function instead of CLng, as it should not fail on non-numeric input.
Why not properly format your key before saving, e.g. "0000099"? You will avoid a costly conversion later.
Alternatively, you could use two fields as a composite PK: one with the number (as Long) and one with the location (as String).
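A small Java sketch of the zero-padding idea (the field width of 7 and the suffix scheme are assumptions for illustration):

public class LotIds {
    static String lotId(int baseNumber, String splitSuffix) {
        // %07d left-pads with zeros so text sorting matches numeric order:
        // "0000099" sorts before "0000100", whereas "99" sorts after "100" as plain text.
        return String.format("%07d", baseNumber) + splitSuffix;
    }

    public static void main(String[] args) {
        System.out.println(lotId(99, "A"));  // 0000099A
        System.out.println(lotId(100, "B")); // 0000100B
    }
}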

Play!Framework 2 Java model string field length annotations

To achieve the best performance and validation convenience, which of these annotations are needed for a String field?
Database: MySQL
A field to store a district name:
@Column(length=50) // javax.persistence.Column
Is this going to be converted to varchar(50)? Or do I need this one specifically:
@Column(columnDefinition="varchar(50)")
And another two annotations:
@MaxLength(50) // play.data.validation.Constraints.MaxLength
@Length(max=50) // com.avaje.ebean.validation.Length, is this one useful or not required anyway?
public String districtName;
I think I need @Column(length=50) for the definition and @MaxLength(50) for validation at the same time? Or will one of these two imply the other automatically?
Thanks.
As far as I know, when we mark a String field with these annotations:
@javax.persistence.Column(length=50)
@javax.persistence.Column(columnDefinition="varchar(50)"). Note: I use PostgreSQL, and this will create a column definition with the character varying data type
@com.avaje.ebean.validation.Length(50)
the three annotations above have the same purpose: each will create a column definition with the character varying data type and a length of 50 characters in the database.
Without @Constraints.MaxLength(50), you will get an exception like the one below when you enter an input value whose length is greater than 50:
Execution Exception
[ValidationException: validation failed for: models.TheModel]
I think there should be a way to handle the above exception, but honestly I don't know how to do that.
Advice
My advice is to choose one of the three annotations above (whichever you prefer) together with @Constraints.MaxLength(50). For me it is the easiest and simplest way, and you can easily build the form using the Play Framework scala template helpers.
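To make that concrete, here is a minimal sketch of a model field carrying both annotations, assuming an Ebean-backed Play 2 model (the entity and field names are illustrative):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import play.data.validation.Constraints;

@Entity
public class District {
    @Id
    public Long id;

    @Column(length = 50)          // DDL side: generates varchar(50) / character varying(50)
    @Constraints.MaxLength(50)    // form side: validated by Play's form binding
    public String districtName;
}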