For some reason, when I store values to my database, it insists on making them exactly 6 characters long. So when I insert the string "red" it actually gets stored as "red   " (with three extra spaces at the end).
I know what caused this. I created the database field as nchar(6). This was a mistake. But even after changing it to nvarchar(6), it still adds 3 extra spaces to "red". I also tried deleting the table and recreating it from scratch. This did not help. I tried an insert statement with "red" as the value to go into the field in question. This worked. So this suggested to me the problem is in EntityFramework.
So I move to my application. It's an MVC application using EntityFramework. I update the model from the database. I examine the table of interest in the model:
The trouble field is Color. I look at the properties:
I see that "Fixed Length" is set to true. I set it to false. I try again. Still adding extra space. I try setting fixed length to none. Still adding extra space.
What more does a guy have to do to stop it from adding extra space? What other gazillion spots are there to configure?
Try editing the model's XML directly, or delete the table from the model and then update from the database to add it back.
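One more thing worth checking on the database side: rows that were inserted while the column was still nchar(6) keep their trailing spaces even after the column is altered to nvarchar(6), so old values will still come back padded. A minimal T-SQL sketch of that check (the table name dbo.Items is just a placeholder for illustration; Color is the column from the question):

-- Padding stored under the old nchar(6) definition survives the type change
ALTER TABLE dbo.Items ALTER COLUMN Color NVARCHAR(6) NULL;

-- See whether stored values still carry trailing spaces (nvarchar uses 2 bytes per character)
SELECT Color, DATALENGTH(Color) AS StoredBytes FROM dbo.Items;

-- Strip the padding that was written while the column was still nchar
UPDATE dbo.Items SET Color = RTRIM(Color);

If padded values keep coming back after that, the extra spaces really are being added on the application/Entity Framework side rather than by the database.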
I use SQL Server with ASP.NET Core and EF Core. After each record is added, the identity column's value jumps by about 1000 and creates a gap between the current row and the last previously added row.
Questions
Is there any way to prevent this?
How to delete those gaps that have been created before?
Is there a problem (performance or any other problem) if I use GUID for key columns to prevent that issue?
Is there any way to handle this on the server side, or in some way with EF Core?
Thank you in advance for your help.
For the reason behind the 1000-value gaps, see Aaron Bertrand's answer.
It doesn't really make sense to "want" to delete the gaps. The content of an identity column contains no semantic information. It correlates to nothing "in the world" outside the database. The gaps are as meaningless as the values themselves.
I don't see how a uniqueidentifier would "prevent" that issue. A uniqueidentifier may be "meaningfully" sortable (if you use newsequentialid()), but there's no sense in which any particular value is "one more" than a previous value.
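If you do end up using GUID keys anyway (question 3), a sequential default keeps new values roughly ordered, which avoids the worst clustered-index fragmentation of purely random GUIDs. A rough sketch only; the table and constraint names below are made up, and NEWSEQUENTIALID() can only be used as a column default:

CREATE TABLE dbo.TBL_TransactionGuid
(
    TransactionId UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_TBL_TransactionGuid_Id DEFAULT NEWSEQUENTIALID(),
    Amount DECIMAL(18, 2) NOT NULL,
    CONSTRAINT PK_TBL_TransactionGuid PRIMARY KEY CLUSTERED (TransactionId)
);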
You can certainly try to build your own key generating algorithm that does not produce gaps, but you will run into concurrency issues (also mentioned by Mr Bertrand).
workaround trick:
CREATE OR ALTER TRIGGER TGR_Transaction_Identity_Fix
ON [dbo].[TBL_Transaction]
FOR INSERT
AS
BEGIN
  -- Use @ for the local variable; reseed to the current maximum so the next identity value is MAX + 1
  DECLARE @RESEEDVAL INT
  SELECT @RESEEDVAL = MAX(TransactionId) FROM [dbo].[TBL_Transaction]
  DBCC CHECKIDENT([TBL_Transaction], RESEED, @RESEEDVAL)
END
This trigger will reseed the identity value on each insert, so the next row continues from the current maximum.
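A quick way to sanity-check the workaround, assuming a minimal table like the one below (all names are illustrative; adjust them to your schema):

-- Illustrative table matching the trigger above
CREATE TABLE [dbo].[TBL_Transaction]
(
    TransactionId INT IDENTITY(1, 1) PRIMARY KEY,
    Amount DECIMAL(18, 2) NOT NULL
);
GO
-- create the trigger shown above, then insert a couple of rows
INSERT INTO [dbo].[TBL_Transaction] (Amount) VALUES (10.00), (20.00);

-- NORESEED only reports: the current identity value should now equal MAX(TransactionId)
DBCC CHECKIDENT('dbo.TBL_Transaction', NORESEED);

Be aware that reseeding on every insert adds overhead and can misbehave under concurrent inserts, which ties back to the concurrency caveat above.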
I’ve got a problematic use case:
I’ve got a field something_10_somotherthing in my database, and it seems that extbase experiences some issues mapping $something10Someotherthing to this field, though I don’t know why.
I’m importing the data from a json file into my mysql database 1:1 and mapping it with extbase afterwards, so I’m not that flexible on field names (but I could implement a mapping in my import if needed). I tried mapping the field using the techniques from the documentation (https://docs.typo3.org/typo3cms/ExtbaseFluidBook/8.7/6-Persistence/4-use-foreign-data-sources.html) but even when adding this to ext_typoscript_setup.txt and ext_typoscript_setup.typoscript, nothing happened. Any thoughts?
I think I’ve got an issue because of the 10, and that extbase might not be able to map it properly to a lowerCamelCase name, but I’m really unsure about it.
Thanks for any help!
Hi, your property cannot be mapped automatically because of the _10_ part. You have two options:
Define an explicit property mapping see https://docs.typo3.org/typo3cms/ExtbaseFluidBook/6-Persistence/4-use-foreign-data-sources.html
Rename your field name to something10_somotherthing (see the sketch after the explanation below)
Explanation: Extbase uses uppercase letters as separators when generating the database field name from the property name, and digits count as lowercase, so it does not insert an underscore separator before the 10, ending up with the field name something10_somotherthing
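Since you said the data lands in MySQL first, option 2 can be a one-off rename (or part of your import). This is only an illustration; the table name and column definition below are made up and you would replace them with your own:

-- Rename the column so Extbase's default lowerCamelCase-to-underscore mapping finds it
ALTER TABLE tx_myext_domain_model_item
  CHANGE COLUMN something_10_somotherthing something10_somotherthing VARCHAR(255) NOT NULL DEFAULT '';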
I have a form based on a multiple-tables query. As some fields from different tables have the same names, I must add the corresponding table's name. However, there are hyphens in the tables' names as well as in the fields' names (both inherited from foreign Excel tables).
In VBA there is no problem: [Table-1.Field-1] always works well (also in SQL queries). However, when I enter this in design view as the form control's data source, Access "thinks" it is wrong and automatically replaces it with [[Table-1].[Field-1]], with the result that the form then displays the #Name? error. I tried replacing the brackets with quotes, but without any success.
Note that there is no error when only the table or only the field has a hyphen: both MyTable.[Field-1] and [Table-1].Myfield are accepted by the form.
The correct syntax should be:
[Table-1].[Field-1]
Or, using bang notation:
[Table-1]![Field-1]
In the meantime I have found not a true answer, but nevertheless a quite satisfactory workaround, by adding the following calculated field to the query:
MyWorkAround: [Table-1.Field-1]
Then I can simply refer to [MyWorkAround] in the corresponding form field to avoid the form's bug. But this isn't really very elegant!
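For reference, the equivalent in the query's SQL view is just an alias on the hyphenated field (table and field names taken from the question), and the form control then binds to the plain alias:

SELECT [Table-1].[Field-1] AS MyWorkAround
FROM [Table-1];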
Note that I always use [ … ] around fields, even where not necessary. This practice helps avoid a lot of errors.
Here is my issue. I am currently forced to use Access and I am writing some generic validation that I can add to forms.
It was all going well, catching empty fields in Form_Error based on the error "You tried to assign the Null value to a variable that is not a Variant data type".
All of my required varchar fields are NOT NULL.
Unfortunately, if a textbox's control source is a large varchar DB field, it behaves differently (I can't remember the size threshold, but I assume this behaviour difference is equivalent to Text vs. Memo in an Access table).
Basically, if you delete the contents of a small text box control it attempts to write null and the error is caught. All good.
If you do the same on a text box linked to a larger varchar or memo database field, then it writes a blank string, which is considered valid.
I have confirmed this by changing the db Schema between varchar(50) and varchar(256), updating the linked table in Access and restarting Access for good measure.
I am hoping someone can point me to a property to set or some tiny piece of generic code that can be added to make all text boxes behave the same regarding writing NULL/Empty string when they are empty regardless of the size of the DB field they are connected to.
Just to note that the text box also behaves differently on insert versus edit: if not filled in on insert, it does leave the DB entry as Null.
That's pretty much the way you have to do it. You could set up a "Validation Rule" on each text field, but again that would require hunting down all the text controls.
You can make that job easier. If you check the Object Dependencies of the tables, you can get a list of all the forms (and queries, etc.) involved. Then you can be sure you have hit each one.
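If the backend really is SQL Server (as the varchar(50)/varchar(256) schema changes suggest), one further option, different from the per-control Validation Rule above, is to push the rule into the database so that a blank-string write also fails and lands in your generic error handling. Only a sketch, with made-up table and column names:

-- Reject zero-length strings as well as NULLs, so the empty-string write
-- from a memo/large-varchar-bound text box raises a trappable error
ALTER TABLE dbo.Customer
  ADD CONSTRAINT CK_Customer_CustomerName_NotBlank CHECK (LEN(CustomerName) > 0);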
I'm trying to copy some field values to a duplicate database. One record at a time. This is used for history and so I can delete some records in the original database to keep it fast.
I don't want to manually save the values in a variable because there are hundreds of fields. So I want to go to the first field, save the field name and value and then go over to the other database and save the data. Then run a 'Go to Next Field' and loop through all the fields.
This works perfectly, but here is the problem: When a field is a calculation you cannot tab into it and therefore 'Go to Next Field' doesn't work. It skips it.
I thought of doing a 'Go to Object' but then I need to name all the objects, and I can't find a script to name objects.
Can anyone out there think of a solution?
Thanks!
This is one of those problems where I always found it easier to do an export/import.
Export all the data you want from the one database, and then import it into the other database. All you need to do is:
Manually specify which fields you want to copy
Map the data from the export to the right fields in the new database/table
You can even write a script to do these things for you.
There are several ways to achieve this.
To make a "history file", I have found there are several cases out there, so let's take a look.
CASE ONE
Single file: I just want to "keep" a very large file with historical data, because I need to erase all data in my main file.
In this case, you should create a "clone" table (in the same file or in another file; it is the same). Then change any calculation field to the type of the calculation result (number, text, date, and so on). Remove any auto-entered value or calculation from any field (auto number, auto creation date, etc.). You will end up with a "plain" table with no calculations or auto-entered data.
Then add a field to control duplicate data. If you have, let's say, an invoice number (unique) for each record, you can use it for this task. But if you do not have a field that identifies the record as unique, then you have to create one...
To create such a field, I recommend adding a new field on the clone table, setting it as an auto-entered calculation, and making a field combination that is unique... something like this: invoiceNumber & "-" & lineNumber & "-" & date.
On the clone table, make sure that validation is set to "Always", that empty values are not allowed, and that the value must be unique.
Once you set up the clone table, you can import your records, making sure that the auto-enter option is on. You can do it as many times as you like; new records will be added and no duplicates.
If you want, you can make a script to move all the current records to the historical table before deleting them.
NOTE:
This technique works fine when the data you are trying to keep does not change over time. That is, once the record is created, it has no changes.
CASE TWO
A historical table must be created but some fields are updated.
In the beginning I thought historical data never changes. In some cases I found this is not true, for example when I want to track historical invoices but at the same time keep track of whether they are paid or not...
In this case you may use the same technique as above, but instead of importing data, you must update data based on the "unique" field that identifies the record.
Hope this technique helps
FileMaker's FieldNames() function, along with GetField(), can give you a list of field names and then their values.