Hasura: treat empty strings as nulls - PostgreSQL

I'm using Hasura as the backend for a React app.
In my SQL table I set up a field, let's say "identification_number", as [text, unique, nullable].
So identification_number might be null, but if it is not null, it should be unique across the table.
But sometimes the frontend sends me data with an empty string as the value of this field:
{
name: "asdfomw",
year: "2010",
identification_number: "",
description: "some description"
}
In this situation Hasura successfully adds the record to the database.
But if an empty string is present in the payload for some other record, I get a "unique violation" error, because there would then be two records with the same identification_number value of "".
So I can think of two possible solutions to this problem:
Automatic conversion of empty strings to nulls.
Adding constraints to the Hasura schema that disallow empty strings for certain columns.
But I don't know how to implement either of them.

You can add constraints like that in permissions if you'd like. That might be a bit too hacky of a solution.
The real question here is why you are allowing the frontend to submit an empty string at all if the backend does not accept it. Fail early: in your frontend you can do the conversion or throw an error.
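For what it's worth, the conversion itself is only a couple of lines in any language. Here is a minimal sketch (in Python rather than the React frontend, with a hypothetical endpoint, table name, and mutation; auth headers omitted) that nulls out empty strings before the insert is sent to Hasura:

import requests

HASURA_URL = "https://your-hasura-instance/v1/graphql"   # hypothetical endpoint

# Hypothetical table "items"; Hasura would generate insert_items_one / items_insert_input for it.
MUTATION = """
mutation InsertItem($object: items_insert_input!) {
  insert_items_one(object: $object) { id }
}
"""

def empty_strings_to_null(payload: dict) -> dict:
    """Replace empty-string values with None so Postgres stores NULL instead of ''."""
    return {key: (None if value == "" else value) for key, value in payload.items()}

payload = {
    "name": "asdfomw",
    "year": "2010",
    "identification_number": "",          # becomes None, i.e. NULL in the database
    "description": "some description",
}

response = requests.post(
    HASURA_URL,
    json={"query": MUTATION, "variables": {"object": empty_strings_to_null(payload)}},
)
response.raise_for_status()
print(response.json())

The same one-line mapping works in the React code right before the mutation is fired, which is where it really belongs.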
If you must do the conversion server side, you can write an event trigger to listen for insert/updates to this table and convert those values as they are inserted. I imagine you might still have issues if you try to insert several in one transaction with the same empty string value. You can also run into race conditions but I don't know for certain...

Related

MongoDB Best Practice | Insert "null" fields

I have a question regarding best practices for inserting documents in MongoDB.
In my data source the key "myData2" can be null or a string. Should I add "myData2" as null to my database, or is it better to leave the value out if it is not defined? What is the "clean" way to deal with this?
[{
"myData1": "Stuff",
"myData2": null
}]
Since MongoDB permits fields to be added to documents at any time, most (production) applications are written to handle both of the following cases:
A new field is added to the code, but the existing data doesn't have it, and it needs to be added over time to the existing data either on demand or as a background process
A field is no longer used by the code but still contains values in the database
What would your application do if the field is missing, as opposed to if it's set to the null value? If it would do the same thing, then I suggest not setting fields to null values for two reasons:
It streamlines the code because you only need to handle one possibility (missing field) on the reading side, instead of two (field missing or null)
It requires less storage space in the database.
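If it helps to see the difference concretely, here is a small sketch (Python with pymongo against a local MongoDB; the database and collection names are made up) showing how a null field and a missing field behave in queries:

from pymongo import MongoClient

client = MongoClient()                       # assumes a local mongod; adjust the URI as needed
coll = client["demo"]["things"]              # hypothetical database and collection names
coll.delete_many({})

coll.insert_many([
    {"myData1": "Stuff", "myData2": None},   # field present, explicitly null
    {"myData1": "Other"},                    # field omitted entirely
])

# {"myData2": None} matches BOTH documents: explicit null and missing field.
print(coll.count_documents({"myData2": None}))                  # -> 2

# Only the document where the field is genuinely absent:
print(coll.count_documents({"myData2": {"$exists": False}}))    # -> 1

# Only the document where the field is present but set to null (BSON null type):
print(coll.count_documents({"myData2": {"$type": "null"}}))     # -> 1

So if your readers treat null and missing the same way, the simple {field: None} query already covers both cases, which is another argument for just leaving the field out.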

FileMaker - Getting Data From Another Table with Multiple Field Restrictions

I can't think of a better title, so feel free to make a suggestion once you understand the issue.
I was given a table to work with that I need to call from another table:
Name
Month
Type
Value
For each record in the main table I need to pull one "Value" that corresponds to it. What it is will be determined by all three of the other fields. So for example, if a record in the main table is:
Name: Google
Date: 3/17/2016
Type: M
Then I need to pull the value for the record in the other table where the Name is "Google", the month is "3", and the type is "M".
I was able to do this successfully (if slowly) using an ExecuteSQL command in a calculation field, with a ton of nested If statements for the names (I have yet to figure out how to feed the record's data directly into the ExecuteSQL statement; it breaks when I try). I would prefer to just grab the data directly. I can't switch over to the other layout because I need to see all of the records at once. I can't do a simple relationship because there isn't a real relationship; it's like there are three foreign keys working in tandem, and I only know how to use one to call the data.
Any idea on how to do this more simply?
Some ideas I've had but not sure if it will work:
Using a calculation field as a related field to dynamically point to the row by code (concatenate the three relevant fields into a type of code). Not sure if you can connect two tables by a calculation field.
Doing that same thing when calling the data into the table in the first place, adding a code to create a single primary key.
Here are my relationships: (screenshot of the relationship graph not included)
"I can't do a simple relationship because there isn't a real relationship, it's like there are three foreign keys working in tandem and I only know how to use one to call the data."
Simply define a relationship with three predicates - i.e. three pairs of match fields.

CouchDB query using :group_level and :key

I am using CouchDB 1.1.1 for my web app-- everything has worked great so far (saving/retrieving documents, saving/querying views, etc.) but I am stuck on querying a view for a particular key at a particular group level.
The map function in my view outputs keys with the following format: ["Thing 1", "Thing 2"]. I have a reduce function which works fine and outputs correct values for group level 1 (i.e. by "Thing 1") and group level 2 (i.e. by "Thing 2").
Now-- when I query CouchDB I CAN grab just one particular key when I set reduce=true (the default), group_level=2 (or group=true, which is the same in this case since I only have 2 levels) and key="desiredkeyhere". I can also query multiple keys with keys=["key1", "key2"].
HOWEVER-- I really want to be able to grab a particular key at group_level=1, and I cannot get that to work. It seems to return nothing, or if I use a POST request, it returns everything. Never just the one key that I need.
Here's a link to the CouchDB HTTP view API (querying options) that I've been using:
http://wiki.apache.org/couchdb/HTTP_view_API#Querying_Options
It contains the following sentence:
"Note: Multiple keys request to a reduce function only supports group=true and NO group_level (identical to group_level=exact). The resulting error is "Multi-key fetchs for reduce view must include group=true""
I'm not sure if this means that I cannot do what I have described above (grab a particular key for a particular group_level). That would seem like a huge problem with CouchDB, so I'm assuming I'm doing something wrong.
Any ideas? Thanks
I have hit this too. I am not sure if it is a bug, though.
Try using your startkey and endkey in the normal (2-item) format. You want a result for ["Thing 1", *] (obviously pseudocode, the star represents anything). Reducing with group_level=1 will boil all of that down to one row.
So, query basically everything in the Thing 1 "namespace," so to speak. Since the "smallest" value to collate is null and the "greatest" value is the object {}, those make good bookends for your range.
?group_level=1&startkey=["Thing 1",null]&endkey=["Thing 1",{}]
Does that give you the result you need?
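In case the URL encoding trips you up: startkey and endkey have to be valid JSON in the query string. Here is roughly how that query might look from code (a Python sketch with requests; the database, design document, and view names are made up):

import json
import requests

# Hypothetical database, design document, and view names.
VIEW_URL = "http://localhost:5984/mydb/_design/stats/_view/by_thing"

params = {
    "group_level": 1,
    # startkey/endkey must be JSON-encoded in the query string; null collates lowest
    # and {} highest, so this range covers every key that starts with "Thing 1".
    "startkey": json.dumps(["Thing 1", None]),
    "endkey": json.dumps(["Thing 1", {}]),
}

result = requests.get(VIEW_URL, params=params).json()
print(result["rows"])   # expect a single reduced row for the "Thing 1" group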

How do I make large text boxes use null instead of blank string when empty

Here is my issue. I am currently forced to use Access and I am writing some generic validation that I can add to forms.
It was all going well, catching empty fields in form_error based on the error "You tried to assign the Null value to a variable that is not a Variant data type".
All of my required varchar fields are NOT NULL.
Unfortunately, if a textbox has a control source bound to a large varchar DB field it behaves differently (I can't remember the size threshold, but I assume this behaviour difference is equivalent to Text vs. Memo in an Access table).
Basically, if you delete the contents of a small text box control it attempts to write null and the error is caught. All good.
If you do the same on a text box linked to a larger varchar or a memo database field, then it writes a blank string, which is considered valid.
I have confirmed this by changing the db Schema between varchar(50) and varchar(256), updating the linked table in Access and restarting Access for good measure.
I am hoping someone can point me to a property to set, or some tiny piece of generic code, that will make all text boxes behave the same with regard to writing NULL vs. an empty string when they are empty, regardless of the size of the DB field they are bound to.
Just to note that the box also behaves differently on insert vs. edit: if not filled in on insert, it does leave the DB entry as null.
That's pretty much the way you have to do it. You could set up a "Validation Rule" on each text field, but again that would require hunting down all the text controls.
You can make that job easier. If you check the Object Dependencies of the tables, you can get a list of all the forms (and queries, etc.) involved. Then you can be sure you have hit each one.

Insert record in table if it does not exist in iPhone app

I am obtaining a JSON array from a URL and inserting the data into a table. Since the contents of the URL are subject to change, I want to make a second connection to the URL, check for updates, and insert new records into my table using sqlite3.
The issues that I face are:
1) My table doesn't have a primary key.
2) The URL lists the changes for the same day. Hence, if I run my app multiple times, I get duplicate entries when I insert values into my database. I want a check so that duplicate entries for the same day are removed. The problem could be solved by adding a constraint, but since the URL itself contains duplicate values, I find that difficult.
The only way I can see to do it, if you have no primary key or anything unique to each record, is to go through the new entries as they come in and, for each one, check whether the exact same data already exists in the database. If it doesn't, add it; if it does, skip over it.
You could even create a unique key yourself for each entry by concatenating every column of the table. That way you can quickly check whether the entry already exists in the database.
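A rough sketch of that check-before-insert approach (Python's sqlite3 module; the table and column names are made up):

import sqlite3

conn = sqlite3.connect("updates.db")      # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS changes (name TEXT, day TEXT, value TEXT)")

def row_key(row):
    """Build a surrogate key by concatenating every column of the row."""
    return "|".join(str(col) for col in row)

existing = {row_key(r) for r in conn.execute("SELECT name, day, value FROM changes")}

new_rows = [("itemA", "2012-01-05", "42"),
            ("itemA", "2012-01-05", "42")]   # the feed itself contains duplicates

for row in new_rows:
    if row_key(row) not in existing:         # skip anything we have already stored
        conn.execute("INSERT INTO changes (name, day, value) VALUES (?, ?, ?)", row)
        existing.add(row_key(row))
conn.commit()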
I see two possibilities depending on your setup:
You have a column set up as UNIQUE (this can be through a PRIMARY KEY or not). In this case, you can use the ON CONFLICT clause:
http://www.sqlite.org/lang_conflict.html
If you find this construct a little confusing, you can instead use "INSERT OR REPLACE" or "INSERT OR IGNORE" as described here:
http://www.sqlite.org/lang_insert.html
You do not have a column set up as UNIQUE. In this case, you will need to SELECT first to check for duplicate data, and based on the result INSERT, UPDATE, or do nothing.
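For the first case, a minimal sketch of the UNIQUE plus INSERT OR IGNORE route (again Python's sqlite3, with made-up column names):

import sqlite3

conn = sqlite3.connect("updates.db")      # hypothetical database file

# A UNIQUE constraint over the columns that identify a record lets
# INSERT OR IGNORE silently skip rows that are already present.
conn.execute("""
    CREATE TABLE IF NOT EXISTS changes2 (
        name  TEXT,
        day   TEXT,
        value TEXT,
        UNIQUE (name, day, value)
    )
""")

rows_from_url = [("itemA", "2012-01-05", "42"),
                 ("itemA", "2012-01-05", "42")]   # duplicate within the same feed

conn.executemany(
    "INSERT OR IGNORE INTO changes2 (name, day, value) VALUES (?, ?, ?)",
    rows_from_url,
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM changes2").fetchone()[0])   # -> 1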
A more common & robust way to handle this is to associate a timestamp with each data item on the server. When your app interrogates the server it provides the timestamp corresponding to the last time it synced. The server then queries its database and returns all values that are timestamped later than the timestamp provided by the app. Then it also returns a new timestamp value for the app to store, to use on the next sync.
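If it helps, a bare-bones version of that sync loop might look like this (Python; the endpoint, parameter names, and response shape are all assumptions about a hypothetical server API):

import requests
import sqlite3

SYNC_URL = "https://example.com/api/changes"   # hypothetical sync endpoint

conn = sqlite3.connect("updates.db")
conn.execute("""CREATE TABLE IF NOT EXISTS changes2 (
                    name TEXT, day TEXT, value TEXT, UNIQUE (name, day, value))""")
conn.execute("CREATE TABLE IF NOT EXISTS sync_state (last_sync TEXT)")

row = conn.execute("SELECT last_sync FROM sync_state").fetchone()
last_sync = row[0] if row else "1970-01-01T00:00:00Z"   # never synced yet

# Ask the server only for items changed since the last sync.
body = requests.get(SYNC_URL, params={"since": last_sync}).json()

conn.executemany(
    "INSERT OR IGNORE INTO changes2 (name, day, value) VALUES (?, ?, ?)",
    [(item["name"], item["day"], item["value"]) for item in body["items"]],
)

# Store the timestamp the server handed back, to use on the next sync.
conn.execute("DELETE FROM sync_state")
conn.execute("INSERT INTO sync_state VALUES (?)", (body["server_timestamp"],))
conn.commit()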