Counting multiple values from one column in Tableau

I have a field from the data I am reading in that can contain multiple values. They are essentially tags.
For example, there could be a column called "persons responsible". This could read "Joe; Bob; Sue" or "Sue" for a given row.
Is it possible from within Tableau to read these in as separate categories? So that for this sample data:
Project | Persons
---------------------------
Zeta | Bob; Sue; Joe
Enne | Sue
Doble Ve | Bob
There could be a count of Bob (2), Sue (2), Joe (1)?
I am working on getting better data inputs, but I was wondering if there was a temporary solution at this level.

I would definitely work towards normalizing your schema.
In the meantime, there is a workaround that is almost reasonable if there is a small set of possible values for the tags (persons in your example).
If Bob, Sue and Joe are the only people in the system, you can use the contains() function to define a boolean calculated field for each person, e.g. Bob_Is_Responsible = contains(Persons, 'Bob'), and similar fields for Sue and Joe. Then you could use those as building blocks, possibly with sets, to break the data up in different ways.
Of course, this approach gets cumbersome fast if the number of tags grows, or if it is unconstrained. But you asked for a temporary solution ...

If the number of elements is small, you can write and union several queries, each one selecting the project and the nth element.
Ideally, you'd reshape your data to look like the table below, either in the database or with the union technique just mentioned (a sketch follows the table). Then you could count() or countd() the elements by project.
Project | Persons
---------------------------
Zeta | Bob
Zeta | Sue
Zeta | Joe
Enne | Sue
Doble Ve | Bob
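As a rough sketch of that union approach in MySQL syntax (the table name projects and its columns project and persons are hypothetical, and it assumes at most three ';'-separated names per row):
SELECT project, TRIM(SUBSTRING_INDEX(persons, ';', 1)) AS person
FROM projects
UNION ALL
SELECT project, TRIM(SUBSTRING_INDEX(SUBSTRING_INDEX(persons, ';', 2), ';', -1))
FROM projects
WHERE LENGTH(persons) - LENGTH(REPLACE(persons, ';', '')) >= 1  -- row has a 2nd element
UNION ALL
SELECT project, TRIM(SUBSTRING_INDEX(SUBSTRING_INDEX(persons, ';', 3), ';', -1))
FROM projects
WHERE LENGTH(persons) - LENGTH(REPLACE(persons, ';', '')) >= 2;  -- row has a 3rd element
Wrapping that in a subquery and grouping, e.g. SELECT person, COUNT(*) FROM (...) AS unpivoted GROUP BY person, then gives the Bob (2), Sue (2), Joe (1) counts.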

Related

How to do name similarity using clustering

I have a very big (super big) database of names.
The task is to find all the similar names (belonging to the same person) despite some differences like:
first name and last name inverted --> John Doe & Doe John
two or more names (the same ones) with light changes, maybe some letters misplaced or something else --> Jonh Doe & John Deo
two names with some letters added --> Johhn Doe & Johnn Doee & John Doe
names where another middle name was inserted --> John Blair Campbell Doe & John Blair Doe
And so on...
I tried using the classical methods like Soundex and Levenshtein, but the results were not very good; for example, Amine depi and Amina dope ended up in the same group although they are different,
and it would take very long to perform the task on just a fraction of the data; on my full database it would simply crash after a long time.
I also thought of using another approach like cosine similarity, which works on numerical values, so I looked for a way to represent the names numerically or convert them (something like word2vec). I actually tried using word2vec directly with the whole database of names as the text, but as expected it didn't work. I also tried encoding the names in a low-level way, as ASCII codes for example, but the results weren't good either.
So I thought of Clustering.
So I tried using DBSCAN. I found a way to use DBSCAN clustering with a custom distance metric and used Levenshtein distance. (If you ask why DBSCAN: because I don't know the number of groups of similar names in the database beforehand.)
I did get some results, but performance was very poor overall. It would either cluster only exact duplicates, John Doe and John Doe in the same cluster, or nothing at all, and it would even miss some exact duplicates.
Do you have a suggestion for performing this task, preferably using clustering or another smart approach? The database is very big (more than 500,000 lines and up to millions), so I cannot iterate a lot.
I am open to suggestions or propositions!
Especially if you have worked on something like this or similar before. Thank you in advance.
Try AgglomerativeClustering.
Sample code (with the import it needs):
from sklearn.cluster import AgglomerativeClustering
clustering = AgglomerativeClustering(
    n_clusters=None,         # let distance_threshold determine the number of clusters
    distance_threshold=0.3   # a smaller threshold means stricter similarity, hence more clusters
).fit(your_vectorized_name_list)
print(f'total clusters: {clustering.n_clusters_}')

Atomic values / divisibility to reach 1NF

After reading about normalization I am unsure of how to interpret the 1NF requirements.
According to Wikipedia, something is in first normal form if the "domain of each attribute contains only atomic, indivisible values".
My question is: who decides what is indivisible or not?
You may divide a date datatype into year, month, day, seconds, nanoseconds. You may as well divide an address down to its exact latitude coordinates. When can you really be sure that you have reached 1NF?
Would this table be considered 1NF?
fullName      | fullAddress
--------------+---------------------------------------------
Joe Zowesson  | 87th Victoria Street London EC96 1MB, 14584
Mason Hamburg | 47th Jeremy Street London EC26 1MB, 13584
Dedrik Terry  | 27th Burger Street London EC16 1MB, 17584
My interpretation here is that the value Joe Zowesson is indivisible in regards to the column fullName, and that zip code, street number and street name together are atomic in relation to the column fullAddress.
I am almost certain that I am wrong, but I cannot yet understand why.
The question is in regards to an upcoming exam, where I will need to "prove" which normal form something is currently in, something I find very hard depending on how you interpret the word atomic.
You have basically misunderstood the concept of 1NF. By "atomic value" it is meant that when you have a column for Name, you should not store any other values alongside it. In other words, the column intended for the Name should not store an ID, an Address or anything else together with the Name, so that when you query the Name column you get only names, not names mixed with IDs or Addresses. And the Name itself can be in any form you want, whether First name + Last name or First name + Last name + Middle name + Previous name.
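A minimal sketch of the contrast, with made-up table and column names (the first design crams several kinds of values into one column; the second keeps one kind of value per column):
-- Hypothetical names, for illustration only.
CREATE TABLE StudentPacked (Info VARCHAR(200));  -- e.g. '1, John Done, New York, US': not atomic in this sense
CREATE TABLE StudentAtomic (
    StudentId INT,
    FullName  VARCHAR(100),  -- only the name, in whatever form you chose
    Address   VARCHAR(100)   -- only the address
);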
The decision of whether you need separate columns for related data should be made during design. Let's suppose you have a table Student:
StudentId | FullName      | Address         | Average grade
----------+---------------+-----------------+--------------
1         | John Done     | New York, US    | 3.4
2         | Robert Bored  | New York, US    | 0
3         | Student LName | Dallas, US      | 1
4         | Another LName | Munich, Germany | 2
In this case, it means that you do not write queries and don't need the data based on first name or last name separately; you always need the name as a whole, for example:
SELECT FullName
FROM Student
WHERE StudentId = 1;
John Done
And when you do need first name and last name separately, you decompose them into separate columns, for example:
StudentId | FirstName | LastName | Address         | Average grade
----------+-----------+----------+-----------------+--------------
1         | John      | Done     | New York, US    | 3.4
2         | Robert    | Bored    | New York, US    | 0
3         | Student   | LName    | Dallas, US      | 1
4         | Another   | LName    | Munich, Germany | 2
And your queries might look like this:
SELECT LastName, AverageGrade
FROM Student
WHERE AverageGrade >= 1 AND FirstName != 'John';
The result will be:
| LastName | AverageGrade |
---------------------------
| LName | 1 |
| LName | 2 |
Or something like this maybe:
UPDATE Student
SET AverageGrade = 4
WHERE LastName = 'LName' AND FirstName != 'Student'
Basically, the decision depends on how you manipulate the data and in which form you need it.
To sum it up: whether the relation is in 1NF or not depends on what values you're trying to store in the table. As mentioned above, one column should store only one kind of value, e.g. ID, Address, Name, etc. And the decision of how your columns' values will look depends on the design and on how you NEED TO STORE the data. If you do not need to query first name, middle name, last name or second name separately, then you can just save all of them in one column FullName and it will still be in 1NF. But if you need them separately, you can store them in separate columns, and again it will still be in 1NF, though it might violate other rules.
Here is a tutorial you might find useful: https://www.studytonight.com/dbms/first-normal-form.php
Let the application, and how it will be used, guide you as to what data should be split further into additional fields (or not).
For example:
If, in your application, you are constantly splitting the first name from the last name so that you can say "Hi Joe" in correspondence, you should split fullName into two fields. Conversely, if you had two fields firstName and lastName and were always concatenating them so that you could correctly address an envelope, it would make more sense to store them as a single column in your table.
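For instance, a small sketch of the first scenario, using a hypothetical customers table with firstName and lastName columns: both the greeting and the envelope line are easy to derive when the parts are stored separately:
-- Hypothetical table customers(firstName, lastName).
SELECT CONCAT('Hi ', firstName)         AS greeting,
       CONCAT(firstName, ' ', lastName) AS envelopeName
FROM customers;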
In practice, it is not uncommon for a database to show some denormalization in the above example, given how common both scenarios are, but the risk is that the values get out of sync if someone updates firstName (for example) but doesn't update fullName.
Consider things like how you will force your users to follow a certain pattern if you decide to go with a single fullName column. How would you prevent "Smith, Joe" if your application needed "Joe Smith"?
Dates are another good example and again, whether you split the parts into separate columns depends on how they will be used.
A datetime field which indicates when a row was inserted probably doesn't need to be split out, but if you had many queries which were only interested in the year (for example), it might make sense to split it out (see the sketch below).
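For example, a sketch using a hypothetical orders(createdAt) table; the YEAR() function here is MySQL/T-SQL style:
-- Works, but the function call on every row is a hint that a separate
-- year column could pay off if this query pattern dominates (names are hypothetical).
SELECT COUNT(*) AS orders2019
FROM orders
WHERE YEAR(createdAt) = 2019;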
This only scratches the surface which is why this answer is more about how to think about the underlying problem. Yes normalizing your database is important for all kinds of reasons, but how far you go with it depends on how your data will be used at the end of the day.

Is this table in first normal form?

I am currently studying SQL normal forms.
Let's say I have the following table; the primary key is userid:
userid FirstName LastName Phone
1 John Smith 555-555
1 Tim Jack 432-213
2 Sarah Mit 454-541
3 Tom jones 987-125
The book I'm reading states that the following conditions must be true for a table to be in 1st normal form:
1. Rows contain data about an entity.
2. Columns contain data about attributes of the entities.
3. All entries in a column are of the same kind.
4. Each column has a unique name.
5. Cells of the table hold a single value.
6. The order of the columns is unimportant.
7. The order of the rows is unimportant.
8. No two rows may be identical.
9. A primary key must be assigned.
I'm not sure if my table violates rule 8, "No two rows may be identical", because the first two records in my table
1 John Smith 555-555
1 Tim Jack 432-213
share the same userid. Does that mean they are considered duplicate rows?
Or do duplicate records mean that every piece of data in the row has to be the same for the record to be considered a duplicate row, as in the example below?
1 John Smith 555-555
1 John Smith 555-555
EDIT1: Sorry for the confusion.
The question I was trying to ask is simple:
Is the table below in 1st normal form?
userid FirstName LastName Phone
1 John Smith 555-555
1 Tim Jack 432-213
2 Sarah Mit 454-541
3 Tom jones 987-125
Based on the 9 rules given in the textbook I think it is, but I wasn't sure whether rule 8, "No two rows may be identical",
was being violated by two records that use the same primary key.
The class textbook and prof aren't really clear on this subject, which is why I am asking this question.
Or do duplicate records mean that every piece of data in the row has to be the same for the record to be considered a duplicate row, as in the example below?
They mean the latter of your choices. Entire rows are what must be "identical". It's OK if two rows share the same values for one or more columns, as long as one or more columns differ.
That's because a relation holds a set of values that are tuples/rows/records, and a set is a collection of values that are all different.
But SQL and some relational algebras have different notions of "identical" in the case of NULLs compared to the relational model without NULLs; two rows that have NULL in the same column are considered different. You should read what your textbook says about it if you want to know exactly what it means. (Point 9 might be summarizing something involving NULLs, depending on the explanation in the book.)
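A quick, self-contained way to see SQL's special treatment of NULL (a standalone SELECT that runs as-is in most engines):
-- NULL = NULL evaluates to unknown, not true, so the ELSE branch is taken.
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not comparable' END AS null_check;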
PS
There's no single notion of what a relation is. There is no single notion of "identical". There is no single notion of 1NF.
Points 3-8 are better described as (poor) ways of restricting how to interpret a picture of a table to get a relation. Your textbook seems, strangely, to make "1NF" a property of such an interpretation of a picture of a table. Normally we simply define a relation to be a certain thing, so if you have one then it has to have the defined properties. Then "in 1NF" applies to a relation and either means "is a relation" and isn't used further, or it means certain further restrictions hold. A relation is a set of tuples/rows/records; in the kind of relation your points 3-8 describe, each tuple is a set of attribute/column/field name-value pairs, and the value paired with a name has to be of the type paired with that name in some schema/heading, which is itself a set of name-type pairs defined either as part of the relation or external to it.
Your textbook doesn't seem to present things clearly. Its definition of "1NF" is also idiosyncratic in that although points 3-8 are mathematical, points 1 and 2 are informal/heuristic (and 9 could be either or both).

Lookup in LibreOffice Calc to match partial string of the search criterion

I'm putting my banking information in a LibreOffice Calc spreadsheet.
I have columns as follows, that I imported from my bank account:
Date | USD | Description | Category
----------+-------+---------------------------------------------+----------
2/28/2019 | 44.00 | POS 0123 2345 123456 FRED-MEYER #02 | groceries
2/27/2019 | 2.50 | PANDA EXPRESS #123 TIGARD OR - 123546789012 | lunch
These descriptions are very unpredictable, they're whatever merchants want them to be, but they can tell me useful information about what I spent my money on. I've used this information to manually enter the categories. But I'm looking to automate this for common things.
So, I created a separate sheet with lookup values, like so:
Expression | Category
-----------+----------
FRED-MEYER | groceries
PANDA | lunch
I am looking for a formula in the "Category" column of the first sheet to automatically determine categories, based on the lookup table in the second sheet. (Obviously I don't plan for the lookup table to be exhaustive, but whatever I don't put in there, I can enter in the first sheet manually, thus overwriting the formulas.)
I had this working fine in Excel using a nifty construct of SEARCH and MATCH. (I don't even understand it anymore and I don't have Excel to check.) But since I'm now a Linux user, I'm trying to use LibreOffice, and I've not been able to make this work with formulas. I tried SEARCH, MATCH, LOOKUP, VLOOKUP, FIND, with and without regexes and with different options on/off. But no success so far.
I think this is very similar to this question, though it was only answered for Excel (I'm using Calc).

Transpose a database table to dynamic column count result

There are several transposing questions on Stack Overflow, but looking at a few, none of them is really similar to my problem. The main difference being: they have a predefined set of columns.
Let's say my table looks like this:
ID Name Value
---------------
1 Set Mitch
2 Get Jane
3 Push Dave
4 Pull Mike
5 Dummy John
...
I'd like to transpose it to become:
Set Get Push Pull Dummy ...
----------------------------------
Mitch Jane Dave Mike John ...
It looks like you're looking for a "dynamic pivot table". See the example here, or Google that term for more information:
http://www.kodyaz.com/articles/t-sql-pivot-tables-in-sql-server-tutorial-with-examples.aspx
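For reference, a static version of such a pivot in T-SQL might look like this, assuming a hypothetical table t(ID, Name, Value); the "dynamic" part of a dynamic pivot is building the IN (...) column list from the distinct Name values at runtime:
-- Hypothetical table t(ID, Name, Value); MAX() just picks the single value per name.
SELECT [Set], [Get], [Push], [Pull], [Dummy]
FROM (SELECT Name, Value FROM t) AS src
PIVOT (MAX(Value) FOR Name IN ([Set], [Get], [Push], [Pull], [Dummy])) AS pvt;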
Do you need to do this in SQL? It seems pretty trivial if you can just do it after you get the SELECT * query result into an array you can manipulate at will.