Combining columns in Qlik

I have an Excel sheet that has two separate columns of data that I need combined for my table in Qlik. I know it would be easy to combine the two in the Excel document, but because this is a data feed I would prefer not to do any extra work. One column has the first name and the other the last. Thank you.
I tried to concat but that did not work.

It sounds like what you're trying to achieve is string concatenation, where you combine two or more strings together. It's the same concept for fields, as long as their values can be coerced to a string type. In Qlik, this is done very simply with the ampersand (&) operator. You can use it in three places:
Data Manager
If done in the Data Manager, you are creating a new calculated field. This can be done by editing the table that you're loading in from Excel, selecting Add field >> Calculated field, and then using an expression like this:
first_name & ' ' & last_name
What that expression does is take the field first_name, concatenate its values with a space (' '), and then concatenate the values of the last_name field.
So your new field, which I'm naming full_name here, would look like this:
first_name   last_name   full_name
Chris        Schaeuble   Chris Schaeuble
Stefan       Stoichev    Stefan Stoichev
Austin       Spivey      Austin Spivey
In the Data Manager this appears as a new calculated field; after you load the data, you will have a new field with the combined names.
Data Load Editor
Doing this in the Data Load Editor will also result in a new field, using the exact same & expression inside the LOAD statement.
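For example, a minimal load script sketch (the connection name, file path, and sheet name here are assumptions - swap in your own source) might look like this:
Customers:
LOAD
    first_name,
    last_name,
    first_name & ' ' & last_name AS full_name    // same concatenation as above
FROM [lib://DataFiles/Customers.xlsx]
(ooxml, embedded labels, table is Sheet1);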
Chart expression
The other option is to use this expression "on-the-fly" in a chart, without creating a new column in the app's data model like the first two options do. Just use that same expression from above in a chart expression and you'll get the same result.
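For instance, used as a calculated dimension (the leading equals sign tells Qlik to evaluate the expression rather than treat it as a field name):
=first_name & ' ' & last_name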

Related

How can I alias labels (using a query) in Grafana?

I'm using Grafana v9.3.2.2 on Azure Grafana
I have a line chart with labels of an ID. I also have an SQL table in which the IDs are mapped to simple strings. I want to alias the IDs in the labels to the strings from the SQL table.
I am trying to find a transformation to do the conversion.
There is a transformation called "rename by regex", but that would require me to hardcode each case. Is there something similar with which I don't have to hardcode each case?
There is something similar for variables - https://grafana.com/blog/2019/07/17/ask-us-anything-how-to-alias-dashboard-variables-in-grafana-in-sql/. But I don't see anything for transformations.
Use two queries in the panel - one for the data with IDs and a second one for mapping the IDs to strings. Then add an Outer join transformation and use the ID field to join the query results into one result.
You may also need an Organize fields transformation to rename or hide unwanted fields, so that only the right fields end up in the label, as in the sketch below.
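For reference, the end result in the panel JSON of a Grafana 9.x dashboard looks roughly like the sketch below; the transformation ids, option names, and the ID/name field names are assumptions from memory and may differ between versions, so configure the steps through the panel's Transform tab rather than copying this verbatim:
{
  "transformations": [
    { "id": "joinByField", "options": { "byField": "ID", "mode": "outer" } },
    { "id": "organize", "options": { "excludeByName": { "ID": true }, "renameByName": { "name": "Label" } } }
  ]
}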

Mapping Data Flows Dynamic Column Updates

I have a text input source. This has over 100 columns so I won't show all of them here - a cut-down view of the data would be:
CustomerNo   DOB          DOD    Status
01418495     01/02/1940   NULL   1
01418496     01/01/1930   NULL   1
The users want to be able to update/override any of these columns during processing by providing another input text file containing the PK (CustomerNo) and key/value pairs for the columns to be updated, e.g.:
CustomerNo   Variable   New Value
01418495     DOB        01/12/1941
01418496     DOD        01/01/2021
01418496     Status     0
Can this data be used to create dynamic columns somehow that update the customer records regardless of which columns they want to update? In the example above this would result in:
CustomerNo   DOB          DOD          Status
01418495     01/12/1941   NULL         1
01418496     01/01/1930   01/01/2021   0
I have looked at the documentation but don't see any examples of how something like this could be achieved. Thanks in advance for any advice.
You would use a technique similar to what I describe in this video: https://www.youtube.com/watch?v=q7W6J-DUuJY. What I've done is create a file of rules containing expressions and then apply those rules dynamically inside of my data flow.
The key to making this work is the expr() function, which dynamically evaluates the expression read from the external file.
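As a rough illustration rather than the exact setup from the video (the spaceless column names NewValue and ruleExpression are assumptions), after joining the override file to the main stream on CustomerNo, a Derived Column could either hard-code the override logic per column or evaluate a rule string loaded from the external file:
/* Hard-coded variant: keep the original DOB unless an override row targets it */
iif(Variable == 'DOB' && !isNull(NewValue), NewValue, DOB)
/* Rule-driven variant: ruleExpression holds a string such as
   "iif(Variable == 'DOB', NewValue, DOB)" read from the rules file,
   and expr() evaluates it at runtime */
expr(ruleExpression)
Note that a customer with several override rows (like 01418496 above) would first need the overrides pivoted, or the join aggregated, so there is one row per customer before the derived column runs.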

kdb - how to generate a HEX colour code as a string or symbol

I would like to create a column on an in-memory table that generates a colour HEX code based on a person's name (another column). A quick google didn't give much, so I wondered if any pointers could be given here.
e.g.
update colour: <some code and use username col as input> from table
In kdb+ you can run a function on a column via an update statement but there are slight differences depending on whether the function is vectorised or not. If vectorised:
update colour:{<some code>}[username] from table
update colour:someFunction[username] from table
If not vectorised, then an iterator such as each (') is required:
update colour:{<some code>}'[username] from table
update colour:someFunction'[username] from table
This function will generate a hex code from the first 3 characters of a string: it casts them to their ASCII codes, splits each code into a high and low nibble, and indexes "0123456789ABCDEF" with the interleaved nibbles.
q)hex:{a:i-16*j:(i:`int$3#x)div 16;"0123456789ABCDEF"raze(j-16*j div 16),'a}
q)hex"Hello"
"48656C"
q)update colour:hex'[username] from table
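Since the question mentions a symbol as an alternative, casting the resulting strings with `$ should also work (a quick sketch, not tested against your table):
q)update colour:`$hex'[username] from table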

How do I match variables from FreeRADIUS in PostgreSQL with the LIKE operator?

In a PostgreSQL query, executed by FreeRADIUS, I want to do something similar to (the table names and values are just examples):
SELECT name
FROM users
WHERE city LIKE '%blahblah%';
but there is a catch: the blahblah value is contained in a FreeRADIUS variable, represented with '%{variable-name}'. It expands to 'blahblah'.
Now my question is: How do I match the %{variable-name} variable to the value stored in the table using the LIKE operator?
I tried using
SELECT name
FROM users
WHERE city LIKE '%%{variable-name}%';
but it doesn't expand correctly like that and is obviously incorrect.
The final query I want to achieve is
...
WHERE city LIKE '%blahblah%';
so it matches the longer string containing 'blahblah' stored in the table, but I want the variable to expand dynamically into the correct query. Is there a way to do it?
Thanks!
Wild guess:
Assuming that FreeRADIUS is doing dumb substitution across the entire SQL string, with no attempt to parse literals, etc., before sending the SQL to PostgreSQL, you could use:
SELECT name
FROM users
WHERE city LIKE '%'||'%{variable-name}'||'%';
Edit: To avoid the warnings caused by FreeRADIUS not parsing cleverly enough, hide the %s as hex chars:
WHERE city LIKE E'\x25%{variable-name}\x25';
Note the leading E, which marks the string as subject to escape processing.
Alternatively,
SELECT name
FROM users
WHERE city LIKE '%%'||'%{variable-name}'||'%%';
is slightly cleaner, since %% is FreeRADIUS's escape sequence for a literal percent sign.

Split a comma delimited string to a table in BIRT

I'm creating a BIRT report and I need to split a comma delimited string from a dataset into multiple columns in a table.
The data looks like:
256,1400.031,-70.014,1,4.544,0.36,10,31,30.89999962,0
256,1400,-69.984,2,4.574,1.36,10,0,0,0
...
The data is stored this way in the database and I can't change it, but I need to be able to display it as a table. I'm new to BIRT - any ideas?
I think the easiest way is to create a computed column in the dataset for each field.
For example, if the merged field from the database is named "mergedData", you can split it with this kind of expression:
First field (computed column) expression:
var tempArray=row["mergedData"].split(",");
tempArray[0];
Second field:
var tempArray=row["mergedData"].split(",");
tempArray[1];
etc. (incrementing the array index for each subsequent field)
This depends on some variables that you did not mention.
If the dataset is stagnant (not updated much, or ever), open the data with Excel, converting it from .csv to .xls, and save it.
Then use the Excel file as a data source. Assuming you are using BIRT 4.1 or newer, this should work fine.
I don't think there is any SQL code that easily converts .csv.