I can copy the contents of one column to another using the SQL UPDATE statement easily. But I need to do it without deleting the content already there, so in essence I want to append one column to another without overwriting the other's original content.
I have a column called notes. Then, for some unknown reason, after several months I added another column called product_notes, and after 2 days realised that I have two sets of notes I urgently need to merge.
Usually when making a note we just append to any note already there via a form. I need to combine these two columns in the same way, keeping any note in the first column, e.g.
Column notes = Out of stock Pete 040618--- ordered 200 units Jade
050618 --- 200 units received Lila 080618
and
Column product_notes = 5 units left Dave 120618 --- unit 10724 unacceptable quality noted in list Dave 130618
I need to put them together with our spacer of --- without losing the first column's content so the result needs to be like this for my test case:
Column notes = Out of stock Pete 040618--- ordered 200 units Jade
050618 --- 200 units received Lila 080618 --- 5 units left Dave 120618 --- unit 10724 unacceptable quality noted in list Dave 130618
It's simple -
update table1 set notes = notes || '---' || product_notes;
The solution provided by @MaheshHViraktamath is fine, but the problem with simple string concatenation is that if any of the items being concatenated is NULL, the whole result becomes NULL.
Another potential issue is if either field is empty. In that case you might get a result of 'field a---' or '---field b'.
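For example (assuming PostgreSQL, where || propagates NULL), both problems are easy to reproduce with literal values:
SELECT 'Out of stock Pete 040618' || '---' || NULL AS notes;
-- returns NULL, so the existing note would be lost
SELECT '' || '---' || '5 units left Dave 120618' AS notes;
-- returns '---5 units left Dave 120618', with a stray leading separator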
To guard against the first scenario (without putting checks in the WHERE clause) you can use CONCAT_WS like so: CONCAT_WS('---', notes, product_notes). This will combine the two (or however many you put in there) fields with the first parameter, i.e. '---'. If either of those two fields is NULL, the separator won't be used, so you won't get a result with the separator prepended or appended.
There are two issues with the above: if both fields are NULL, the result isn't NULL but an empty string. To handle this case just put it in a NULLIF: NULLIF(CONCAT_WS('---', notes, product_notes), '') so that NULL is returned if both fields are NULL.
The other issue is if either field is empty, the separator will still be used. To guard against this scenario (and only you will know whether it's a scenario worth guarding against, or if this is even desired, based on your data), put each field in a NULLIF as well: NULLIF(CONCAT_WS('---', NULLIF(notes, ''), NULLIF(product_notes, '')), '')
As a result you get: UPDATE your_table SET notes = NULLIF(CONCAT_WS('---', NULLIF(notes, ''), NULLIF(product_notes, '')), '');
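Before running the UPDATE, the edge cases can be sanity-checked with literal values standing in for the two columns (again assuming PostgreSQL):
SELECT NULLIF(CONCAT_WS('---', NULLIF('Out of stock Pete 040618', ''), NULLIF(CAST(NULL AS text), '')), '');
-- 'Out of stock Pete 040618' (no trailing separator when product_notes is NULL)
SELECT NULLIF(CONCAT_WS('---', NULLIF('', ''), NULLIF('5 units left Dave 120618', '')), '');
-- '5 units left Dave 120618' (no leading separator when notes is empty)
SELECT NULLIF(CONCAT_WS('---', NULLIF('', ''), NULLIF(CAST(NULL AS text), '')), '');
-- NULL (both missing collapses to NULL rather than an empty string)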
I am trying to create a custom template fragment that builds a table of value properties. I started by creating a SQL query fragment that pulls all properties classified by a Value Type. Now I would like to pull in the default (initial) value assigned. I figured out that it's in the Description column of t_xref, with the property guid in the Client field, but I don't know how to write a query that will reliably parse the default value out, since the string length may be different depending on other values set. I tried using the template content selector first but I couldn't figure out how to filter to only value properties. I'm still using the default .qeax file but will be migrating to a Windows-based DBMS soon. Appreciate any help!
Tried using the content selector. Successfully built a query to get value properties but got stuck trying to join and query t_xref for default value.
Edited to add current query and image
Value Properties are block properties that are typed to Value Types. I'm using SysML.
This is my current query (I am no SQL expert!). I don't pull anything from t_xref yet, but I am pulling out only the value properties with this query:
SELECT property.ea_guid AS CLASSGUID, property.Object_Type AS CLASSTYPE, property.Name, property.Note as [Notes], classifier.Name AS TYPE
FROM t_object property
LEFT JOIN t_object classifier ON property.PDATA1 = classifier.ea_guid
LEFT JOIN t_object block ON property.ParentID = block.Object_ID
WHERE block.Object_ID = #OBJECTID# AND property.Object_Type = 'Part' AND classifier.Object_Type = 'DataType'
ORDER BY property.Name
I guess that Geert will come up with a more elaborate answer, but (assuming you are after the Run State) here are some details. The value for these Run States is stored in t_object.runstate as one of the crude Sparxian formats. You find something like
#VAR;Variable=v1;Value=4711;Op==;#ENDVAR;
where v1 is the name and 4711 the default in this example. As for how you can marry that with your template? Not the faintest idea :-/
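If it's useful, here is a rough sketch of pulling that value out with substring/charindex (untested, SQL Server syntax, and it assumes the runstate column holds exactly the #VAR;...;#ENDVAR; format above with a single variable per object):
select
    substring(
        t_object.runstate,
        charindex('Value=', t_object.runstate) + 6,                 -- start just after 'Value='
        charindex(';', t_object.runstate, charindex('Value=', t_object.runstate))
          - charindex('Value=', t_object.runstate) - 6              -- length up to the next ';'
    ) as RunStateValue
from t_object
where t_object.runstate like '%Value=%'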
I can't give a full answer to the original question as I can't reproduce your data, but I can provide an answer for the generic problem of "how to extract data through SQL from the name-value pair in t_xref".
Note, this is heavily dependent on the database used. The example below extracts fully qualified stereotype names from t_xref in SQL Server for custom profiles.
select
substring(
t_xref.Description, charindex('FQName=',t_xref.Description)+7,
charindex(';ENDSTEREO',t_xref.Description,charindex('FQName=',t_xref.Description))
-charindex('FQName=',t_xref.Description)-7
),
Description from t_xref where t_xref.Description like '%FQName%'
This works using:
substring(string, start, length)
The string is the xref description column, and the start and length are set using:
charindex(substring, string, [start position])
This finds the start and end tags within the xref description field, for the data you're trying to parse.
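To make the mechanics concrete, here is the same pattern run against a made-up literal (only the FQName=...;ENDSTEREO framing matters; the value itself is invented):
declare @s varchar(100) = 'StereoName;FQName=MyProfile::MyStereo;ENDSTEREO;';

select substring(
    @s,
    charindex('FQName=', @s) + 7,                                -- start just after 'FQName='
    charindex(';ENDSTEREO', @s) - charindex('FQName=', @s) - 7   -- length up to ';ENDSTEREO'
) as FQName;
-- returns MyProfile::MyStereo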
For your data, I imagine something like the below is the equivalent (I haven't tested this). It's then a case of combining it with the query you've already got.
select
substring(
    t_xref.Description,                                              -- the string to search in
    charindex('#VALU=', t_xref.Description,
        charindex('#NAME=default', t_xref.Description)) + 6,         -- the start position: the first #VALU= tag after #NAME=default, plus the 6 characters of the tag itself
    charindex('#ENDVALU;', t_xref.Description,
        charindex('#VALU=', t_xref.Description, charindex('#NAME=default', t_xref.Description)))
      - charindex('#VALU=', t_xref.Description,
        charindex('#NAME=default', t_xref.Description)) - 6          -- the length: the first #ENDVALU; tag after that #VALU=, minus the start position
),
Description
from t_xref
where t_xref.Description like '%#NAME=default%'                      -- filter out anything which doesn't contain this tag to avoid "out of range" index errors
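And, with the same caveat that it's untested, a sketch of how it might be combined with the query from the question (SQL Server syntax; it assumes the property guid really does sit in t_xref.Client, as described in the question):
SELECT property.ea_guid AS CLASSGUID, property.Object_Type AS CLASSTYPE, property.Name, property.Note AS [Notes], classifier.Name AS TYPE,
    substring(
        xref.Description,
        charindex('#VALU=', xref.Description, charindex('#NAME=default', xref.Description)) + 6,
        charindex('#ENDVALU;', xref.Description, charindex('#VALU=', xref.Description, charindex('#NAME=default', xref.Description)))
          - charindex('#VALU=', xref.Description, charindex('#NAME=default', xref.Description)) - 6
    ) AS [Default Value]
FROM t_object property
LEFT JOIN t_object classifier ON property.PDATA1 = classifier.ea_guid
LEFT JOIN t_object block ON property.ParentID = block.Object_ID
LEFT JOIN t_xref xref ON xref.Client = property.ea_guid
    AND xref.Description LIKE '%#NAME=default%'                      -- rows without a default stay NULL instead of erroring
WHERE block.Object_ID = #OBJECTID# AND property.Object_Type = 'Part' AND classifier.Object_Type = 'DataType'
ORDER BY property.Name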
I am working on a solution that involves merging two queries in Power Query to retrieve a single data table back to Excel. The first query is always populated but the other query comes from an ERP and might be empty (empty table) from time to time.
Appending the two queries involves making the header names the same in the two queries before the appending takes place. As the second query sometimes results in an empty table, the error arises in the steps when Power Query is modifying the header names in the second table (it cannot modify the header names as there are no headers).
"Error message: Expression.Error: The column 'PartMtl_Company' of the table wasn't found.
Details: PartMtl_Company" where the PartMtl_Company is the leftmost column in my table.
I am kind of thinking that I would need to evaluate whether the second table is empty and skip the renaming steps if that is the case. I assume merging the populated first table with an empty table would cause no problem and would only result in the first table. I have tried to look around for suitable M code but have not come across any.
I'm thinking you might be able to use Table.RowCount to solve this. Something along the lines of:
= if Table.RowCount(Table2) > 0 then...
You would modify the headers only if there is data in the second table. Same goes for the appending of the tables: you would only append if there is data in the second table, since you won't have renamed any headers otherwise.
Thank you Marc! That did the trick.
In the end, I wrote something along the lines of
= if Table.RowCount(Table2) > 0 then... (code that works on a non-empty table) ...else Table2
, which returns the empty table if it is empty to begin with. Appending the second table to the first table did not throw an error but returned only the first table, as planned.
I am new to SQL and Postgres and had a quick question. Right now I have 2 different tables: one with car info and one with partial car info. I would like to sort on car.vin OR partial_car.vin, depending on which exists, sending all nulls/empty strings to the end of the sort. Currently my ORDER BY statement looks like:
ORDER BY nullif(coalesce(car.vin, partial_car.partial_vin), '') asc nulls last limit 50 offset 0
My expectation for this is that coalesce will take the first non-null value and use that for sorting, or it will return null and send that to the end. I haven't been able to make sense of my results so far: there are null values being placed in between actual values, etc. If I make this change, coalesce(car.vin, ''), I again see it work properly. Anyone have any ideas as to why this is the behavior? Let me know if you need something more from me.
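For what it's worth, the expression itself does order literal values the way you describe (PostgreSQL, with made-up VINs standing in for the two columns):
SELECT a, b
FROM (VALUES ('ABC123', NULL), (NULL, 'PART99'), (NULL, NULL), ('', '')) AS t(a, b)
ORDER BY NULLIF(COALESCE(a, b), '') ASC NULLS LAST;
-- rows with 'ABC123' and 'PART99' come first; the NULL and empty-string rows sort last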
It was human error on my end. The object being sent to the client was not being populated properly with partial data. So the sorting was correct, but I was seeing blanks due to those values not being present.
I am working on a database that (hopefully) will end up using a primary key with both numbers and letters in the values to track lots of agricultural product. Due to the way in which the weighing of product takes place at more than one facility, I have no other option but to maintain the same base number but use letters in addition to this base number to denote split portions of each lot of product. The problem is, after I create record number 99, the number 100 suddenly floats up and sorts underneath 10. This makes it difficult to maintain consistency and forces me to replace this alphanumeric lot ID with a strictly numeric value in order to keep it sorted (for which I use "AutoNumber" as the data type). Either way, I need the alphanumeric lot ID, and so having 2 IDs for the same lot can be confusing for anyone inputting values into the form. Is there a way around this that I am just not seeing?
If you're using a query as the data source then you may try to sort it by the string converted to a number, something like
SELECT id, field1, field2, ..
FROM YourTable
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try Val function instead of CLng - it should not fail on non-numeric input
Why not properly format your key before saving? E.g. "0000099". You will avoid a costly conversion later.
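For example, a rough Access sketch (Lots, LotNumber and LotSuffix are invented names, just to show zero-padding with Format and appending the letter portion):
SELECT Format([LotNumber], "0000000") & [LotSuffix] AS LotID
FROM Lots
ORDER BY Format([LotNumber], "0000000") & [LotSuffix];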
Alternatively, you could use 2 fields as the composite PK. One with the Number (as Long) and one with the Location (as String).
I have some content in a file on which I must generate statistics, such as how many records are of type 1, type 2, etc. The number of types can change and is unknown to the code until the file arrives. In a SQL system, I can do this using COUNT and a GROUP BY clause. But I am not sure if I can do this using SYNCSORT or a COBOL program. Would anyone here have an idea on how I can implement a 'GROUP BY' type query on a file using SYNCSORT?
Sample Data:
TYPE001 SUBTYPE001 TYPE01-DESC
TYPE001 SUBTYPE002 TYPE01-DESC
TYPE001 SUBTYPE003 TYPE01-DESC
TYPE002 SUBTYPE001 TYPE02-DESC
TYPE002 SUBTYPE004 TYPE02-DESC
TYPE002 SUBTYPE008 TYPE02-DESC
I want to get information such as TYPE001 ==> 3 Records, TYPE002 ==> 3 Records. What the code doesn't know until runtime is the TYPENNN value.
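For reference, the SQL equivalent I have in mind is something like this (table and column names are invented just to illustrate):
SELECT record_type, COUNT(*) AS record_count
FROM input_records
GROUP BY record_type;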
You show data already in sequence, so there is no need to sort the data itself, which makes SUM FIELDS= with SORT a poor solution if anyone suggests it (plus code for the formatting).
MERGE with a single input file and SUM FIELDS= would be better, but still require the code for formatting.
The simplest way to produce output which may suit you is to use OUTFIL reporting functions:
OPTION COPY
OUTFIL NODETAIL,
REMOVECC,
SECTIONS=(1,7,
TRAILER3=(1,7,
' ==> ',
COUNT=(M10,LENGTH=3),
' Records'))
The NODETAIL says "remove all the data lines". The REMOVECC says "although it is a report, don't use printer-control characters on position one of the output records". The SECTIONS says "we're going to use control-breaks, and here they (it in this case) are". In this case, your control-field is 1,7. The TRAILER3 defines the output which will be produced at each control-break: COUNT here is the number of records in that particular break. M10 is an editing mask which will change leading zeros to blanks. The LENGTH gives a length to the output of COUNT; three is chosen from your sample data, with sub-types being unique and having three digits as the unique part of the data. Change it to whatever suits your actual data.
You've not been clear, and perhaps you want the output "floating" (3bb instead of bb3, where b represents a blank)? That would require more code...