I am trying to clean up some local (draft) Mercurial history before I push to our master repository. One step I'd like to take is to squash (roll together) a "Fixup" changeset into its parent. My history looks like this:
o [draft] 14 X
|
| o [draft] 13 Y
|/
o [draft] 12 Fixup
|
o [draft] 11 Thing to Fix
|
o [draft] 10 Z
|
~
When done, it should look like this:
o [draft] 13 X
|
| o [draft] 12 Y
|/
o [draft] 11 Thing to Fix with Fixup
|
o [draft] 10 Z
|
~
Obviously rolling two commits together shouldn't create any conflicts, since the code of the descendants is untouched. However, I can't find a way to get histedit to allow me to edit that far back in my history, since it needs to edit "A changeset together with all its descendants."
Is this edit possible? How should it be done?
It's certainly possible. I don't know how to do it easily, though.
The trick is to turn this:
o [draft] 14 X
|
| o [draft] 13 Y
|/
o [draft] 12 Fixup
|
o [draft] 11 Thing to Fix
|
o [draft] 10 Z
|
~
into:
o [draft] 16 Fixup
|
o [draft] 15 Thing to Fix
|
| o [draft] 14 X
| |
| | o [draft] 13 Y
| |/
| o [draft] 12 Fixup
| |
| o [draft] 11 Thing to Fix
|/
o [draft] 10 Z
|
~
where 15 is a copy of 11 and 16 is a copy of 12. (Use hg graft to do this.) You can now use hg histedit to combine these two.
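A sketch of the commands, assuming the revision numbers shown above (yours will differ):

hg update 10
hg graft 11 12     # creates 15 and 16 as copies of 11 and 12
hg histedit 15     # in the editor, change "pick 16" to "roll 16" (or "fold 16" to keep its message)

With roll, the changes from 16 are folded into 15 and 16's commit message is discarded; fold keeps both messages.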
It's now easy to copy 13 and 14 atop the new 15 that has replaced the old 15-and-16:
o [draft] 17 X
|
| o [draft] 16 Y
|/
o [draft] 15 Thing to Fix with Fixup
|
| o [draft] 14 X
| |
| | o [draft] 13 Y
| |/
| o [draft] 12 Fixup
| |
| o [draft] 11 Thing to Fix
|/
o [draft] 10 Z
|
~
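A sketch of those grafts, again assuming the revision numbers shown:

hg update 15       # the combined "Thing to Fix with Fixup"
hg graft 13        # copies Y, becoming 16
hg update 15
hg graft 14        # copies X, becoming 17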
Now you can hg strip -r 11 to remove 11, 12, 13, and 14, leaving only the corrected commits. (Note that you can use rebase to simplify this a bit. The long form is mainly for illustration and ease of comprehension.)
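For example, instead of grafting 13 and 14 and then stripping everything, hg rebase can move each one directly (a sketch, assuming the bundled rebase extension is enabled):

hg rebase -r 13 -d 15    # move Y onto the combined commit
hg rebase -r 14 -d 15    # move X onto the combined commit
hg strip -r 11           # now only the old 11 and 12 remain to discard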
My structure looks like:
Y | M | P
2018 | 8 | A
2018 | 8 | A
2018 | 9 | A
2018 | 9 | B
I am seeking to achieve:
Y | M | A | B
2018 | 8 | 2 | 0
2018 | 9 | 1 | 1
You can use conditional aggregation for that:
select y, m,
       count(*) filter (where p = 'A') as a,
       count(*) filter (where p = 'B') as b
from the_table
group by y, m
order by y, m;
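If your database doesn't support the SQL-standard FILTER clause (PostgreSQL 9.4+ does), the same conditional aggregation can be written with CASE; a sketch against the same assumed table:

select y, m,
       sum(case when p = 'A' then 1 else 0 end) as a,
       sum(case when p = 'B' then 1 else 0 end) as b
from the_table
group by y, m
order by y, m;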
I'm trying to find a way to mark duplicated cases, similar to this question.
However, instead of counting occurrences of duplicated values, I'd like to mark them as 0 and 1, for duplicated and unique cases respectively. This is very similar to SPSS's identify duplicate cases function. For example if I have a dataset like:
Name State Gender
John TX M
Katniss DC F
Noah CA M
Katniss CA F
John SD M
Ariel FL F
And if I wanted to flag those with duplicated name, so the output would be something like this:
Name State Gender Dup
John TX M 1
Katniss DC F 1
Noah CA M 1
Katniss CA F 0
John SD M 0
Ariel FL F 1
As a bonus, I'd like a query that lets me control which row is picked as the unique case.
SELECT name, state, gender
     , NOT EXISTS (SELECT 1 FROM names nx
                   WHERE nx.name = na.name
                     AND nx.gender = na.gender
                     AND nx.ctid < na.ctid) AS nodup
FROM names na;
Explanation: [NOT] EXISTS (...) results in a boolean value (which can be converted to an integer). Casting the result to integer requires an extra pair of (), though:
SELECT name, state, gender
     , (NOT EXISTS (SELECT 1 FROM names nx
                    WHERE nx.name = na.name
                      AND nx.gender = na.gender
                      AND nx.ctid < na.ctid))::integer AS nodup
FROM names na;
Results (from the boolean version, then the integer version):
name | state | gender | nodup
---------+-------+--------+-------
John | TX | M | t
Katniss | DC | F | t
Noah | CA | M | t
Katniss | CA | F | f
John | SD | M | f
Ariel | FL | F | t
(6 rows)
name | state | gender | nodup
---------+-------+--------+-------
John | TX | M | 1
Katniss | DC | F | 1
Noah | CA | M | 1
Katniss | CA | F | 0
John | SD | M | 0
Ariel | FL | F | 1
(6 rows)
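For the bonus of controlling which row is treated as the unique case, a window function makes the ordering explicit; a sketch against the same table, where the ORDER BY decides which row wins (state is an arbitrary choice here):

SELECT name, state, gender,
       (row_number() OVER (PARTITION BY name, gender
                           ORDER BY state) = 1)::integer AS nodup
FROM names;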
I'm trying to join multiple tables in q:
a:               b:               c:
key | valuea     key | valueb     key | valuec
1   | xa         1   | xb         2   | xc
2   | ya         2   | yb         4   | wc
3   | za
The expected result is
key | valuea | valueb | valuec
1   | xa     | xb     |
2   | ya     | yb     | xc
3   | za     |        |
4   |        |        | wc
This can be achieved simply with
(a uj b) uj c
BUT does anyone know how I can do it in functional form?
I don't know how many tables I actually have.
Basically, I need a function that will go over the list and join any number of keyed tables together...
f:{[x] x uj priorx};
f[] each (a;b;c;d;e...)
Can anyone help? or suggest anything?
Thanks!
Another solution, particular to your problem, which is also a little faster than your solution:
a (,')/(b;c)
figured it out... ;)
f:{[r;t]r uj t};
f/[();(a;b;c)]
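An equivalent, slightly more idiomatic form applies uj with over directly, with no wrapper function or seed needed (same assumed tables):

(uj/)(a;b;c)

Over folds uj pairwise across the list, so this handles any number of keyed tables.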
I have a simple crosstab such as this:
Trans | Pants | Shirts |
| 2013 | 2014 | 2013 | 2014 |
---------------------------------------
Jan | 33 | 37 | 41 | 53 |
Feb | 31 | 33 | 38 | 43 |
Mar | 26 | 29 | 51 | 56 |
Pants and Shirts belong to the data item: Category
Years belong to the data item: Years
Months belong to the data item: Months
Trans (transactions) belongs to the data item: Trans
Here is what it looks like in Report Studio:
Trans | <#Category#> | <#Category#> |
| <#Years#> | <#Years#> | <#Years#> | <#Years#> |
-----------------------------------------------------------
<#Months#>| <#1234#> | <#1234#> | <#1234#> | <#1234#> |
I want to be able to calculate the variance of pants and shirts between the years. To get something like this:
Trans | Pants | Shirts |
| 2013 | 2014 | YOY Variance | 2013 | 2014 | YOY Variance |
---------------------------------------------------------------------
Jan | 33 | 37 | 12.12 | 41 | 53 | 29.27 |
Feb | 31 | 33 | 6.45 | 38 | 43 | 13.16 |
Mar | 26 | 29 | 11.54 | 51 | 56 | 9.80 |
I've tried inserting a data item for YOY Variance with the expression below just to see if I can even get the 2014 value, but cannot; for some odd reason it only returns the 2013 values:
Total([Trans] for maximum[Year],[Category],[Months])
Any ideas? Help?
(I'm assuming you don't have a DMR, i.e. dimensionally modeled relational, package.)
There is no easy/clean way to do this in Cognos. In your query, you'll have to build a calculation for each year in your output. So, something like this for 2013:
total(if ([Years] = 2013) then ([Trans]) else (0))
And basically the same for 2014, swapping the year:
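total(if ([Years] = 2014) then ([Trans]) else (0))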
Cut the Trans piece out of your crosstab. Then you'll nest those two calcs under your years. To get rid of all the zeroes or nulls, select the two columns. From the menu, select Data, Suppress, Suppress Columns Only.
Finally, you will drop a calc in next to your Years in the crosstab (not under them). The expression will be ([2014 trans] - [2013 trans]) / [2013 trans] (or whatever you end up naming your calcs); note that dividing by the 2013 value is what produces the sample percentages above. Format it as a percent, and you should be good to go.
Told you it was a pain!
I need to join two tables based on names. The problem is that names may be slightly misspelled in one of the databases. I have remedied this problem in the past using Stata and Python's fuzzy merging, where names are matched based on how closely similar they are, but I am wondering if this is possible to do in PostgreSQL.
For example, my data may be something similar to this:
Table A:
first_name_a | last_name_a | id_a
----------------------------------
William | Hartnell | 1
Matt | Smithe | 2
Paul | McGann | 3
David | Tennant | 4
Colin | Baker | 5
Table B:
first_name_b | last_name_b | id_b
----------------------------------
Matt | Smith | a
Peter | Davison | b
Dave | Tennant | c
Colin | Baker | d
Will | Hartnel | e
And in the end, I hope my results would look something like:
first_name_a | last_name_a | id_a | first_name_b | last_name_b | id_b
----------------------------------------------------------------------
William | Hartnell | 1 | Will | Hartnel | e
Matt | Smithe | 2 | Matt | Smith | a
Paul | McGann | 3 | | |
David | Tennant | 4 | Dave | Tennant | c
Colin | Baker | 5 | Colin | Baker | d
| | | Peter | Davison | b
My Sonic Screwdriver gives me some pseudo-code like this:
SELECT a.*, b.* FROM A a
JOIN B b
WHERE LEVENSHTEIN(first_name_a, first_name_b) IS LESS THAN 1
AND LEVENSHTEIN(last_name_a, last_name_b) IS LESS THAN 1
The DML you mention:
SELECT a.*, b.* FROM A a
JOIN B b
WHERE LEVENSHTEIN(first_name_a, first_name_b) IS LESS THAN 1
AND LEVENSHTEIN(last_name_a, last_name_b) IS LESS THAN 1
Looks correct; just bump up the 'fuzziness' (given 'IS LESS THAN 1', substitute 1 for the fuzziness level that you require).
See http://www.postgresql.org/docs/9.1/static/fuzzystrmatch.html for reference info on LEVENSHTEIN.
Done up as an SQLFiddle. Play with the thresholds, and look at some of the other matching functions mentioned in matching fuzzy strings.
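A runnable sketch of that join, assuming the fuzzystrmatch extension is installed and hypothetical table names table_a and table_b; the thresholds are guesses to tune (note 'William' vs 'Will' is already a distance of 3):

CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

SELECT a.*, b.*
FROM table_a a
FULL OUTER JOIN table_b b
   ON levenshtein(a.first_name_a, b.first_name_b) <= 3
  AND levenshtein(a.last_name_a, b.last_name_b) <= 1;

The FULL OUTER JOIN keeps unmatched rows from both sides, matching the expected output above.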