I'm having trouble determining the boolean equations for Q1 and Q2. What I did was enter the values into a Karnaugh map. But since the state diagram only consists of 3 states (00, 01 and 11), I'm a bit unsure of how to set up the Karnaugh map. I know what it would have looked like if it had four states (00, 01, 11 and 10).
This is what my Karnaugh map looks like; it's probably wrong, though.
Edit: Should I add the last row (10) to my Karnaugh map and just fill it with don't-cares?
I would say the K-map is OK as a draft, but I would suggest giving each of the output variables (the "new" Q_1 and Q_0 in the next step of the state diagram) its own K-map.
That way you can minimize the function for each of them separately.
I filled in the truth table this way:
+-----------------++-----------+
input variables || next state
+-----+-----+-----++-----+-----+
| Q_1 | Q_0 | x || Y_1 | Y_0 |
+-----+-----+-----++-----+-----+
| 0 | 0 | 0 || 0 | 1 |
| 0 | 0 | 1 || 0 | 0 |
| 0 | 1 | 0 || 0 | 0 |
| 0 | 1 | 1 || 1 | 1 |
+-----+-----+-----++-----+-----+
| 1 | 0 | 0 || X | X |
| 1 | 0 | 1 || X | X |
| 1 | 1 | 0 || 0 | 0 |
| 1 | 1 | 1 || 1 | 1 |
+-----+-----+-----++-----+-----+
And the output functions determining the next state (Y_1 as the "new" next Q_1, Y_0 as the "new" next Q_0) are:
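Reading the minimal covers off the two maps (treating the unused 10 state as don't-care), they should work out to:
Y_1 = x·Q_0
Y_0 = ¬x·¬Q_0 + x·Q_0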
The indexes in the Karnaugh maps correspond to the rows of the truth table because of the order of the variables.
Also notice that I used the 'don't-care' X output (for the 10 state) to advantage in minimizing the second function (Y_0, the next Q_0).
The machine should (theoretically) never enter the 'don't-care' state, so you should not worry about using it in the function.
Without circling the X, the Y_0 function would be longer: Y_0 = ¬x·¬Q_1·¬Q_0 + x·Q_0. With the X, it is only: Y_0 = ¬x·¬Q_0 + x·Q_0.
If anything seems unclear, please don't hesitate to ask in a comment.
I am trying to find cases where one type of error causes multiple sequential instances of a second type of error on a vehicle. For example, if there are two vehicles, 'a' and 'b', and vehicle a has an error of type 1 ('error_1') on day 0, it can cause errors of type 2 ('error_2') on days 1, 2, 3, and 4. I want to create a variable named cascading_error that shows every consecutive error_2 following an error_1. Note that in the case of vehicle b, it is possible to have an error_2 without a preceding error_1, in which case the value for cascading_error should be 0.
Here's what I've tried:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

vals = [('a',0,1,0),('a',1,0,1),('a',2,0,1),('a',3,0,1),('a',4,0,1),('b',0,0,0),('b',1,0,0),('b',2,0,1),('b',3,0,1)]
df = spark.createDataFrame(vals, ['vehicle','day','error_1','error_2'])
w = Window.partitionBy('vehicle').orderBy('day')
df = df.withColumn('cascading_error', F.lag(df.error_1).over(w) * df.error_2)
df = df.withColumn('cascading_error', F.when((F.lag(df.cascading_error).over(w)==1) & (df.error_2==1), F.lit(1)).otherwise(df.cascading_error))
df.show()
This is my result
| vehicle | day | error_1 | error_2 | cascading_error |
| ------- | --- | ------- | ------- | --------------- |
| a | 0 | 1 | 0 | null |
| a | 1 | 0 | 1 | 1 |
| a | 2 | 0 | 1 | 1 |
| a | 3 | 0 | 1 | 0 |
| a | 4 | 0 | 1 | 0 |
| b | 0 | 0 | 0 | null |
| b | 1 | 0 | 0 | 0 |
| b | 2 | 0 | 1 | 0 |
| b | 3 | 0 | 1 | 0 |
The code generates the correct cascading_error value on days 1 and 2 for vehicle a, but not on days 3 and 4, which should also be 1. It seems the logic of combining the lagged cascading_error with error_2 only propagates the flag by one extra row, not across the whole sequence.
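One idea that might make the flag cover the whole run rather than a single row is to group consecutive error_2 rows into streaks and mark an entire streak when its first row comes right after an error_1. Below is only a rough sketch of that, assuming a cascade means an unbroken run of error_2 days starting the day after the error_1 (the helper columns after_error_1, new_streak and streak_id are just scratch names introduced for the sketch):
from pyspark.sql import functions as F
from pyspark.sql.window import Window

w = Window.partitionBy('vehicle').orderBy('day')

# 1 on rows whose previous row had error_1 (a potential cascade start)
df = df.withColumn('after_error_1', F.coalesce(F.lag('error_1').over(w), F.lit(0)))

# mark rows where error_2 changes value, then cumulatively sum to label each run
df = df.withColumn('new_streak', F.when(F.lag('error_2').over(w) == F.col('error_2'), 0).otherwise(1))
df = df.withColumn('streak_id', F.sum('new_streak').over(w))

# a run of error_2 = 1 is cascading if its first row follows an error_1
w_streak = Window.partitionBy('vehicle', 'streak_id')
df = df.withColumn('cascading_error',
                   F.when((F.col('error_2') == 1) & (F.max('after_error_1').over(w_streak) == 1), 1)
                    .otherwise(0))
df = df.drop('after_error_1', 'new_streak', 'streak_id')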
I have a table like the following:
city | center | qty_out | qty_out %
----------------------------------------
A | 1 | 10 | .286
A | 2 | 2 | .057
A | 3 | 23 | .657
B | 1 | 40 | .8
B | 2 | 10 | .2
city-center is unique/the primary key.
If any center within a city has a qty_out % of less than 10% (.10), I want to ignore it and redistribute its % among the other centers of the city. So the result above would become
city | center | qty_out_%
----------------------------------------
A | 1 | .3145
A | 3 | .6855
B | 1 | .8
B | 2 | .2
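In the numbers above, the dropped center's share is split evenly among the remaining centers of its city: center 2's .057 is divided between the two remaining centers of city A, so .286 + .057/2 = .3145 and .657 + .057/2 = .6855.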
How can I go about this? I was thinking of using a window function to partition by, but I can't think of a window function to use with this:
from pyspark.sql import Window
from pyspark.sql.functions import col
column_list = ["city","center"]
w = Window.partitionBy([col(x) for x in column_list]).orderBy('qty_out_%')
I am not a statistician, so I cannot comment on the math; however, if I write the Spark code as literally as you described, it looks like this:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

w = Window.partitionBy('city')
# null for centers at or above 10%, the small qty_out % otherwise
redist_cond = F.when(F.col('qty_out %') < 0.1, F.col('qty_out %'))
# amount added to each remaining center = sum of the small shares / number of remaining centers
df = (df.withColumn('redist', F.sum(redist_cond).over(w) / (F.count('*').over(w) - F.count(redist_cond).over(w)))
        .fillna(0, subset=['redist'])           # cities with nothing to redistribute
        .filter(F.col('qty_out %') >= 0.1)      # drop the small centers
        .withColumn('qty_out %', redist_cond.otherwise(F.col('qty_out %') + F.col('redist')))
        .drop('redist'))
First of all I just want to state that I'm very new to GIS and that I'm probably not that great at the terminology yet, so bear with me.
I'm doing my internship right now and have been tasked with making a bike commuting potential analysis. The data I'm using is a road layer (for which I have already created a topology using pgr_createTopology) and two point layers, created from the centroids of 500x500 m squares, for where individuals live and work.
I have managed to do some sort of calculation between my two point layers using pgr_dijkstraCost that looks like this:
SELECT *
FROM pgr_dijkstraCost(
'SELECT gid AS id,
source,
target,
extlen / 1.3 / 60 AS cost
FROM roads',
array(select source FROM living),
array(select target FROM work),
directed := false);
The source and target values in the living and work test tables range from 1 to 50, since I initially thought I could do the calculation by matching rows where source and target have the same value. I now know that's not possible, since pgr_dijkstra won't allow calculations when they are the same. The result I'm getting right now is every possible combination, which I don't want. The final calculation will be for around 300,000 pairs.
So is there a way for me to only do the calculation on specified pairs and not for every possible combination?
Starting from version 3.1 there is this signature:
pgr_dijkstra(Edges SQL, Combinations SQL [, directed])
RETURNS SET OF (seq, path_seq, start_vid, end_vid, node, edge, cost, agg_cost)
OR EMPTY SET
Example usage (taken from the pgRouting documentation):
CREATE TABLE combinations_table (
source BIGINT,
target BIGINT
);
INSERT INTO combinations_table (source, target)
VALUES (1, 2), (1, 4), (2, 1), (2, 4), (2, 17);
SELECT * FROM pgr_dijkstra(
'SELECT id, source, target, cost, reverse_cost FROM edge_table',
'SELECT * FROM combinations_table',
FALSE
);
seq | path_seq | start_vid | end_vid | node | edge | cost | agg_cost
----+----------+-----------+---------+------+------+------+----------
1 | 1 | 1 | 2 | 1 | 1 | 1 | 0
2 | 2 | 1 | 2 | 2 | -1 | 0 | 1
3 | 1 | 1 | 4 | 1 | 1 | 1 | 0
4 | 2 | 1 | 4 | 2 | 2 | 1 | 1
5 | 3 | 1 | 4 | 3 | 3 | 1 | 2
6 | 4 | 1 | 4 | 4 | -1 | 0 | 3
7 | 1 | 2 | 1 | 2 | 1 | 1 | 0
8 | 2 | 2 | 1 | 1 | -1 | 0 | 1
9 | 1 | 2 | 4 | 2 | 2 | 1 | 0
10 | 2 | 2 | 4 | 3 | 3 | 1 | 1
11 | 3 | 2 | 4 | 4 | -1 | 0 | 2
(11 rows)
I'm struggling with the following problem in Matlab:
I’ve got a table containing a few column vectors: Day, Name, Result
My goal is to create another column vector (New vector) that shows me in each row the result of the previous day for the corresponding name.
| Day | Name | Result | New Vector |
|-----|------|--------|------------|
| 1 | A | 1.2 | 0 |
| 1 | C | 0.9 | 0 |
| 1 | B | 0.7 | 0 |
| 1 | D | 1.1 | 0 |
| 2 | B | 1 | 0.7 |
| 2 | A | 1.5 | 1.2 |
| 2 | C | 1.4 | 0.9 |
| 2 | D | 0.9 | 1.1 |
| 3 | B | 1.1 | 1 |
| 3 | C | 1.3 | 1.4 |
| 3 | A | 1 | 1.5 |
| 3 | D | 0.3 | 0.9 |
For example, row 5:
It is day 2 and the name is "B". The vector "Result" shows 1.0 in the same row, but what I want to show in my new vector is the result value of "B" from the previous day (day 1 in this example).
Since "B" appears on the previous day in row 3, the result value is 0.7, which should be shown in row 5 of my New Vector.
When the day is equal to 1, there are no values, since there is no previous day. Consequently I want to show 0 for each row on day 1.
I've already tried some combinations of unique to get the index and some if clauses but it did not work at all since I'm relatively new to Matlab and still very confused.
Is anybody able to help? Thank you so much!!
Your question is not well defined, but the code below solves your problem as it is stated.
This code works by internally sorting each Day's information in order of Name. This allows New Vector to be created easily by simply shifting and then inverting the sort operation.
close all; clear all; clc;
% A few column vectors
Day = [1;1;1;1;2;2;2;2;3;3;3;3];
Name = ['A';'C';'B';'D';'B';'A';'C';'D';'B';'C';'A';'D'];
Result = [1.2;0.9;0.7;1.1;1;1.5;1.4;0.9;1.1;1.3;1;0.3];
% Sort the table (so Name is in order for each Day)
[~,Index] = sort(max(Name)*Day + Name);
Day = Day(Index);
Name = Name(Index);
Result = Result(Index);
% Shift Result down by one day's worth of rows (4 names per day) to get sorted NewVector
NewVector = circshift(Result, 4);
NewVector(1:4) = 0;    % no previous day exists for Day 1
% Unsort NewVector, to get original table ordering
ReverseIndex(Index) = 1:length(Index);
NewVector = NewVector(ReverseIndex)
This prints the following result:
NewVector =
0
0
0
0
0.7000
1.2000
0.9000
1.1000
1.0000
1.4000
1.5000
0.9000
I have two tables "matches" and "opponents".
Matches
id | date
---+------------
1 | 2016-03-21 21:00:00
2 | 2016-03-22 09:00:00
...
Opponents
(score is null if not played)
id | match_id | team_id | score
---+----------+---------+------------
1 | 1 | 1 | 0
2 | 1 | 2 | 1
3 | 2 | 3 | 1
4 | 2 | 4 | 1
5 | 3 | 1 |
6 | 3 | 2 |
....
The goal is to create the following table
Team | won | tie | lost | total
-----+-----+-----+------+----------
2 | 1 | 0 | 0 | 1
3 | 0 | 1 | 0 | 1
4 | 0 | 1 | 0 | 1
1 | 0 | 0 | 1 | 1
Postgres v9.5
How do I do this? (I'm open to moving the "score" somewhere else in my model if that makes sense.)
Divide et impera my son
with teams as (
select distinct team_id from opponents
),
teamgames as (
select t.team_id, o.match_id, o.score as team_score, oo.score as opponent_score
from teams t
join opponents o on t.team_id = o.team_id
join opponents oo on (oo.match_id = o.match_id and oo.id != o.id)
),
rankgames as (
select
team_id,
case
when team_score > opponent_score then 1
else 0
end as win,
case
when team_score = opponent_score then 1
else 0
end as tie,
case
when team_score < opponent_score then 1
else 0
end as loss
from teamgames
),
rank as (
select
team_id, sum(win) as win, sum(tie) as tie, sum(loss) as loss,
sum( win * 3 + tie * 1 ) as score
from rankgames
group by team_id
order by score desc
)
select * from rank;
Note 1: You probably don't need the first "with" (the teams CTE), as you probably already have a table with one record per team.
Note 2: I think you could also achieve the same result with a single query, but this way the steps are clearer.