How do I simplify this to 3 literals/letters?
= LM' + LN + N'B
How would you simplify this boolean expression? I don't know which boolean laws I need to use. I tried, but I could only get it down to 4 literals, not 3.
I have also not been able to reduce your expression to three literals.
The Karnaugh map:
          BL
        00  01  11  10
      +---+---+---+---+
   00 | 0 | 1 | 1 | 1 |
      +---+---+---+---+
   01 | 0 | 1 | 1 | 0 |
MN    +---+---+---+---+
   11 | 0 | 1 | 1 | 0 |
      +---+---+---+---+
   10 | 0 | 0 | 1 | 1 |
      +---+---+---+---+
From looking at the map, you can see that three terms are needed to cover the nine minterms (depicted by "1") in the map. Each of the terms has two literals and covers four minterms. A term with just one literal would cover eight minterms, and no such block of eight 1-cells exists in this map, so the expression cannot be reduced to three literals.
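You can confirm the map by brute force. A quick Scala sketch (the encoding of the expression as bit operations is just for illustration):

// Truth table of L·M' + L·N + N'·B over all 16 assignments of B, L, M, N
def f(b: Int, l: Int, m: Int, n: Int): Int =
  ((l & ~m) | (l & n) | (~n & b)) & 1

val minterms = for {
  b <- 0 to 1; l <- 0 to 1; m <- 0 to 1; n <- 0 to 1
  if f(b, l, m, n) == 1
} yield (b, l, m, n)

println(minterms.size)   // 9, matching the nine 1-cells in the map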
So double precision takes 64 bits in MATLAB. I know that 0 or 1 takes one bit.
But when I type realmax('double') I get a really big number, 1.7977e+308. How can this number be stored in only 64 bits?
Would appreciate any clarification. Thanks.
This is not a MATLAB question. A 64-bit IEEE 754 double-precision binary floating-point number is laid out as follows:
bit layout:
| 0    | 1   2   3   ...   11 | 12  13  ...  63   |
| sign | exponent(E) (11 bit) | fraction (52 bit) |
The first bit is the sign:
0 => +
1 => -
The next 11 bits are used for the representation of the exponent. Read as an unsigned integer, they can encode values all the way up to 2^11 - 1 = 2047. Wait... we also need negative exponents for numbers of small magnitude! Therefore the so-called biased form is used, in which the value is represented as:
2^(E-1023)
where E is the unsigned integer value of the exponent bits. Say the exponent bits look like these examples:
Bit representation of the exponent:
Bit no:    | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
Example 1: | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0  | 1  |
Example 2: | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0  | 0  |
Example 3: | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1  | 1  |
Example 4: | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1  | 0  |
Example 5: | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1  | 1  |
Base 10 representation:
Example 1 => E1: 1
Example 2 => E2: 32
Example 3 => E3: 515
Example 4 => E4: 2046
Example 5 => E5: 2047 (**** Special case: Infinity or NaN ****)
Biased form:
Example 1 => 2^(E1-1023) = 2^-1022 <= The smallest possible exponent
Example 2 => 2^(E2-1023) = 2^-991
Example 3 => 2^(E3-1023) = 2^-508
Example 4 => 2^(E4-1023) = 2^+1023 <= The largest possible exponent
Example 5 => E5 = 2047 signals Infinity or NaN (special case)
When 0 < E < 2047, the number is a normalized number, represented by:
Number = (-1)^sign * 2^(E-1023) * (1.F)
but if E is 0, the number is a denormalized number, represented by:
Number = (-1)^sign * 2^(-1022) * (0.F)
Here F is the value determined by the fraction bits:
// Sum over i = 12, 13, ..., 63
F = sum(Bit(i) * 2^-(i-11))
where Bit(i) refers to the i-th bit of the number, so bit 12 contributes 2^-1, bit 13 contributes 2^-2, and so on down to bit 63, which contributes 2^-52. Examples:
Bit representation of the fraction:
Bit no:    | 12 | 13 | 14 | 15 | ...       | 62 | 63 |
Example 1: |  0 |  0 |  0 |  0 | 0 ... 0   |  0 |  1 |
Example 2: |  1 |  0 |  0 |  0 | 0 ... 0   |  0 |  0 |
Example 3: |  1 |  1 |  1 |  1 | 1 ... 1   |  1 |  1 |
F value assuming 0 < E < 2047:
Example 1 => 1.F1 = 1 + 2^-52
Example 2 => 1.F2 = 1 + 2^-1
Example 3 => 1.F3 = 1 + 1 - 2^-52
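You can sanity-check the extremes implied by these formulas in a couple of lines. A quick sketch (in Scala rather than MATLAB; these powers of two are computed exactly):

// Largest normalized number: E = 2046, all 52 fraction bits set
val largestNormal = math.pow(2, 1023) * (2 - math.pow(2, -52))
println(largestNormal == Double.MaxValue)   // true: 1.7976931348623157E308

// Smallest denormalized number: E = 0, only the last fraction bit set
val smallestDenormal = math.pow(2, -1022) * math.pow(2, -52)
println(smallestDenormal)                   // 4.9E-324 (== Double.MinPositiveValue)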
But when I type realmax('double') I get a really big number
1.7977e+308. How can this number be saved in only 64 bits?
realmax('double')'s binary representation is
| sign | exponent(E) (11 bit) | fraction (52 bit) |
0 11111111110 1111111111111111111111111111111111111111111111111111
Which is
+2^1023 x (1 + (1-2^-52)) = 1.79769313486232e+308
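You can also take the number apart programmatically. A small Scala sketch using the JVM's bit-level view of a double (Double.MaxValue is the same number as MATLAB's realmax('double')):

// Raw 64-bit pattern of the largest finite double
val bits = java.lang.Double.doubleToRawLongBits(Double.MaxValue)

val sign     = (bits >>> 63) & 0x1L           // 1 sign bit
val exponent = (bits >>> 52) & 0x7FFL         // 11 biased exponent bits
val fraction = bits & 0xFFFFFFFFFFFFFL        // 52 fraction bits

println(s"sign=$sign, E=$exponent, E-1023=${exponent - 1023}")
// prints: sign=0, E=2046, E-1023=1023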
I took some definitions and examples from this Wikipedia page.
I am learning to work with Scala and Spark; it is my first time using them. I have a structured Spark Dataset (org.apache.spark.sql.Dataset) in the following format:
Region | Id | RecId | Widget | Views | Clicks | CTR
1      | 1  | 101   | A      | 5     | 1      | 0.2
1      | 1  | 101   | B      | 10    | 4      | 0.4
1      | 1  | 101   | C      | 5     | 1      | 0.2
1      | 2  | 401   | A      | 5     | 1      | 0.2
1      | 2  | 401   | D      | 10    | 2      | 0.1
NOTE: CTR = Clicks/Views
I want to merge the rows regardless of Widget (i.e. grouping by Region, Id, RecId).
The expected output is as follows:
Region | Id | RecId | Views | Clicks | CTR
1      | 1  | 101   | 20    | 6      | 0.3
1      | 2  | 401   | 15    | 3      | 0.2
What I am getting is like below:
>>> ds.groupBy("Region","Id","RecId").sum().show()
Region | Id | RecId | sum(Views) | sum(Clicks) | sum(CTR)
1      | 1  | 101   | 20         | 6           | 0.8
1      | 2  | 401   | 15         | 3           | 0.3
I understand that it is summing up all the CTR values from the original rows, but I want to group by as explained and still get the correct CTR value. I also don't want the column names to change, which happens with my approach.
Is there any possible way of calculating it in such a manner? I also have #Purchases and ConversionRate (#Purchases/Views), and I want to do the same with those fields. Any leads will be appreciated.
You can calculate the CTR after the aggregation: sum the numerator and denominator columns first, then divide. Try the code below.
import org.apache.spark.sql.functions.{col, sum}

ds.groupBy("Region", "Id", "RecId")
  .agg(sum(col("Views")).as("Views"), sum(col("Clicks")).as("Clicks"))
  .withColumn("CTR", col("Clicks") / col("Views"))  // CTR = Clicks/Views, recomputed from the sums
  .show()
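The same pattern extends to the other ratio columns. A sketch, assuming the purchase count column is simply named Purchases (adjust to your actual column name):

ds.groupBy("Region", "Id", "RecId")
  .agg(
    sum(col("Views")).as("Views"),
    sum(col("Clicks")).as("Clicks"),
    sum(col("Purchases")).as("Purchases"))
  .withColumn("CTR", col("Clicks") / col("Views"))                // Clicks/Views
  .withColumn("ConversionRate", col("Purchases") / col("Views")) // #Purchases/Views
  .show()

The rule of thumb: never sum a ratio column directly; aggregate its numerator and denominator, then recompute the ratio.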
This is to confirm whether my design is good enough, or to get better ideas for solving the bus routing problem with times. Here is my solution, with the primary steps given below:
Have one edges table which represents all the edges (the source and target columns represent vertices, i.e. bus stops):
postgres=# select id, source, target, cost from busedges;
 id | source | target | cost
----+--------+--------+------
  1 |      1 |      2 |    1
  2 |      2 |      3 |    1
  3 |      3 |      4 |    1
  4 |      4 |      5 |    1
  5 |      1 |      7 |    1
  6 |      7 |      8 |    1
  7 |      1 |      6 |    1
  8 |      6 |      8 |    1
  9 |      9 |     10 |    1
 10 |     10 |     11 |    1
 11 |     11 |     12 |    1
 12 |     12 |     13 |    1
 13 |      9 |     15 |    1
 14 |     15 |     16 |    1
 15 |      9 |     14 |    1
 16 |     14 |     16 |    1
Have a table which represents bus details such as from time, to time, edge, etc.
NOTE: I have used an integer format for the "from" and "to" columns for faster results, since I can then do an integer comparison, but I can replace it with a better format if one is available.
postgres=# select id, "busedgeId", "busId", "from", "to" from busedgetimes;
 id | busedgeId | busId | from  | to
----+-----------+-------+-------+-------
 18 |         1 |     1 | 33000 | 33300
 19 |         2 |     1 | 33300 | 33600
 20 |         3 |     2 | 33900 | 34200
 21 |         4 |     2 | 34200 | 34800
 22 |         1 |     3 | 36000 | 36300
 23 |         2 |     3 | 36600 | 37200
 24 |         3 |     4 | 38400 | 38700
 25 |         4 |     4 | 38700 | 39540
Use Dijkstra's algorithm to find the shortest path.
Get the upcoming buses from the busedgetimes table, in earliest-first order, for the path found by Dijkstra's algorithm. => This leads to a somewhat complex query, though; a simplified sketch of the timetable lookup follows below.
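For illustration, the timetable lookup for a single edge of the computed path could start from something like this sketch (edge id 3 and departure time 34000 are example values; the full query would join against all edges of the path):

-- Next bus on a given edge departing at or after a given time
SELECT id, "busId", "from", "to"
FROM busedgetimes
WHERE "busedgeId" = 3
  AND "from" >= 34000
ORDER BY "from"
LIMIT 1;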
Can I make any improvements to this, or are there better designs?
Links to docs and articles related to this would be really helpful.
This is totally normal and the regular way to do it. See also the
PgRouting Example
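For the shortest-path step, pgRouting's pgr_dijkstra can run directly over your busedges table. A minimal sketch (stop ids 1 and 8 are example values; check the exact signature against your pgRouting version):

SELECT seq, node, edge, cost, agg_cost
FROM pgr_dijkstra(
    'SELECT id, source, target, cost FROM busedges',
    1,     -- start vertex (bus stop)
    8,     -- end vertex (bus stop)
    directed := false
);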
For a Karnaugh map of three or more variables, deciding which side each variable goes on can make the solution easier to spot and simpler. But how do you know which variables go on which side?
E.g. for variables x, y and z: you could have x and y as column headers and z as a row header, or you could have y and z as column headers and x as a row header, which would give two different tables.
For maps with up to four variables, it is a matter of taste which variable is put at which side. However, Mahoney maps, the extension of Karnaugh maps to five and more variables, do require a certain ordering along the sides.
Expression for the following example:
abcd!e + abc!de
Five-input Mahoney map, shown here as the equivalent pair of four-variable Karnaugh maps (one for c=0, one for c=1):
              de                         de
         00  01  11  10            00  01  11  10
abc     +---+---+---+---+     abc +---+---+---+---+
    000 | 0 | 0 | 0 | 0 |     001 | 0 | 0 | 0 | 0 |
        +---+---+---+---+         +---+---+---+---+
    010 | 0 | 0 | 0 | 0 |     011 | 0 | 0 | 0 | 0 |
        +---+---+---+---+         +---+---+---+---+
    110 | 0 | 0 | 0 | 0 |     111 | 0 | 1 | 0 | 1 |
        +---+---+---+---+         +---+---+---+---+
    100 | 0 | 0 | 0 | 0 |     101 | 0 | 0 | 0 | 0 |
        +---+---+---+---+         +---+---+---+---+
It is always possible to swap or reorder variables; the cell contents move accordingly, but the minimized result stays the same.
Here you can find a nice online tool to draw and simplify Karnaugh-Veitch/Mahoney maps.
For a given range, for instance
val range = (1 to 5).toArray
val ready = Array(2,4)
the missing values (not ready) are
val missing = range.toSet diff ready.toSet
Set(5, 1, 3)
The real use case involves thousands of range instances with (possibly) thousands of missing or not-ready values. Is there a more time-efficient approach in Scala?
The diff operation is implemented in Scala as a fold in which each element of the right operand is removed from the left collection. Let's assume that the left and right operands have m and n elements, respectively.
Calling toSet on an Array or Range object returns a HashTrieSet, which is a HashSet implementation and thus offers a remove operation with complexity of almost O(1). Building the two hash sets and performing the removals therefore gives an overall complexity of O(m + n) for the diff operation.
Considering now a different approach, we'll see that this is actually quite good. One could also solve the problem by sorting both ranges and then traversing them once in merge-sort fashion to eliminate all elements that occur in both ranges (see the sketch below). This has complexity O(max(m, n) * log(max(m, n))), because both ranges must be sorted first.
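For reference, a minimal sketch of that merge-style traversal (my own illustration; it assumes both arrays are already sorted in ascending order):

// Collect every element of `range` that does not occur in `ready`.
def missingSorted(range: Array[Int], ready: Array[Int]): Array[Int] = {
  val out = Array.newBuilder[Int]
  var i = 0
  var j = 0
  while (i < range.length) {
    if (j < ready.length && range(i) == ready(j)) { i += 1; j += 1 }  // present in both: skip
    else if (j < ready.length && range(i) > ready(j)) j += 1          // only in ready: advance
    else { out += range(i); i += 1 }                                  // missing from ready
  }
  out.result()
}

If the inputs happen to be already sorted, as literal ranges are, the sort step disappears and the traversal alone is O(m + n).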
Update
I ran some experiments to investigate whether the computation can be sped up by using mutable instead of immutable hash sets. The result, as the tables below show, is that it depends on the size ratio of range and ready.
It seems that immutable hash sets are more efficient as long as ready.size/range.size < 0.2. Above this ratio, mutable hash sets outperform the immutable ones.
For my experiments I set range = (1 to n), with n being the number of elements in range. For ready I selected a random subset of range with the respective number of elements. I repeated each run 20 times and summed the times measured with System.currentTimeMillis().
range.size == 100000 (times in ms):
+----------+-----------+---------+
| Fraction | Immutable | Mutable |
+----------+-----------+---------+
| 0.01     |        28 |     111 |
| 0.02     |        23 |     124 |
| 0.05     |        39 |     115 |
| 0.1      |       113 |     129 |
| 0.2      |       174 |     140 |
| 0.5      |       472 |     200 |
| 0.75     |       722 |     203 |
| 0.9      |       786 |     202 |
| 1.0      |       743 |     212 |
+----------+-----------+---------+
range.size == 500000 (times in ms):
+----------+-----------+---------+
| Fraction | Immutable | Mutable |
+----------+-----------+---------+
| 0.01     |        73 |     717 |
| 0.02     |       140 |     771 |
| 0.05     |       328 |     722 |
| 0.1      |       538 |     706 |
| 0.2      |      1053 |     836 |
| 0.5      |      2543 |    1149 |
| 0.75     |      3539 |    1260 |
| 0.9      |      4171 |    1305 |
| 1.0      |      4403 |    1522 |
+----------+-----------+---------+
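The mutable-set variant can look like the following sketch (one possible implementation; the exact benchmark code is not shown here):

import scala.collection.mutable

// Build one mutable set of all range values, then remove the ready ones in place.
def missingMutable(range: Array[Int], ready: Array[Int]): mutable.HashSet[Int] = {
  val result = mutable.HashSet.empty[Int]
  result ++= range   // O(m) insertions
  result --= ready   // O(n) in-place removals, no intermediate sets allocated
  result
}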