I was reading up on perlvar when I came across this:
The status returned by the last pipe close, backtick (``) command, successful call to wait() or waitpid(), or from the system() operator. This is just the 16-bit status word returned by the traditional Unix wait() system call (or else is made up to look like it). Thus, the exit value of the subprocess is really ($? >> 8), and $? & 127 gives which signal, if any, the process died from.
What is a 16-bit status word? What does the operation $? >> 8 signify? And how does a value like 512 get converted to 2 after I do $? >> 8 on it?
A 16-bit word is merely an amount of memory 16 bits in size. The word "word" implies the CPU can read it from memory with one instruction. (For example, I worked on a machine that had 64K bytes of memory, but the CPU could only access it as 32K 16-bit words.)
Interpreted as an unsigned integer, a 16-bit word would look like a number between 0 and 2^16 - 1 = 65,535, but it's not necessarily an unsigned integer. In the case of $?, it's used to store three unsigned integers.
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
| 15| 14| 13| 12| 11| 10|  9|  8|  7|  6|  5|  4|  3|  2|  1|  0|
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
\-------------------------------/\--/\--------------------------/
            Exit code            core     Signal that killed
            (0..255)             dumped        (0..127)
                                 (0..1)
If the OS wants to return "Exited with error code 2", it sets $? to (2 << 8) | (0 << 7) | (0 << 0).
+---+---+---+---+---+---+---+---+
|               2               |  << 8
+---+---+---+---+---+---+---+---+
+---+
| 0 |  << 7
+---+
+---+---+---+---+---+---+---+
|             0             |  << 0
+---+---+---+---+---+---+---+
=================================================================
+---+---+---+---+---+---+---+---+
|               2               |
+---+---+---+---+---+---+---+---+
                                +---+
                                | 0 |
                                +---+
                                    +---+---+---+---+---+---+---+
                                    |             0             |
                                    +---+---+---+---+---+---+---+
=================================================================
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
|               2               | 0 |             0             |
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
If the OS wants to return "killed by signal 5; core dumped", it sets $? to (0 << 8) | (1 << 7) | (5 << 0).
+---+---+---+---+---+---+---+---+
|               0               |  << 8
+---+---+---+---+---+---+---+---+
+---+
| 1 |  << 7
+---+
+---+---+---+---+---+---+---+
|             5             |  << 0
+---+---+---+---+---+---+---+
=================================================================
+---+---+---+---+---+---+---+---+
|               0               |
+---+---+---+---+---+---+---+---+
                                +---+
                                | 1 |
                                +---+
                                    +---+---+---+---+---+---+---+
                                    |             5             |
                                    +---+---+---+---+---+---+---+
=================================================================
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
|               0               | 1 |             5             |
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
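To make the composition concrete, here is the same arithmetic written out in Python (used purely for illustration; the variable names are mine, and Perl's << and | operators behave the same way on these values):

# Composing the two example status words from the diagrams above
# (illustration only; the bit operators work identically in Perl).
exited_with_2   = (2 << 8) | (0 << 7) | (0 << 0)   # 512
killed_by_sig_5 = (0 << 8) | (1 << 7) | (5 << 0)   # 133, with the core-dump bit set

print(exited_with_2, killed_by_sig_5)              # 512 133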
$? >> 8 is simply doing the reverse operation.
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
|               2               | 0 |             0             |
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
                              >> 8
=================================================================
+---+---+---+---+---+---+---+---+
|               2               |
+---+---+---+---+---+---+---+---+
It returns the number stored in bits 8 and up.
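In code, extracting all three fields looks like this (a minimal sketch in Python; the corresponding Perl expressions are $? >> 8, $? & 128, and $? & 127):

# Pulling the three fields back out of a 16-bit wait status.
status = 512                      # the value from the question

exit_code = status >> 8           # 2
core      = (status >> 7) & 1     # 0 or 1 (Perl's $? & 128 yields 0 or 128 instead)
signal    = status & 127          # 0 means "not killed by a signal"

print(exit_code, core, signal)    # 2 0 0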
A 16-bit value is a value that can be stored in sixteen binary bits. In hex that is 0 through FFFF, which is 0 through 65,535 in decimal.
A 16-bit status word is the value supplied by the Unix wait call that combines the process's exit status in the left-hand (most-significant) eight bits, a single bit indicating whether a core dump of the terminated process was produced, and the number of the signal, if any, that caused it to terminate in the right-hand (least-significant) seven bits.
Conventionally a zero value for the exit status indicates that the process has been successful, while a non-zero value indicates some sort of failure or informational state.
$? >> 8 indicates shifting the value to the right by eight bits, losing the right-hand (least-significant) eight bits (i.e. the core dump bit and signal number) and leaving the left-hand eight bits (the exit status). This is equivalent to dividing by 2^8, i.e. 256.
Since $? >> 8 is equivalent to dividing $? by 2^8 (256), if $? is 512 then $? >> 8 is 512 / 256, giving an exit status of 2.
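A quick check of that equivalence (plain Python arithmetic, shown only as an illustration):

# Right-shifting by 8 and integer division by 2**8 agree.
print(512 >> 8, 512 // 256)   # 2 2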
There are a lot of different Double initializers, but which of them should accept a String like this as a parameter?
Why does this compile (and return a value!)?
if let d = Double("0xabcdef10.abcdef10") {
print(d)
}
It prints:
2882400016.671111
No import required. Please, guys, check it in your environment...
UPDATE
Thank you, guys. My trouble is not understanding how to represent a Double value as a hexadecimal string.
I am confused by the seemingly inconsistent implementation of
protocol LosslessStringConvertible
init?(_:) REQUIRED
Instantiates an instance of the conforming type from a string representation.
Declaration
init?(_ description: String)
Both Double and Int conform to LosslessStringConvertible (Int indirectly, via conformance to FixedWidthInteger).
At the beginning I started with:
public func getValue<T: LosslessStringConvertible>(_ value: String) -> T {
guard let ret = T.init(value) else {
// should never happen
fatalError("failed to assign to \(T.self)")
}
return ret
}
// standard notation
let id: Int = getValue("15")
// hexadecimal notation
let ix: Int = getValue("0Xf") // Fatal error: failed to assign to Int
OK, that is an implementation detail, so I decided to implement it on my own, accepting strings in binary, octal, and hexadecimal notation.
Next I did the same for Double, and while testing it I found that when I forgot to import my LosslessStringConvertibleExt, my tests still passed for the expected Double values where the string was in hexadecimal notation as well as in decimal notation.
Thank you, LeoDabus, for your comment with the link to the docs, which I hadn't found before (yes, most likely I was blind; it saved me a few hours :-)).
I apologize to the rest of you for the "stupid" question.
From the documentation of Double's failable init:
A hexadecimal value contains the significand, either 0X or 0x, followed by a sequence of hexadecimal digits. The significand may include a decimal point.
let f = Double("0x1c.6") // f == 28.375
So 0xabcdef10.abcdef10 is interpreted as a hexadecimal number, given the 0x prefix.
The string was interpreted as fractional hex. Here's how the decimal value is calculated:
| Hex Digit | Decimal Value | Base Multiplier | Decimal Result           |
|-----------|---------------|-----------------|--------------------------|
| a         | 10            | x 16 ^ 7        | 2,684,354,560.0000000000 |
| b         | 11            | x 16 ^ 6        |   184,549,376.0000000000 |
| c         | 12            | x 16 ^ 5        |    12,582,912.0000000000 |
| d         | 13            | x 16 ^ 4        |       851,968.0000000000 |
| e         | 14            | x 16 ^ 3        |        57,344.0000000000 |
| f         | 15            | x 16 ^ 2        |         3,840.0000000000 |
| 1         | 1             | x 16 ^ 1        |            16.0000000000 |
| 0         | 0             | x 16 ^ 0        |             0.0000000000 |
| .         |               |                 |                          |
| a         | 10            | x 16 ^ -1       |             0.6250000000 |
| b         | 11            | x 16 ^ -2       |             0.0429687500 |
| c         | 12            | x 16 ^ -3       |             0.0029296875 |
| d         | 13            | x 16 ^ -4       |             0.0001983643 |
| e         | 14            | x 16 ^ -5       |             0.0000133514 |
| f         | 15            | x 16 ^ -6       |             0.0000008941 |
| 1         | 1             | x 16 ^ -7       |             0.0000000037 |
| 0         | 0             | x 16 ^ -8       |             0.0000000000 |
--------------------------------------------------------------------------
| Total     |               |                 | 2,882,400,016.6711110510 |
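If you want to cross-check the table without Swift at hand, Python's float.fromhex happens to accept the same hexadecimal floating-point notation; this is purely a sanity check, not part of the Swift answer:

# Parse the same string as a hexadecimal float and compare it with the
# value computed directly from the integer and fractional parts.
value = float.fromhex('0xabcdef10.abcdef10')
print(value)                              # 2882400016.671111
print(0xabcdef10 + 0xabcdef10 / 16**8)    # same value, computed by hand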
I have, in a pyspark dataframe, a column with values 1, -1 and 0, representing the events "startup", "shutdown" and "other" of an engine. I want to build a column with the state of the engine, with 1 when the engine is on and 0 when it's off, something like:
+---+-----+-----+
|seq|event|state|
+---+-----+-----+
| 1 |   1 |   1 |
| 2 |   0 |   1 |
| 3 |  -1 |   0 |
| 4 |   0 |   0 |
| 5 |   0 |   0 |
| 6 |   1 |   1 |
| 7 |  -1 |   0 |
+---+-----+-----+
If 1s and -1s are always alternated, this can easily be done with a window function such as
sum('event').over(Window.orderBy('seq'))
It can happen, though, that I have some spurious 1s or -1s, which I want to ignore if I am already in state 1 or 0, respectively. I want thus to be able to do something like:
+---+-----+-----+
|seq|event|state|
+---+-----+-----+
| 1 |   1 |   1 |
| 2 |   0 |   1 |
| 3 |   1 |   1 |
| 4 |  -1 |   0 |
| 5 |   0 |   0 |
| 6 |  -1 |   0 |
| 7 |   1 |   1 |
+---+-----+-----+
That would require a "saturated" sum function that never goes above 1 or below 0, or some other approach which I am not able to imagine at the moment.
Does somebody have any ideas?
You can achieve the desired result by using the last function to fill forward the most recent state change.
from pyspark.sql import functions as F
from pyspark.sql.window import Window
df = (spark.createDataFrame([
(1, 1),
(2, 0),
(3, -1),
(4, 0)
], ["seq", "event"]))
w = Window.orderBy('seq')
# Replace zeros with nulls so they can be ignored easily.
df = df.withColumn('helperCol', F.when(df.event != 0, df.event))
# Fill state changes forward in a new column.
df = df.withColumn('state', F.last(df.helperCol, ignorenulls=True).over(w))
# Replace -1 values with 0.
df = df.replace(-1, 0, ['state'])
df.show()
This produces:
+---+-----+---------+-----+
|seq|event|helperCol|state|
+---+-----+---------+-----+
|  1|    1|        1|    1|
|  2|    0|     null|    1|
|  3|   -1|       -1|    0|
|  4|    0|     null|    0|
+---+-----+---------+-----+
helperCol need not be added to the dataframe; I have only included it to make the process more readable.
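If you prefer not to carry the helper column at all, the same steps can be chained into one expression; this is just a sketch reusing the F, w, and original df defined above:

# Same idea in one go: 0-events become nulls inline, the last non-null
# event is filled forward, and -1 is mapped to 0.
result = (df
          .withColumn('state',
                      F.last(F.when(F.col('event') != 0, F.col('event')),
                             ignorenulls=True).over(w))
          .replace(-1, 0, ['state']))
result.show()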
How do I simplify this to 3 literals/letters?
= LM' + LN + N'B
How would you simplify this Boolean expression? I don't know which Boolean laws I need to use. I tried, but I couldn't get it down to 3 literals, only 4.
I have also not been able to reduce your expression to three literals.
The Karnaugh map:
              BL
         00  01  11  10
        +---+---+---+---+
     00 | 0 | 1 | 1 | 1 |
        +---+---+---+---+
     01 | 0 | 1 | 1 | 0 |
  MN    +---+---+---+---+
     11 | 0 | 1 | 1 | 0 |
        +---+---+---+---+
     10 | 0 | 0 | 1 | 1 |
        +---+---+---+---+
From looking at the map, you can see that three terms are needed to cover the nine minterms (depicted by "1") in the map. Each of the terms has two literals and covers four minterms. A term with just one literal would cover eight minterms.
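If you want to double-check the map itself, a tiny brute-force enumeration of the expression's truth table (Python, shown only as a verification aid) confirms the count:

# Count the assignments where LM' + LN + N'B is true, over all 16
# combinations of B, L, M, N -- it should match the nine 1-cells in the map.
from itertools import product

ones = [(B, L, M, N)
        for B, L, M, N in product((0, 1), repeat=4)
        if (L and not M) or (L and N) or ((not N) and B)]
print(len(ones))   # 9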
I've done some benchmarks and have results that I don't know how to explain.
The situation in a nutshell:
I have 2 classes doing the same (computation-heavy) thing with generic arrays; both of them use specialization (@specialized, later @spec). One class is defined like:
class A[@spec T] {
def a(d: Array[T], c: Whatever[T], ...) = ...
...
}
The second one (a singleton object):
object B {
def a[@spec T](d: Array[T], c: Whatever[T], ...) = ...
...
}
And in the second case, I get a huge performance hit. Why can this happen? (Note: at the moment I don't understand Java bytecode very well, nor Scala compiler internals.)
More details:
Full code is here: https://github.com/magicgoose/trashbox/tree/master/sorting_tests/src/magicgoose/sorting
This is a sorting algorithm ripped from Java, (almost) automagically converted to Scala, with comparison operations changed to generic ones to allow using custom comparisons with primitive types without boxing. Plus a simple benchmark (tests on different lengths, with JVM warmup and averaging).
The results look like this (the left column is the original Java Arrays.sort(int[])):
JavaSort | JavaSortGen$mcI$sp | JavaSortGenSingleton$mcI$sp
length 2 | time 0.00003ms | length 2 | time 0.00004ms | length 2 | time 0.00006ms
length 3 | time 0.00003ms | length 3 | time 0.00005ms | length 3 | time 0.00011ms
length 4 | time 0.00005ms | length 4 | time 0.00006ms | length 4 | time 0.00017ms
length 6 | time 0.00008ms | length 6 | time 0.00010ms | length 6 | time 0.00036ms
length 9 | time 0.00013ms | length 9 | time 0.00015ms | length 9 | time 0.00069ms
length 13 | time 0.00022ms | length 13 | time 0.00028ms | length 13 | time 0.00135ms
length 19 | time 0.00037ms | length 19 | time 0.00040ms | length 19 | time 0.00245ms
length 28 | time 0.00072ms | length 28 | time 0.00060ms | length 28 | time 0.00490ms
length 42 | time 0.00127ms | length 42 | time 0.00096ms | length 42 | time 0.01018ms
length 63 | time 0.00173ms | length 63 | time 0.00179ms | length 63 | time 0.01052ms
length 94 | time 0.00280ms | length 94 | time 0.00280ms | length 94 | time 0.01522ms
length 141 | time 0.00458ms | length 141 | time 0.00479ms | length 141 | time 0.02376ms
length 211 | time 0.00731ms | length 211 | time 0.00763ms | length 211 | time 0.03648ms
length 316 | time 0.01310ms | length 316 | time 0.01436ms | length 316 | time 0.06333ms
length 474 | time 0.02116ms | length 474 | time 0.02158ms | length 474 | time 0.09121ms
length 711 | time 0.03250ms | length 711 | time 0.03387ms | length 711 | time 0.14341ms
length 1066 | time 0.05099ms | length 1066 | time 0.05305ms | length 1066 | time 0.21971ms
length 1599 | time 0.08040ms | length 1599 | time 0.08349ms | length 1599 | time 0.33692ms
length 2398 | time 0.12971ms | length 2398 | time 0.13084ms | length 2398 | time 0.51396ms
length 3597 | time 0.20300ms | length 3597 | time 0.20893ms | length 3597 | time 0.79176ms
length 5395 | time 0.32087ms | length 5395 | time 0.32491ms | length 5395 | time 1.30021ms
The latter is the one defined inside an object, and it's awful (about 4 times slower).
Update 1
I've run the benchmark with and without the scalac -optimise option, and there are no noticeable differences (only slower compilation with -optimise).
It's just one of many bugs in specialization--I'm not sure whether this one's been reported on the bug tracker or not. If you throw an exception from your sort, you'll see that it calls the generic version not the specialized version of the second sort:
java.lang.Exception: Boom!
at magicgoose.sorting.DualPivotQuicksortGenSingleton$.magicgoose$sorting$DualPivotQuicksortGenSingleton$$sort(DualPivotQuicksortGenSingleton.scala:33)
at magicgoose.sorting.DualPivotQuicksortGenSingleton$.sort$mFc$sp(DualPivotQuicksortGenSingleton.scala:13)
Note how the top thing on the stack is DualPivotQuicksortGenSingleton$$sort(...) instead of ...sort$mFc$sp(...)? Bad compiler, bad!
As a workaround, you can wrap your private methods inside a final helper object, e.g.
def sort[@spec T](a: Array[T]) { Helper.sort(a, 0, a.length) }
private final object Helper {
def sort[@spec T](a: Array[T], i0: Int, i1: Int) { ... }
}
For whatever reason, the compiler then realizes that it ought to call the specialized variant. I haven't tested whether every specialized method that is called by another needs to be inside its own object; I'll leave that to you via the exception-throwing trick.
I'm writing a program to receive DNS messages and respond with an appropriate answer (a simple DNS server that only replies with A records).
But when I receive messages, they don't look like the format described in RFC 1035.
For example, this is a DNS query generated by nslookup:
'\xe1\x0c\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x06google\x03com\x00\x00\x01\x00\x01'
I know about the DNS header and its bit fields as defined in RFC 1035, but why is it in hex?
Should I consider these as hex numbers or as their UTF-8 equivalents?
Should my responses have this format too?
It's coming out as hexadecimal because it is a raw binary request, but you are presumably trying to print it out as a string. That is apparently how non-printable characters are displayed by whatever you are using to print it out; it escapes them as hex sequences.
You don't interpret this as "hex" or UTF-8 at all; you need to interpret the binary format described by the RFC. If you mention what language you're using, I (or someone else) might be able to describe to you how to handle data in a binary format like this.
Until then, let's take a look at RFC 1035 and see how to interpret your query by hand:
The header contains the following fields:
                                1  1  1  1  1  1
  0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                      ID                       |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|QR|   Opcode  |AA|TC|RD|RA|   Z    |   RCODE   |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    QDCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    ANCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    NSCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    ARCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
Each line there is 16 bits, so the whole header is 12 bytes. Let's fill our first 12 bytes in:
                                1  1  1  1  1  1
  0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                   ID = e10c                   |   \xe1 \x0c
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| 0| Opcode=0  | 0| 0| 1| 0|  Z=0   |  RCODE=0  |   \x01 \x00
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                  QDCOUNT = 1                  |   \x00 \x01
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                  ANCOUNT = 0                  |   \x00 \x00
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                  NSCOUNT = 0                  |   \x00 \x00
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                  ARCOUNT = 0                  |   \x00 \x00
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
So: we have a query with ID = e10c (just an arbitrary number so the client can match queries up with responses). QR = 0 indicates that it's a query, opcode = 0 indicates that it's a standard query, AA and TC are for responses, and RD = 1 indicates that recursion is desired (we are making a recursive query to our local nameserver). Z is reserved for future use, and RCODE is a response code for responses. QDCOUNT = 1 indicates that we have 1 question; the remaining counts give the number of each type of record in a response.
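If your program happens to be in Python (the query above looks like a Python bytes literal), the header can be unpacked with the struct module; this is only a sketch of the idea, not a complete parser:

# Unpack the 12-byte header as six big-endian 16-bit words:
# ID, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT.
import struct

query = (b'\xe1\x0c\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00'
         b'\x06google\x03com\x00\x00\x01\x00\x01')

ident, flags, qdcount, ancount, nscount, arcount = struct.unpack('!6H', query[:12])

qr     = (flags >> 15) & 0x1   # 0 = query, 1 = response
opcode = (flags >> 11) & 0xF   # 0 = standard query
rd     = (flags >>  8) & 0x1   # 1 = recursion desired

print(hex(ident), qr, opcode, rd, qdcount)   # 0xe10c 0 0 1 1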
Now we come to the questions. Each has the following format:
                                1  1  1  1  1  1
  0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                                               |
/                     QNAME                     /
/                                               /
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                     QTYPE                     |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                     QCLASS                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
QNAME is the name the query is about. The format is one octet indicating the length of a label, followed by the label, terminated by a label with 0 length.
So we have:
                                1  1  1  1  1  1
  0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|        LEN = 6        |           g           |   \x06 g
|           o           |           o           |   o    o
|           g           |           l           |   g    l
|           e           |        LEN = 3        |   e    \x03
|           c           |           o           |   c    o
|           m           |        LEN = 0        |   m    \x00
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                   QTYPE = 1                   |   \x00 \x01
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                  QCLASS = 1                   |   \x00 \x01
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
This indicates that the name we are looking up is google.com (sometimes written as google.com., with the empty label at the end made explicit). QTYPE = 1 is an A (IPv4 address) record. QCLASS = 1 is an IN (internet) query. So this is asking for the IPv4 address of google.com.
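Continuing the same Python sketch, the question section can be decoded by walking the length-prefixed labels; there is no error handling and no name compression here, so treat it as an illustration rather than a full parser:

# Read length-prefixed labels until a zero length, then QTYPE and QCLASS.
import struct

query = (b'\xe1\x0c\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00'
         b'\x06google\x03com\x00\x00\x01\x00\x01')

def parse_question(data, offset=12):
    labels = []
    while True:
        length = data[offset]              # indexing bytes gives an int in Python 3
        offset += 1
        if length == 0:
            break
        labels.append(data[offset:offset + length].decode('ascii'))
        offset += length
    qtype, qclass = struct.unpack('!HH', data[offset:offset + 4])
    return '.'.join(labels), qtype, qclass

print(parse_question(query))   # ('google.com', 1, 1)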