certutil.exe formatting the output in PowerShell

I am using certutil.exe to get a list of issued certificates and export them to a .txt file. The output comes back in rows even though I specify Format-Table with -AutoSize or -Wrap. Here is the command I've used; where am I going wrong?
certutil.exe -view -restrict "Disposition=20" -out "Request.RequestID,Request.RequesterName,certificatehash,Request.SubmittedWhen" | Format-Table -autosize
Here is the output (I've copied the first few rows here):
PS C:\Users\administrator.JLR> certutil.exe -view -restrict "Disposition=20" -out "Request.RequestID,Request.RequesterName,certificatehash,Request.SubmittedWhen"
Schema:
  Column Name                  Localized Name               Type   MaxLength
  ---------------------------- ---------------------------- ------ ---------
  Request.RequestID            Request ID                   Long   4 -- Indexed
  Request.RequesterName        Requester Name               String 2048 -- Indexed
  CertificateHash              Certificate Hash             String 128 -- Indexed
  Request.SubmittedWhen        Request Submission Date      Date   8 -- Indexed
Row 1:
  Request ID: 0x12 (18)
  Requester Name: "JLR\QA-ADFS1$"
  Certificate Hash: "13 24 46 54 fe 0b 6d 30 ff b8 b8 cd 55 e6 55 eb da 7d 15 bd"
  Request Submission Date: 8/4/2015 10:24 AM

Row 2:
  Request ID: 0x15 (21)
  Requester Name: "JLR\svcSAM"
  Certificate Hash: "94 2c 41 e0 2a d6 16 fc 74 bd ba 08 16 e8 a6 1c d2 4e 7e 12"
  Request Submission Date: 8/4/2015 2:13 PM

Row 3:
  Request ID: 0x17 (23)
  Requester Name: "JLR\Administrator"
  Certificate Hash: "24 b3 78 7b 69 db dc 6c d6 65 88 1c 7f b3 c6 ef 06 db 25 9b"
  Request Submission Date: 8/4/2015 2:35 PM

Row 4:
  Request ID: 0x25 (37)
  Requester Name: "JLR\paul.charles"
  Certificate Hash: "c4 00 20 df 0e 0b 65 29 b6 b3 c4 29 fa b7 a7 c6 c2 6b 44 c7"
  Request Submission Date: 8/7/2015 3:31 PM
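
The underlying issue is that certutil.exe is a native executable: PowerShell receives its output as an array of plain strings, so Format-Table has no object properties to arrange into columns. A minimal, untested sketch of one workaround is to parse each "Row N:" block of the output shown above into a PSCustomObject before formatting:

# certutil emits plain text, so build objects first and format those
$lines = certutil.exe -view -restrict "Disposition=20" `
    -out "Request.RequestID,Request.RequesterName,certificatehash,Request.SubmittedWhen"

$objects = @()
$row = $null
foreach ($line in $lines) {
    if ($line -match '^Row \d+:') {
        # A new "Row N:" header starts the next record
        if ($row) { $objects += [pscustomobject]$row }
        $row = [ordered]@{}
    }
    elseif ($null -ne $row -and $line -match '^\s*([^:]+):\s*(.*)$') {
        # "Label: value" lines become properties; strip surrounding quotes
        $row[$Matches[1].Trim()] = $Matches[2].Trim().Trim('"')
    }
}
if ($row) { $objects += [pscustomobject]$row }

$objects | Format-Table -AutoSize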

Related

PL/SQL block to proceed execution only if the row has one not null value

I have a table like below:
Group_id  Sub_Group  Number  Values
1111      5568       3A      e1*70/221*2000/8*1283*00
1111      16978      2B      e1*715/2007*692/2008A005100
1111      23547      1C      not applicable
1111      5567       2D      not applicable
1111      5567       3B      (null)
I want my block to proceed with the further steps only if exactly one Number has values other than 'not applicable'. E.g., like below:
Group_id  Sub_Group  Number  Values
1111      5568       3A      e1*70/221*2000/8*1283*00
1111      5568       3A      e1*70/221*2000/8*1283*01
1111      5568       3A      e1*70/221*2000/8*1283*02
1111      16978      2B      not applicable
1111      23547      1C      not applicable
1111      5567       2D      not applicable
1111      5567       3B      not applicable
In the example above, only Number 3A has values other than 'not applicable', so in this case I can proceed with the further steps and print its values. If more than one Number in a group has values other than 'not applicable', I need to print an error message instead. But a single Number can have multiple values other than 'not applicable'.
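
A minimal PL/SQL sketch of one way to gate this, assuming a table named my_groups with columns group_id, sub_group, number_col, and values_col (hypothetical names, since Number and Values collide with Oracle keywords), and treating NULL the same as 'not applicable':

DECLARE
  v_count NUMBER;
BEGIN
  -- Count the distinct Numbers in the group that carry a real value
  SELECT COUNT(DISTINCT number_col)
    INTO v_count
    FROM my_groups
   WHERE group_id = 1111
     AND values_col IS NOT NULL
     AND values_col <> 'not applicable';

  IF v_count = 1 THEN
    -- Exactly one Number has real values: proceed and print them
    FOR r IN (SELECT values_col
                FROM my_groups
               WHERE group_id = 1111
                 AND values_col IS NOT NULL
                 AND values_col <> 'not applicable') LOOP
      DBMS_OUTPUT.PUT_LINE(r.values_col);
    END LOOP;
  ELSE
    DBMS_OUTPUT.PUT_LINE('Error: more than one Number has real values');
  END IF;
END;
/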

Extracting all rows containing a specific datetime value (MATLAB)

I have a table which looks like this:
Entry number  Timestamp         Value1  Value2  Value3  Value4
5758          28-06-2018 16:30  34      63      34.2    60.9
5759          28-06-2018 17:00  33.5    58      34.9    58.4
5760          28-06-2018 17:30  33      53      35.2    58.5
5761          28-06-2018 18:00  33      63      35      57.9
5762          28-06-2018 18:30  33      61      34.6    58.9
5763          28-06-2018 19:00  33      59      34.1    59.4
5764          28-06-2018 19:30  28      89      33.5    64.2
5765          28-06-2018 20:00  28      89      33      66.1
5766          28-06-2018 20:30  28      83      32.5    67
5767          28-06-2018 21:00  29      89      32.2    68.4
Here '28-06-2018 16:30' is a single value under one column. So I have 6 columns:
Entry number, Timestamp, Value1, Value2, Value3, Value4
I want to extract all rows that belong to '28-06-2018', i.e. all data pertaining to that day. My table is too large to fit more data here, but the timestamps span a couple of months.
If the timestamps are stored as strings, you can index the table with contains. For example:
t=table([5758;5759],["28-06-2018 16:30";"29-06-2018 16:30"],[34;33.5],'VariableNames',{'Entry number','Timestamp','Value1'})
t =
2×3 table
Entry number Timestamp Value1
____________ __________________ ______
5758 "28-06-2018 16:30" 34
5759 "29-06-2018 16:30" 33.5
t(contains(t.('Timestamp'),"28-06"),:)
ans =
1×3 table
Entry number Timestamp Value1
____________ __________________ ______
5758 "28-06-2018 16:30" 34

Is there an analog of the startsWith function for a byte-array DataFrame column in Spark (Scala)?

I'm trying to read an HBase table with the Hortonworks Spark-HBase connector. The key of this table is of binary type, and I want to filter the data the same way an HBase scan does. The key has this structure: [8 bytes of UserId, 8 bytes of Timestamp, 4 bytes of some other data]. Is it possible to filter like this:
df.filter($"key".startsWith(Array(x,x,x,x,x)))
to find all rows corresponding to a particular UserId?
UPD:
The Hortonworks connector is actually not the important thing; you can get a DataFrame like this:
val df = Seq(
  java.nio.ByteBuffer.allocate(20).putLong(java.lang.Long.reverse(12345678L)).putLong(11111L).putInt(1234).array(),
  java.nio.ByteBuffer.allocate(20).putLong(java.lang.Long.reverse(87654321L)).putLong(222222L).putInt(2345).array(),
  java.nio.ByteBuffer.allocate(20).putLong(java.lang.Long.reverse(12345678L)).putLong(333333L).putInt(3456).array()
).toDF("key")
so output will be:
scala> df.show(false)
+-------------------------------------------------------------+
|key |
+-------------------------------------------------------------+
|[72 86 3D 00 00 00 00 00 00 00 00 00 00 00 2B 67 00 00 04 D2]|
|[8D FE 9C A0 00 00 00 00 00 00 00 00 00 03 64 0E 00 00 09 29]|
|[72 86 3D 00 00 00 00 00 00 00 00 00 00 05 16 15 00 00 0D 80]|
+-------------------------------------------------------------+
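
As far as I know, Column.startsWith is defined only for string columns, so there is no built-in prefix test for a BinaryType column. One hedged workaround is a small UDF that applies Scala's Array.startsWith to the key bytes; a sketch built on the example data above, assuming the UserId prefix is the first 8 bytes:

import org.apache.spark.sql.functions.{col, udf}

// 8-byte prefix for UserId 12345678, reversed exactly as in the sample data
val prefix: Array[Byte] = java.nio.ByteBuffer.allocate(8)
  .putLong(java.lang.Long.reverse(12345678L))
  .array()

// A BinaryType column arrives in a UDF as Array[Byte], which supports startsWith
val hasPrefix = udf((key: Array[Byte]) => key != null && key.startsWith(prefix))

df.filter(hasPrefix(col("key"))).show(false)

Note that a UDF filter runs after the rows have been read into Spark, so unlike a real HBase prefix scan it will not prune data at the source.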

Add null to the columns which are empty

I am trying to put null into the columns which are empty, using Perl or awk. To find the number of columns, the header's column count can be used. I tried a solution using Perl and some regex. The output looks very close to the desired output, but on careful inspection row number one shows incorrect data.
Input data:
id     name          type            foo-id  zoo-id  loo-id-1  moo-id-2
-----  ------------  --------------  ------  ------  --------  --------
0      zoo123        soozoo          8               31        32
51     zoo213        soozoo          48      51
52     asz123        soozoo          47      52
53     asw122        soozoo          1003    53
54     fff123        soozoo          68      54
55     sss123        soozoo          75      55
56     ssd123        soozoo          76      56
Expected Output:
0   zoo123  soozoo  8     null  31    32
51  zoo213  soozoo  48    51    null  null
52  asz123  soozoo  47    52    null  null
53  asw122  soozoo  1003  53    null  null
54  fff123  soozoo  68    54    null  null
55  sss123  soozoo  75    55    null  null
56  ssd123  soozoo  76    56    null  null
Very close to a solution, but row 1 shows incorrect data:
echo "$x"|grep -E '^[0-9]+' |perl -ne 'm/^([\d]+)(?:\s+([\w]+))?(?:\s+([-\w]+))?(?:\s+([\d]+))?(?:\s+([\d]+))?(?:\s+([\d]+))?(?:\s+([\d]+))?/;printf "%s %s %s %s %s %s %s\n", $1, $2//"null", $3//"null",$4//"null",$5//"null",$6//"null",$7//"null"' |column -t
0   zoo123  soozoo  8     31    32    null
51  zoo213  soozoo  48    51    null  null
52  asz123  soozoo  47    52    null  null
53  asw122  soozoo  1003  53    null  null
54  fff123  soozoo  68    54    null  null
55  sss123  soozoo  75    55    null  null
56  ssd123  soozoo  76    56    null  null
When you have a fixed-width string to parse, you'll find that unpack() is a better tool than regexes.
This should demonstrate how to do it. I'll leave it to you to convert it to a one-liner.
#!/usr/bin/perl

use strict;
use warnings;
use feature 'say';
use Data::Dumper;

while (<DATA>) {
    next if /^\D/; # Skip lines that don't start with a digit

    # I worked out the unpack() template by counting columns.
    my @data = map { /\S/ ? $_ : 'null' } unpack('A7A14A16A8A8A8A8', $_);

    say join ' ', @data;
}
__DATA__
id     name          type            foo-id  zoo-id  loo-id-1  moo-id-2
-----  ------------  --------------  ------  ------  --------  --------
0      zoo123        soozoo          8               31        32
51     zoo213        soozoo          48      51
52     asz123        soozoo          47      52
53     asw122        soozoo          1003    53
54     fff123        soozoo          68      54
55     sss123        soozoo          75      55
56     ssd123        soozoo          76      56
Output:
$ perl unpack | column -t
0   zoo123  soozoo  8     null  31    32
51  zoo213  soozoo  48    51    null  null
52  asz123  soozoo  47    52    null  null
53  asw122  soozoo  1003  53    null  null
54  fff123  soozoo  68    54    null  null
55  sss123  soozoo  75    55    null  null
56  ssd123  soozoo  76    56    null  null
With GNU awk:
awk 'NR>2{                  # ignore first and second row
  NF=7                      # fix number of columns
  for(i=1; i<=NF; i++)      # loop over all columns
    if($i ~ /^ *$/){        # if empty or only spaces
      $i="null"
    }
  print $0
}' FIELDWIDTHS='7 14 16 8 8 10 8' OFS='|' file | column -s '|' -t
As one line:
awk 'NR>2{NF=7; for(i=1;i<=NF;i++) if($i ~ /^ *$/){$i="null"} print $0}' FIELDWIDTHS='7 14 16 8 8 10 8' OFS='|' file | column -s '|' -t
Output:
0   zoo123  soozoo  8     null  31    32
51  zoo213  soozoo  48    51    null  null
52  asz123  soozoo  47    52    null  null
53  asw122  soozoo  1003  53    null  null
54  fff123  soozoo  68    54    null  null
55  sss123  soozoo  75    55    null  null
56  ssd123  soozoo  76    56    null  null
See: 8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR

DataFrame user-defined function not applied unless I change column name

I want to convert one of my DataFrame columns using an implicit function definition.
I have my own DataFrame type defined, which contains additional functions:
class NpDataFrame(df: DataFrame) {
  def bytes2String(colName: String): DataFrame = df
    .withColumn(colName + "_tmp", udf((x: Array[Byte]) => bytes2String(x)).apply(col(colName)))
    .drop(colName)
    .withColumnRenamed(colName + "_tmp", colName)
}
Then I define my implicit conversion class:
object NpDataFrameImplicits {
  implicit def toNpDataFrame(df: DataFrame): NpDataFrame = new NpDataFrame(df)
}
So finally, here is what I do in a small FunSuite unit test:
test("example: call to bytes2String") {
val df: DataFrame = ...
df.select("header.ID").show() // (1)
df.bytes2String("header.ID").withColumnRenamed("header.ID", "id").select("id").show() // (2)
df.bytes2String("header.ID").select("header.ID").show() // (3)
}
Show #1
+-------------------------------------------------+
|ID |
+-------------------------------------------------+
|[62 BF 58 0C 6C 59 48 9C 91 13 7B 97 E7 29 C0 2F]|
|[5C 54 49 07 00 24 40 F4 B3 0E E7 2C 03 B8 06 3C]|
|[5C 3E A2 21 01 D9 4C 1B 80 4E F9 92 1D 4A FE 26]|
|[08 C1 55 89 CE 0D 45 8C 87 0A 4A 04 90 2D 51 56]|
+-------------------------------------------------+
Show #2
+------------------------------------+
|id |
+------------------------------------+
|62bf580c-6c59-489c-9113-7b97e729c02f|
|5c544907-0024-40f4-b30e-e72c03b8063c|
|5c3ea221-01d9-4c1b-804e-f9921d4afe26|
|08c15589-ce0d-458c-870a-4a04902d5156|
+------------------------------------+
Show #3
+-------------------------------------------------+
|ID |
+-------------------------------------------------+
|[62 BF 58 0C 6C 59 48 9C 91 13 7B 97 E7 29 C0 2F]|
|[5C 54 49 07 00 24 40 F4 B3 0E E7 2C 03 B8 06 3C]|
|[5C 3E A2 21 01 D9 4C 1B 80 4E F9 92 1D 4A FE 26]|
|[08 C1 55 89 CE 0D 45 8C 87 0A 4A 04 90 2D 51 56]|
+-------------------------------------------------+
As you can see here, the third show (i.e. without the column renaming) does not work as expected and shows a non-converted ID column. Does anyone know why?
EDIT:
Output of df.select(col("header.ID") as "ID").bytes2String("ID").show():
+------------------------------------+
|ID |
+------------------------------------+
|62bf580c-6c59-489c-9113-7b97e729c02f|
|5c544907-0024-40f4-b30e-e72c03b8063c|
|5c3ea221-01d9-4c1b-804e-f9921d4afe26|
|08c15589-ce0d-458c-870a-4a04902d5156|
+------------------------------------+
Let me explain what is happening with your conversion function using the example below.
First, create a data frame:
val jsonString: String =
  """{
    |  "employee": {
    |    "id": 12345,
    |    "name": "krishnan"
    |  },
    |  "_id": 1
    |}""".stripMargin
val jsonRDD: RDD[String] = sc.parallelize(Seq(jsonString, jsonString))
val df: DataFrame = sparkSession.read.json(jsonRDD)
df.printSchema()
df.printSchema()
Output structure:
root
|-- _id: long (nullable = true)
|-- employee: struct (nullable = true)
| |-- id: long (nullable = true)
| |-- name: string (nullable = true)
A conversion function similar to yours:
def myConversion(myDf: DataFrame, colName: String): DataFrame = {
  myDf.withColumn(colName + "_tmp", udf((x: Long) => (x + 1).toString).apply(col(colName)))
    .drop(colName)
    .withColumnRenamed(colName + "_tmp", colName)
}
Scenario 1: convert a root-level field.
myConversion(df, "_id").show()
myConversion(df, "_id").select("_id").show()
Result:
+----------------+---+
| employee|_id|
+----------------+---+
|[12345,krishnan]| 2|
|[12345,krishnan]| 2|
+----------------+---+
+---+
|_id|
+---+
| 2|
| 2|
+---+
Scenario 2: convert employee.id. Here, withColumn and withColumnRenamed treat the dotted name employee.id as a plain top-level column name, so the data frame gains a new field at the root level while the nested employee.id stays untouched. This is the expected behavior.
myConversion(df, "employee.id").show()
myConversion(df, "employee.id").select("employee.id").show()
Result:
+---+----------------+-----------+
|_id| employee|employee.id|
+---+----------------+-----------+
| 1|[12345,krishnan]| 12346|
| 1|[12345,krishnan]| 12346|
+---+----------------+-----------+
+-----+
| id|
+-----+
|12345|
|12345|
+-----+
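A side note on top of this answer: after scenario 2 the converted value does exist, but only as a top-level column whose literal name contains a dot. Dots in select are parsed as struct navigation, which is why the nested, unconverted id comes back in the question's third show; quoting the name with backticks should address the literal column instead:

// Backticks make Spark treat "employee.id" as one literal column name
myConversion(df, "employee.id").select("`employee.id`").show()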
Scenario 3: select the inner field up to the root level and then perform the conversion.
myConversion(df.select("employee.id"), "id").show()
Result:
+-----+
| id|
+-----+
|12346|
|12346|
+-----+
My new conversion function takes a struct-type field, performs the conversion, and stores the result back into the struct-type field itself. Here we pass the employee field and convert only the id field, but the change is applied to the employee field at the root level.
case class Employee(id: String, name: String)

def myNewConversion(myDf: DataFrame, colName: String): DataFrame = {
  myDf.withColumn(colName + "_tmp", udf((row: Row) => Employee((row.getLong(0) + 1).toString, row.getString(1))).apply(col(colName)))
    .drop(colName)
    .withColumnRenamed(colName + "_tmp", colName)
}
Your scenario 3, using my conversion function:
myNewConversion(df, "employee").show()
myNewConversion(df, "employee").select("employee.id").show()
Result:
+---+----------------+
|_id| employee|
+---+----------------+
| 1|[12346,krishnan]|
| 1|[12346,krishnan]|
+---+----------------+
+-----+
| id|
+-----+
|12346|
|12346|
+-----+