Custom encoder for Data - Swift

I have a series of data as bytes, and I tried to encode it:
let encoder = JSONEncoder()
encoder.dataEncodingStrategy = .deferredToData
let encodedData = try encoder.encode(data)
let data = encodedData.compactMap { $0 }
My data looks like this:
[ 91,91,48, 44, 49, 57, 51, ..., 49, 44, 48, 93, 93]
It works, but the outcome is not what I expected (the numbers inside the data elements are different), so I tried to change the encoding strategy:
encoder64.dataEncodingStrategy = .base64
let data = encodedData.compactMap { $0 }
[91, 34, 65, 77, 72, ..., 65, 61, 61, 34, 93]
Then I get a different result, but still not what I expected.
There is also a custom encoding strategy; could you please give me an example of this custom strategy, so that I can test other forms and compare the results?
The output I expected looks like this:
[ 0, 193, 193, 193, 193, 72, 20, 193, ..., 255, 91, 0]
The 0 at the beginning and at the end is very important to me.
Thank you so much

The output is correct.
JSONEncoder creates a JSON string from the data, and compactMap maps each character of that string to its UInt8 value:
91 is [
91 is [
48 is 0
44 is ,
49 is 1
57 is 9
51 is 3
...
49 is 1
44 is ,
48 is 0
93 is ]
93 is ]
And keep the different representations in mind: 0x91 in hex is 145 in decimal.
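
For reference, here is a minimal sketch of the .custom strategy the question asks about, shown next to reading the raw bytes directly (which is what the expected output with the leading and trailing 0 really is). The payload value below is a made-up stand-in, not the asker's real data:

import Foundation

// Hypothetical stand-in payload; only the 0 at each end is taken from the question.
let payload = Data([0, 193, 193, 193, 193, 72, 20, 193, 255, 91, 0])

// If the goal is the raw byte values themselves, no JSONEncoder is needed:
let rawBytes = [UInt8](payload)
print(rawBytes) // [0, 193, 193, 193, 193, 72, 20, 193, 255, 91, 0]

// A .custom strategy still produces JSON *text*; this one writes the bytes
// as a JSON array of numbers, much like .deferredToData does:
let encoder = JSONEncoder()
encoder.dataEncodingStrategy = .custom { data, enc in
    var container = enc.unkeyedContainer()
    try container.encode(contentsOf: data)
}
do {
    let json = try encoder.encode([payload])
    print(String(data: json, encoding: .utf8)!) // [[0,193,193,193,193,72,20,193,255,91,0]]
} catch {
    print(error)
}

Whatever the strategy, encode() always returns UTF-8 JSON text, so mapping its bytes with compactMap will always give ASCII character codes, never the original byte values.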

Related

Convert the decimal data present in a column of a .csv file to ASCII

I need to convert the data of a column of a CSV file in PowerShell.
This data is often in decimal, and I would need to convert it to ASCII as in this example:
80, 77, 79, 32, 83, 50, 52, 50, 45, 86, 70, 67 -> ASCII -> PMO S242-VFC
The table is composed as follows:
Monitor    Modello
wkst1      80, 77, 79, 32, 83, 50, 52, 50, 45, 86, 70, 67
wkst2      V246HL
wkst3      V256IL
wkst4      65, 99, 101, 114, 32, 86, 50, 52, 54, 72, 76
This is the result:
Monitor    Modello
wkst1      PMO S242-VFC
wkst2      V246HL
wkst3      V256IL
wkst4      Acer V246HL
Thanks
You need to iterate over your CSV file rows and check each row; if the row is a byte-data row, convert it using [System.Text.Encoding]::ASCII.GetString.
So for example:
foreach ($row in Import-Csv c:\filepath.csv)
{
    if ($row.Modello -match '^[\d, ]+$') # just an example, needs more validation
    {
        # split the comma-separated decimals, cast them to bytes and decode as ASCII
        [System.Text.Encoding]::ASCII.GetString([byte[]]($row.Modello -split ',')) # save it somewhere
    }
}
As you did not provide any code or show your work, I'm just giving an example of what you can probably do.

How can I convert a compressed .gz file that I'm getting from a Bluetooth Low Energy device to an actual decompressed file in Swift 4.2, using gzip?

I am very new to Xcode and iOS. I have a device, let's call it Brains, that I'm connecting to via Bluetooth LE using an app I built with Swift 4 and Xcode 10 on my iPhone 5; call the app Body. Brains is similar to an Arduino board, but not exactly. I can connect and get all the data over BLE with no problems, until I tried to get a compressed file filled with JSON strings.
I am receiving the compressed bytes, but I don't know what to do next. How can I get the compressed file, decompress it and read the data inside?
I have tried many things, using the modules GzipSwift, DataCompression and SSZipArchive.
I have used gunzipped(), gunzip() and decompress(), but none of them seem to work.
I have read this thread: iOS :: How to decompress .gz file using GZIP Utility? It says that I have to take the whole compressed byte stream, convert it to NSData and then decompress it; the trouble is that it uses Objective-C and I can't seem to translate it into Swift 4.
I'm getting the bytes from the Bluetooth LE characteristic in a [UInt8] array, in this function:
func received_logs(data: [UInt8]) {
let data_array_example = [31, 139, 8, 8, 16, 225, 156, 92, 2, 255, 68, 97, 116, 97, 0, 181, 157, 107, 110, 220, 56, 16, 6, 175, 226, 3, 248, 71, 63, 73, 234, 44, 193, 222, 255, 26, 171, 30, 35, 192, 90, 20, 18, 121, 182, 11, 112, 16, 35, 48, 10, 31, 154, 197, 22, 135, 34, 227, 95, 191, 76, 244, 16, 183, 248, 252, 48, 137, 229, 38, 242, 249, 161, 231, 87, 156, 127, 207, 113, 126, 227, 159, 31, 231, 183, 110, 223, 255, 200, 239, 47, 203, 252, 253, 173, 255, 231, 159, 235, 235, 108, 105, 110, 101, 48, 47, 50, 48]
for byte in data_array_example {
    sourceString += String(byte) // getting all the bytes and converting them to a string to store in a variable
}
/******************************************************************/
let text = sourceString
do {
    try text.write(to: path!, atomically: false, encoding: String.Encoding.utf8) // dump the var into a txt file
}
catch {
    print("Failed writing")
}
/**********UPDATED**********/
var file_array: [UInt8] = []
let asc_array = Data(data_array_example.map { UInt8($0) }) // wrap the received bytes in a Data value
let decompressedData: Data
do {
    decompressedData = try asc_array.gunzipped()
    print("Decom: ", String(data: decompressedData, encoding: .utf8) ?? "")
}
catch {
    print(error) // gives me the "unknown compression method" error
}
}
I expect to see the uncompressed file's contents, but I only get:
GzipError(kind: Gzip.GzipError.Kind.data, message: "incorrect header check")
Maybe I'm just making it more complicated than it needs to be. Any help would be greatly appreciated!
Thank you very much :)
UPDATE:
I created a .gz file and used both the gunzipped() and gunzip() functions, and both of them worked.
UPDATE:
I tried to directly convert the data to NSData and then gunzip() it, but now I'm getting the error:
GzipError(kind: Gzip.GzipError.Kind.data, message: "unknown compression method")
The updated example data has a correct gzip header, and so would not be giving you an incorrect header check if you are feeding the data correctly to the gunzipper.
I solved my issue. It turns out I was miscounting the bytes and some of them were in the wrong order. Thank you guys for your help!
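
For anyone who hits the same errors, here is a hedged sketch (not the asker's actual fix) of the conversion chain discussed above: build a Data value straight from the [UInt8] array, check the gzip magic bytes, then hand it to GzipSwift's gunzipped(). The function name is made up for illustration:

import Foundation
import Gzip // the GzipSwift module mentioned in the question

// A minimal sketch, assuming the complete, correctly ordered gzip bytes
// have already been received over BLE.
func decompressReceivedLogs(_ bytes: [UInt8]) -> String? {
    let compressed = Data(bytes) // [UInt8] -> Data, no NSData round trip needed

    // A gzip stream starts with the magic bytes 0x1f 0x8b (31, 139 in decimal);
    // anything else fails with a header or compression-method error.
    guard compressed.starts(with: [0x1f, 0x8b]) else {
        print("Not a gzip stream")
        return nil
    }

    do {
        let decompressed = try compressed.gunzipped() // Data extension from GzipSwift
        return String(data: decompressed, encoding: .utf8)
    } catch {
        print(error)
        return nil
    }
}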

Why does geom_line not connect to the next point when used in gganimate?

When I have this data frame
library(ggplot2)
library(tibble) # tribble() comes from the tibble package
library(gganimate)
data <- tribble(
~year, ~num,
1950, 56,
1951, 59,
1952, 64,
1953, 76,
1954, 69,
1955, 74,
1956, 78,
1957, 98,
1958, 85,
1959, 88,
1960, 91,
1961, 87,
1962, 99,
1963, 104
)
and want to make an animated line plot with gganimate:
ggplot(data, aes(year, num))+geom_point()+geom_line()+transition_reveal(year, num)
I get a diagram in which the points and lines are drawn in the wrong sequence.
What is the reason for this and how can I correct it?
In transition_reveal(), the first argument (id) refers to the group aesthetic (which you don't have). I found that just using id = 1 works for a single time series.
The second argument (along) should be your x aesthetic (in your case, year).
Try:
ggplot(data, aes(year, num))+
geom_point()+
geom_line()+
transition_reveal(1, year)

ZXing truncating negative bytes

In ZXing I'm creating a string from binary data using the "ISO-8859-1" encoding, but somehow negative bytes in the data get truncated to byte 63 when reading the produced QR code.
Example: String before QR code (as bytes)
-78, 99, -86, 15, -123, 31, -11, -64, 77, -91, 26, -126, -68, 33
String read from QR code:
63, 99, 63, 15, 63, 31, 63, 63, 77, 63, 26, 63, 63, 33
How do I prevent that without using base64?
For some reason ZXing assembles the QR matrix with the correct data; it's the reading that truncates the bytes. I ended up sidestepping the problem by encoding my binary data as base64 and dealing with the increased message size.

How can I read a hex number with dlmread?

I'm trying to read a .csv file with Octave (I suppose it's equivalent in Matlab). One of the columns contains hexadecimal values identifying MAC addresses, but I'd like to have it parsed anyway; I don't mind if it's converted to decimal.
Is it possible to do this automatically with functions such as dlmread? Or do I have to create a custom function?
This is what the file looks like:
Timestamp, MAC, LastBsn, PRR, RSSI, ED, SQI, RxGain, PtxCoord, Channel: 26
759, 0x35c8cc, 127, 99, -307, 29, 237, 200, -32
834, 0x32d710, 183, 100, -300, 55, 248, 200, -32
901, 0x35c8cc, 227, 100, -300, 29, 238, 200, -32
979, 0x32d6a0, 22, 95, -336, 10, 171, 200, -32
987, 0x32d710, 27, 96, -328, 54, 249, 200, -32
1054, 0x35c8cc, 71, 92, -357, 30, 239, 200, -32
1133, 0x32d6a0, 122, 95, -336, 11, 188, 200, -32
I can accept any output value for the (truncated) MAC addresses, from sequence numbers (1-6) to decimal conversion of the value (e.g. 0x35c8cc -> 3524812).
My current workaround is to use a text editor to manually replace the MAC addresses with decimal numbers, but an automated solution would be handy.
The functions dlmread and csvread only handle numeric files. You could use textscan (which is also present in Matlab), but since you're using Octave, you're better off using csv2cell (part of Octave's io package). It basically reads a csv file and returns a cell array of strings and doubles:
octave-3.8.1> type test.csv
1,2,3,"some",1c:6f:65:90:6b:13
4,5,6,"text",0d:5a:89:46:5c:70
octave-3.8.1> pkg load io; # csv2cell is part of the io package
octave-3.8.1> data = csv2cell ("test.csv")
data =
{
[1,1] = 1
[2,1] = 4
[1,2] = 2
[2,2] = 5
[1,3] = 3
[2,3] = 6
[1,4] = some
[2,4] = text
[1,5] = 1c:6f:65:90:6b:13
[2,5] = 0d:5a:89:46:5c:70
}
octave-3.8.1> class (data{1})
ans = double
octave-3.8.1> class (data{9})
ans = char
>> type mycsv.csv
Timestamp, MAC, LastBsn, PRR, RSSI, ED, SQI, RxGain, PtxCoord, Channel: 26
759, 0x35c8cc, 127, 99, -307, 29, 237, 200, -32
834, 0x32d710, 183, 100, -300, 55, 248, 200, -32
901, 0x35c8cc, 227, 100, -300, 29, 238, 200, -32
979, 0x32d6a0, 22, 95, -336, 10, 171, 200, -32
987, 0x32d710, 27, 96, -328, 54, 249, 200, -32
1054, 0x35c8cc, 71, 92, -357, 30, 239, 200, -32
1133, 0x32d6a0, 122, 95, -336, 11, 188, 200, -32
You can read the file with csv2cell. The values starting with "0x" will be automatically converted from hex to decimal values. See:
>> pkg load io % load io package for csv2cell
>> data = csv2cell ("mycsv.csv");
>> data(2,1)
ans =
{
[1,1] = 759
}
To access the cell values use:
>> data{2,1}
ans = 759
>> data{2,2}
ans = 3524812
>> data{2,5}
ans = -307