Apply encoding algorithm from original and encoded string onto new string

I have an encoded string and the original string.
Is it possible to apply the used algorithm without knowing it onto another string?
Those are the strings:
num1 = "1193"
encoded1 = "7ecb452bee6a1b227bc3"
num2 = "1259"
encoded2 = "7ecfb3daff5acd1b7ab2"
If you know how these strings were encrypted, that would also make it easier :D

Related

How to convert String to UTF-8 to Integer in Swift

I'm trying to take each character (individual number, letter, or symbol) from a string file name without the extension and put each one into an array index as an integer of the utf-8 code (i.e. if the file name is "A1" without the extension, I would want "A" as an int "41" in first index, and "1" as int "31" in second index)
Here is the code I have but I'm getting this error "No exact matches in call to instance method 'append'", my guess is because .utf8 still keeps it as a string type:
for i in allNoteFiles {
    var CharacterArray : [Int] = []
    for character in i {
        var utf8Character = String(character).utf8
        CharacterArray.append(utf8Character) // error is here
    }
    // ...more code down here within the for-in loop using CharacterArray indexes
}
I'm sure the answer is probably simple, but I'm very new to Swift.
I've tried appending var number instead with:
var number = Int(utf8Character)
and
var number = (utf8Character).IntegerValue
but I get errors "No exact matches in call to initializer" and "Value of type 'String.UTF8View' has no member 'IntegerValue'"
Any help at all would be greatly appreciated. Thanks!
The reason
var utf8Character = String(character).utf8
CharacterArray.append(utf8Character)
doesn't work for you is that utf8Character is not a single integer, but a UTF8View: a lightweight way to iterate over the UTF-8 code units (bytes) in a string. Every Character in a String can be made up of any number of UTF-8 bytes (individual integers): while ASCII characters like "A" and "1" map to a single UTF-8 byte, the vast majority of characters do not, and every Unicode code point maps to between 1 and 4 individual bytes in UTF-8. The Encoding section of UTF-8 on Wikipedia has a few very illustrative examples of how this works.
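For example, here is a quick sketch you could run in a playground (the accented character is just an illustration):
let ascii = "A1"
print(Array(ascii.utf8))    // [65, 49]: one byte per character
let accented = "é1"
print(Array(accented.utf8)) // [195, 169, 49]: "é" alone takes two bytes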
Now, assuming that you do want to split a string into individual UTF-8 bytes (either because you can guarantee your original string is ASCII-only, so the assumption that "character = byte" holds, or because you actually care about the bytes [though this is rarely the case]), there's a short and idiomatic solution to what you're looking for.
String.UTF8View is a Sequence of UInt8 values (individual bytes), and as such, you can use the Array initializer which takes a Sequence:
let characterArray: [UInt8] = Array(i.utf8)
If you need an array of Int values instead of UInt8, you can map the individual bytes ahead of time:
let characterArray: [Int] = Array(i.utf8.lazy.map { Int($0) })
(The .lazy avoids creating and storing an array of values in the middle of the operation.)
However, do note that if you aren't careful (e.g., your original string is not ASCII), you're bound to get very unexpected results from this operation, so keep that in mind.

Convert ByteArray to String to ByteArray

I want to convert a ByteArray to a String and then convert the String back to a ByteArray, but the values change during conversion. Can someone help me solve this problem?
person.proto:
syntax = "proto3";
message Person{
string name = 1;
int32 age = 2;
}
After sbt compile, it gives a case class Person (created by Google protobuf while compiling).
My MainClass:
val newPerson = Person(
  name = "John Cena",
  age = 44 // output
)
println(newPerson.toByteArray) //[B@50da041d
val l = newPerson.toByteArray.toString
println(l) //[B@7709e969
val l1 = l.getBytes
println(l1) //[B@f44b405
Why did the values change? How do I convert correctly?
[B@... is the format that a JVM byte array's .toString returns: just [B (which means "byte array") and a hex string which is analogous to the memory address at which the array resides (I'm deliberately not calling it a pointer, but it's similar; the precise mapping of that hex string to a memory address is JVM-dependent and could be affected by things like which garbage collector is in use). The important thing is that two different arrays with the same bytes in them will have different .toStrings. Note that in some places (e.g. the REPL), Scala will instead print something like Array(-127, 0, 0, 1) instead of calling .toString: this may cause confusion.
It appears that toByteArray emits a new array each time it's called. So the first time you call newPerson.toByteArray, you get an array at a location corresponding to 50da041d. The second time you call it, you get a byte array with the same contents at a location corresponding to 7709e969, and you save the string [B@7709e969 into the variable l. When you then call getBytes on that string (saving it in l1), you get a byte array which is an encoding of the string "[B@7709e969" at the location corresponding to f44b405.
So at the locations corresponding to 50da041d and 7709e969 you have two different byte arrays which happen to contain the same elements (those elements being the bytes in the proto representation of newPerson). At the location corresponding to f44b405 you have a byte array whose bytes encode (in the platform's default character set, likely UTF-8) the string [B@7709e969.
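To see the distinction concretely, here is a small sketch (the comparisons are illustrative additions, not part of the original code):
val a = newPerson.toByteArray
val b = newPerson.toByteArray
println(a.toString == b.toString)      // almost certainly false: two distinct arrays get different identity strings
println(a.sameElements(b))             // true: they contain the same proto bytes
println(java.util.Arrays.equals(a, b)) // true as well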
Because a proto isn't really a string, there's no general way to get a useful string (depending on what definition of useful you're dealing with). You could try interpreting a byte array from toByteArray as a string with a given character encoding, but there's no guarantee that any given proto will be valid in an arbitrary character encoding.
An encoding which is purely 8-bit, like ISO-8859-1, is guaranteed to at least be decodable from a byte array, but there could be non-printable or control characters, so it's not likely to be that useful:
val iso88591Representation = new String(newPerson.toByteArray, java.nio.charset.StandardCharsets.ISO_8859_1)
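If you do go that route, the round trip back to bytes is at least lossless, since ISO-8859-1 maps every byte value to a character and back (again, just a sketch):
val roundTripped = iso88591Representation.getBytes(java.nio.charset.StandardCharsets.ISO_8859_1)
println(java.util.Arrays.equals(roundTripped, newPerson.toByteArray)) // true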
Alternatively, you might want a representation like how the Scala REPL will (sometimes) render it:
"Array(" + newPerson.toByteArray.mkString(", ") + ")"

How do I parse out a number from this returned XML string in python?

I have the following string:
{\"Id\":\"135\",\"Type\":0}
The number in the Id field will vary, but will always be an integer with no comma separator. I'm not sure how to get just that value from that string, given that it's a string data type and not real "XML". I was toying with the replace() function, but the special characters are making it more complex than it seems it needs to be.
Is there a way to convert that to XML or something from which I can reference the Id value directly?
Maybe use a regular expression, e.g.
import re
txt = "{\"Id\":\"135\",\"Type\":0}"
x = re.search('"Id":"([0-9]+)"', txt)
if x:
    print(x.group(1))
gives
135
It is assumed here that the ids are numeric and consist of at least one digit.
Non-regex answer, as you asked:
\" is an escape sequence in Python.
So if {\"Id\":\"135\",\"Type\":0} is a raw string and you put it into a Python variable like
a = '{\"Id\":\"135\",\"Type\":0}'
gives
>>> a
'{"Id":"135","Type":0}'
OR
If the above string is a Python string that contains literal \" sequences (i.e. the backslashes are already in the data), then do a.replace("\\", ""), which will give you the string without the backslashes.
Now just load this string into a dict and access element Id like below.
import json
d = json.loads(a)
d['Id']
Output:
'135' (a string, since the Id value is quoted in the JSON)
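Putting both steps together, here is a minimal sketch, assuming the text arrives with literal backslashes before the quotes:
import json
raw = '{\\"Id\\":\\"135\\",\\"Type\\":0}'  # the backslashes are literally in the data
cleaned = raw.replace("\\", "")            # '{"Id":"135","Type":0}'
print(json.loads(cleaned)["Id"])           # prints 135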

How do I format a string from a string with %@ in Swift

I am using Swift 4.2. I am getting extraneous characters when formatting one string (s1) from another string (s0) using the %@ format specifier.
I have searched extensively for details of string formatting but have come up with only partial answers including the code in the second line below. I need to be able to format s1 so that I can customize output from a Swift process. I ask this because I have not found an answer while searching for ways to format a string from a string.
I tried the following three statements:
let s0:[String] = ["abcdef"]
let s1:[String] = [String(format:"%@",s0)]
print(s1)
...
The output is shown below. It may not be clear here, but there are four leading spaces to the left of the abcdef string.
["(\n abcdef\n)"]
How can I format s1 so it does not include the brackets, the \n escape characters, and the leading spaces?
The issue here is that you are using an array, rather than a plain string, for s0.
So indexing into the array will help you:
let s0:[String] = ["abcdef"]
let s1:[String] = [String(format:"%@",s0[0])]
I am getting extraneous characters when formatting one string (s1) from another string (s0) ...
The s0 is not a string. It is an array of strings (i.e. the square brackets of [String] indicate an array and it is the same as saying Array<String>). And your s1 is also an array, but one that has a single element, whose value is the string representation of the entire s0 array of strings. That’s obviously not what you intended.
How can I format s1 so it does not include the brackets, the \n escape characters, and the leading spaces?
You’re getting those brackets because s1 is an array. You’re getting the string with the \n and spaces because its first value is the string representation of yet another array, s0.
So, if you’re just trying to format a string, s0, you can do:
let s0: String = "abcdef"
let s1: String = String(format: "It is ‘%@’", s0)
Or, if you really want an array of strings, you can call String(format:) for each using the map function:
let s0: [String] = ["abcdef", "ghijkl"]
let s1: [String] = s0.map { String(format: "It is ‘%@’", $0) }
By the way, in the examples above, I didn’t use a string format of just %@, because that doesn’t accomplish anything at all, so I assumed you were formatting the string for a reason.
FWIW, we generally don’t use String(format:) very often. Usually we do “string interpolation”, with \( and ):
let s0: String = "abcdef"
let s1: String = "It is ‘\(s0)’"
Get rid of all the unnecessary arrays and let the compiler figure out the types:
let s0 = "abcdef" // a string
let s1 = String(format:"- %@ -",s0) // another string
print(s1) // prints "- abcdef -"

ColdFusion Hash

I'm trying to create a password digest with this formula and the following variables, and my code is just not matching. Not sure what I'm doing wrong, but I'll admit when I need help. Hopefully someone is out there who can help.
Formula from documentation: Base64(SHA1(NONCE + TIMESTAMP + SHA1(PASSWORD)))
Correct Password Digest Answer: +LzcaRc+ndGAcZIXmq/N7xGes+k=
ColdFusion Code:
<cfSet PW = "AMADEUS">
<cfSet TS = "2015-09-30T14:12:15Z">
<cfSet NONCE = "secretnonce10111">
<cfDump var="#ToBase64(Hash(NONCE & TS & Hash(PW,'SHA-1'),'SHA-1'))#">
My code outputs:
Njk0MEY3MDc0NUYyOEE1MDMwRURGRkNGNTVGOTcyMUI4OUMxM0U0Qg==
I'm clearly doing something wrong, but for the life of me cannot figure out what. Anyone? Bueller?
The fun thing about hashing is that even if you start with the right string(s), the result can still be completely wrong if those strings are combined/encoded/decoded incorrectly.
The biggest gotcha is that most of these functions actually work with the binary representation of the input strings, so how those strings are decoded makes a big difference. Notice how the same string produces totally different binary when decoded as UTF-8 versus Hex? That means the results of Hash, ToBase64, etcetera will be totally different as well.
// Result: UTF-8: 65-65-68-69
writeOutput("<br>UTF-8: "& arrayToList(charsetDecode("AADE", "UTF-8"), "-"));
// Result: HEX: -86--34
writeOutput("<br>HEX: "& arrayToList(binaryDecode("AADE", "HEX"), "-"));
Possible Solution:
The problem with the current code is that ToBase64 assumes the input string is encoded as UTF-8, whereas Hash() actually returns a hexadecimal string, so ToBase64() decodes it incorrectly. Instead, use binaryDecode and binaryEncode to convert the hash from hex to base64:
resultAsHex = Hash( NONCE & TS & Hash(PW,"SHA-1"), "SHA-1");
resultAsBase64 = binaryEncode(binaryDecode(resultAsHex, "HEX"), "base64");
writeDump(resultAsBase64);
More Robust Solution:
Having said that, be very careful with string concatenation and hashing, as it does not always yield the expected results. Without knowing more about this specific API, I cannot be completely certain what it expects. However, it is usually safer to only work with the binary values. Unfortunately, CF's ArrayAppend() function lacks support for binary arrays, but you can easily use Apache's ArrayUtils class, which is bundled with CF.
ArrayUtils = createObject("java", "org.apache.commons.lang.ArrayUtils");
// Combine binary of NONCE + TS
nonceBytes = charsetDecode(NONCE, "UTF-8");
timeBytes = charsetDecode(TS, "UTF-8");
combinedBytes = ArrayUtils.addAll(nonceBytes, timeBytes);
// Combine with binary of SECRET
secretBytes = binaryDecode( Hash(PW,"SHA-1"), "HEX");
combinedBytes = ArrayUtils.addAll(combinedBytes, secretBytes);
// Finally, HASH the binary and convert to base64
resultAsHex = hash(combinedBytes, "SHA-1");
resultAsBase64 = binaryEncode(binaryDecode(resultAsHex, "hex"), "base64");
writeDump(resultAsBase64);