Related
I'm using Debezium and Kafka Connect to get PostgreSQL's change events. UPDATE and CREATE events work fine, but the payload's value is NULL when I delete something from a table:
{
topic: 'omnichannel.public.Department',
partition: 0,
message: {
magicByte: 2,
attributes: 0,
timestamp: '1668117212311',
offset: '12',
key: Buffer(272) [Uint8Array] [
123, 34, 115, 99, 104, 101, 109, 97, 34, 58, 123, 34,
116, 121, 112, 101, 34, 58, 34, 115, 116, 114, 117, 99,
116, 34, 44, 34, 102, 105, 101, 108, 100, 115, 34, 58,
91, 123, 34, 116, 121, 112, 101, 34, 58, 34, 115, 116,
114, 105, 110, 103, 34, 44, 34, 111, 112, 116, 105, 111,
110, 97, 108, 34, 58, 102, 97, 108, 115, 101, 44, 34,
102, 105, 101, 108, 100, 34, 58, 34, 110, 97, 109, 101,
115, 112, 97, 99, 101, 73, 100, 34, 125, 44, 123, 34,
116, 121, 112, 101,
... 172 more items
],
value: null,
headers: {},
isControlRecord: false,
batchContext: {
firstOffset: '11',
firstTimestamp: '1668117212311',
partitionLeaderEpoch: 0,
inTransaction: false,
isControlBatch: false,
lastOffsetDelta: 1,
producerId: '-1',
producerEpoch: -1,
firstSequence: -1,
maxTimestamp: '1668117212311',
timestampType: 0,
magicByte: 2
}
},
heartbeat: [Function: heartbeat],
pause: [Function: pause]
}
The problem was that PostgreSQL 15 was not supported by the Debezium version I was using; upgrading to Debezium 2.1 fixed the problem.
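Note that even with a compatible connector version, Debezium by default follows every DELETE event with a tombstone record whose value is null (so that log compaction can eventually drop the key), so a consumer still has to expect null values. Below is a minimal sketch for a KafkaJS-style consumer like the one that produced the dump above (the consumer variable and the parsing are illustrative, not from the original post):

await consumer.run({
  eachMessage: async ({ topic, message }) => {
    // Debezium follows every DELETE event with a tombstone whose value is null;
    // skip those instead of trying to parse them.
    if (message.value === null) return;

    const event = JSON.parse(message.value.toString());
    // With the default JSON envelope, payload.op is 'c' | 'u' | 'd' | 'r'
    // and payload.before / payload.after hold the row images.
    console.log(topic, event.payload.op, event.payload.before, event.payload.after);
  },
});

If the tombstones are not wanted at all, the connector also has a tombstones.on.delete option that can be set to false.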
I have this code, written in Dart (Flutter), that implements an SSH client using the ssh package.
import 'package:flutter/material.dart';
import 'package:ssh/ssh.dart';
void main() => runApp(Home());
class Home extends StatefulWidget {
@override
_HomeState createState() => _HomeState();
}
class _HomeState extends State<Home> {
String out = "";
void send() async {
setState(() {
out = "";
print(out.codeUnits);
});
SSHClient client = new SSHClient(
host: "192.168.10.88",
port: 22,
username: "Jesus",
passwordOrKey: "******",
);
await client.connect();
await client.startShell(
ptyType: "vanilla",
callback: (dynamic res) {
setState(() {
out += res;
print(res.codeUnits);
/*
print output:
[27, 91, 50, 74, 27, 91, 109, 27, 91, 72, 77, 105, 99, 114, 111, 115, 111, 102, 116, 32, 87, 105, 110, 100, 111, 119, 115, 32, 91, 86, 101, 114, 115, 105, 111, 110, 32, 49, 48, 46, 48, 46, 49, 57, 48, 52, 49, 46, 51, 56, 56, 93, 27, 93, 48, 59, 67, 58, 92, 87, 73, 78, 68, 79, 87, 83, 92, 115, 121, 115, 116, 101, 109, 51, 50, 92, 99, 111, 110, 104, 111, 115, 116, 46, 101, 120, 101, 7, 27, 91, 63, 50, 53, 104, 10]
[40, 99, 41, 32, 50, 48, 50, 48, 32, 77, 105, 99, 114, 111, 115, 111, 102, 116, 32, 67, 111, 114, 112, 111, 114, 97, 116, 105, 111, 110, 46, 32, 65, 108, 108, 32, 114, 105, 103, 104, 116, 115, 32, 114, 101, 115, 101, 114, 118, 101, 100, 46, 10]
[27, 91, 53, 50, 88, 10]
*/
});
},
);
//await client.writeToShell("cd\r\n");
Future.delayed(
const Duration(seconds: 5),
() async => await client.closeShell(),
);
}
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: Text("SSH")),
body: Container(
child: Column(
children: <Widget>[
FlatButton(
child: Text("Send Command"),
onPressed: send,
),
Text(out),
],
),
),
),
);
}
}
However, when I run the program and start the client, the output from the emulated shell is garbage, like so: [screenshot of garbled shell output].
I think I need to do some kind of decoding to interpret the escape character 27 (ESC), but I don't know how.
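For reference (an addition, not part of the original question): the byte sequences starting with 27 are ANSI/VT escape sequences (clear screen, cursor moves, window title), which is why the raw text looks like garbage. A fuller fix would be to render the output in a terminal-emulator widget, but a minimal Dart sketch that simply strips the common sequences before appending to out (assuming res is the String delivered by the callback) looks like this:

// Strips common ANSI escape sequences: CSI (ESC [ ... final byte),
// OSC (ESC ] ... BEL or ESC \), and stray BEL characters.
String stripAnsi(String input) {
  final csi = RegExp(r'\x1B\[[0-?]*[ -/]*[@-~]');
  final osc = RegExp(r'\x1B\].*?(\x07|\x1B\\)');
  return input
      .replaceAll(csi, '')
      .replaceAll(osc, '')
      .replaceAll('\x07', '');
}

// In the callback:
// setState(() => out += stripAnsi(res as String));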
We use the EFK stack, where F stands for Fluent Bit. In my Kotlin Spring Boot application I configured logging as follows, with Logback and the Logstash encoder:
<appender name="STDOUT_JSON" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LogstashEncoder" >
<timestampPattern>yyyy-MM-dd' 'HH:mm:ss.SSS</timestampPattern>
<fieldNames>
<timestamp>timestamp</timestamp>
<logger>logger</logger>
<version>[ignore]</version>
</fieldNames>
</encoder>
</appender>
We run that application in Kubernetes. Now, sometimes, for very verbose exceptions like the one below, we see an unparsed log entry in Kibana: neither a logger nor a message is detected by Kibana, even though those fields are present in the JSON.
{"timestamp":"2018-07-11 12:59:40.973","message":"Container exception","logger":"org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer","thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","level":"ERROR","level_value":40000,"stack_trace":"org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition offer-mail-crawler-new-mails-2 at offset 2. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Can't deserialize data [[123, 34, 101, 118, 101, 110, 116, 73, 100, 34, 58, 34, 98, 51, 57, 100, 49, 102, 54, 49, 45, 99, 57, 51, 53, 45, 52, 48, 52, 53, 45, 57, 52, 51, 51, 45, 98, 49, 100, 98, 98, 54, 97, 57, 49, 48, 49, 53, 34, 44, 34, 101, 118, 101, 110, 116, 84, 105, 109, 101, 34, 58, 123, 34, 110, 97, 110, 111, 34, 58, 50, 56, 51, 49, 50, 57, 48, 48, 48, 44, 34, 101, 112, 111, 99, 104, 83, 101, 99, 111, 110, 100, 34, 58, 49, 53, 51, 49, 51, 49, 50, 48, 56, 48, 125, 44, 34, 101, 118, 101, 110, 116, 86, 101, 114, 115, 105, 111, 110, 34, 58, 34, 50, 48, 49, 56, 45, 48, 55, 45, 49, 49, 34, 44, 34, 115, 104, 97, 114, 101, 100, 77, 97, 105, 108, 98, 111, 120, 34, 58, 34, 111, 102, 102, 101, 114, 115, 46, 116, 101, 115, 116, 64, 97, 107, 101, 108, 105, 117, 115, 46, 100, 101, 34, 44, 34, 97, 122, 117, 114, 101, 83, 116, 111, 114, 97, 103, 101, 77, 97, 105, 108, 65, 115, 69, 109, 108, 66, 108, 111, 98, 78, 97, 109, 101, 34, 58, 34, 55, 97, 98, 54, 49, 57, 52, 97, 45, 99, 57, 101, 98, 45, 52, 55, 99, 53, 45, 56, 53, 54, 51, 45, 56, 52, 54, 54, 53, 48, 99, 51, 52, 57, 99, 48, 47, 109, 105, 109, 101, 45, 99, 111, 110, 116, 101, 110, 116, 46, 101, 109, 108, 34, 44, 34, 97, 122, 117, 114, 101, 83, 116, 111, 114, 97, 103, 101, 65, 116, 116, 97, 99, 104, 109, 101, 110, 116, 66, 108, 111, 98, 78, 97, 109, 101, 115, 34, 58, 91, 93, 44, 34, 102, 114, 111, 109, 34, 58, 34, 82, 111, 109, 97, 110, 46, 84, 117, 99, 104, 105, 110, 64, 97, 107, 101, 108, 105, 117, 115, 46, 100, 101, 34, 44, 34, 115, 117, 98, 106, 101, 99, 116, 34, 58, 34, 116, 101, 115, 116, 34, 125]] from topic [new-mails]
Caused by: com.fasterxml.jackson.module.kotlin.MissingKotlinParameterException: Instantiation of [simple type, class com.akelius.crawledmails.NewMailEvent] value failed for JSON property azureStorageMailUuid due to missing (therefore NULL) value for creator parameter azureStorageMailUuid which is a non-nullable type
at [Source: [B#66872193; line: 1, column: 350] (through reference chain: com.akelius.crawledmails.NewMailEvent["azureStorageMailUuid"])
at com.fasterxml.jackson.module.kotlin.KotlinValueInstantiator.createFromObjectWith(KotlinValueInstantiator.kt:53)
at com.fasterxml.jackson.databind.deser.impl.PropertyBasedCreator.build(PropertyBasedCreator.java:138)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:471)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1191)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:314)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:148)
at com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1626)
at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1237)
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:86)
at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:65)
at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:55)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:918)
at org.apache.kafka.clients.consumer.internals.Fetcher.access$2600(Fetcher.java:93)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1095)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:944)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:567)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:528)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:628)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
"}
I have a function cal_power that is called many times (3000 times) in my main project. It uses a look-up table to compute the power of a scalar number (the table is built following a certain rule). The TABLE is a 1x510 vector.
In my current solution, every time cal_power is called, TABLE is initialized again, which wastes time. Since the values in TABLE are fixed, is there any way in MATLAB to initialize TABLE just once and access it from anywhere? I tried to use a global variable, but it takes more time than my current solution. Thanks
function p = cal_power( ii )
% Input: ii: Integer in {0,255} (forced to be, if not)
% Output:
% p = TABLE( mod( ii, 255) + 1 );
% TABLE : look-up table
TABLE = [1, 2, 4, 8, 16, 32, 64, 128, 29, 58, 116, 232, 205, 135, 19, 38, 76,...
152, 45, 90, 180, 117, 234, 201, 143, 3, 6, 12, 24, 48, 96, 192, 157,...
39, 78, 156, 37, 74, 148, 53, 106, 212, 181, 119, 238, 193, 159, 35,...
70, 140, 5, 10, 20, 40, 80, 160, 93, 186, 105, 210, 185, 111, 222,...
161, 95, 190, 97, 194, 153, 47, 94, 188, 101, 202, 137, 15, 30, 60,...
120, 240, 253, 231, 211, 187, 107, 214, 177, 127, 254, 225, 223, 163,...
91, 182, 113, 226, 217, 175, 67, 134, 17, 34, 68, 136, 13, 26, 52,...
104, 208, 189, 103, 206, 129, 31, 62, 124, 248, 237, 199, 147, 59,...
118, 236, 197, 151, 51, 102, 204, 133, 23, 46, 92, 184, 109, 218,...
169, 79, 158, 33, 66, 132, 21, 42, 84, 168, 77, 154, 41, 82, 164, 85,...
170, 73, 146, 57, 114, 228, 213, 183, 115, 230, 209, 191, 99, 198,...
145, 63, 126, 252, 229, 215, 179, 123, 246, 241, 255, 227, 219, 171,...
75, 150, 49, 98, 196, 149, 55, 110, 220, 165, 87, 174, 65, 130, 25,...
50, 100, 200, 141, 7, 14, 28, 56, 112, 224, 221, 167, 83, 166, 81,...
162, 89, 178, 121, 242, 249, 239, 195, 155, 43, 86, 172, 69, 138, 9,...
18, 36, 72, 144, 61, 122, 244, 245, 247, 243, 251, 235, 203, 139, 11,...
22, 44, 88, 176, 125, 250, 233, 207, 131, 27, 54, 108, 216, 173, 71,...
142, 1, 2, 4, 8, 16, 32, 64, 128, 29, 58, 116, 232, 205, 135, 19, 38,...
76, 152, 45, 90, 180, 117, 234, 201, 143, 3, 6, 12, 24, 48, 96, 192,...
157, 39, 78, 156, 37, 74, 148, 53, 106, 212, 181, 119, 238, 193, 159,...
35, 70, 140, 5, 10, 20, 40, 80, 160, 93, 186, 105, 210, 185, 111,...
222, 161, 95, 190, 97, 194, 153, 47, 94, 188, 101, 202, 137, 15, 30,...
60, 120, 240, 253, 231, 211, 187, 107, 214, 177, 127, 254, 225, 223,...
163, 91, 182, 113, 226, 217, 175, 67, 134, 17, 34, 68, 136, 13, 26,...
52, 104, 208, 189, 103, 206, 129, 31, 62, 124, 248, 237, 199, 147,...
59, 118, 236, 197, 151, 51, 102, 204, 133, 23, 46, 92, 184, 109, 218,...
169, 79, 158, 33, 66, 132, 21, 42, 84, 168, 77, 154, 41, 82, 164, 85,...
170, 73, 146, 57, 114, 228, 213, 183, 115, 230, 209, 191, 99, 198,...
145, 63, 126, 252, 229, 215, 179, 123, 246, 241, 255, 227, 219, 171,...
75, 150, 49, 98, 196, 149, 55, 110, 220, 165, 87, 174, 65, 130, 25,...
50, 100, 200, 141, 7, 14, 28, 56, 112, 224, 221, 167, 83, 166, 81,...
162, 89, 178, 121, 242, 249, 239, 195, 155, 43, 86, 172, 69, 138, 9,...
18, 36, 72, 144, 61, 122, 244, 245, 247, 243, 251, 235, 203, 139, 11,...
22, 44, 88, 176, 125, 250, 233, 207, 131, 27, 54, 108, 216, 173, 71,...
142];
p = TABLE( mod( ii, 255) + 1 );
Just use a MATLAB persistent variable:
function p = cal_power( ii )
% Input: ii: Integer in {0,255} (forced to be, if not)
% Output:
% p = TABLE( mod( ii, 255) + 1 );
% TABLE : look-up table
persistent TABLE;
if isempty(TABLE)
TABLE = [1, 2, 4, 8, 16, 32, 64, 128, 29, 58, 116, 232, 205, 135, 19, 38, 76,...
152, 45, 90, 180, 117, 234, 201, 143, 3, 6, 12, 24, 48, 96, 192, 157,...
39, 78, 156, 37, 74, 148, 53, 106, 212, 181, 119, 238, 193, 159, 35,...
70, 140, 5, 10, 20, 40, 80, 160, 93, 186, 105, 210, 185, 111, 222,...
... (remaining values identical to the TABLE listed in the question)
142];
end
p = TABLE( mod( ii, 255) + 1 );
UPDATE:
The following code, using the persistent variable, took 0.034 seconds on my machine, compared to 0.039 seconds without persistent:
tic
for i=1:3000
ii = round(rand*254)+1;
p=cal_power(ii);
end
toc
So I don't see that it is slower at all, especially not by a factor of 5.
If you still need a different solution, initialize TABLE at the beginning of your main function and just pass it to cal_power as an additional parameter:
function p = cal_power( ii , TABLE)
% don't initialize TABLE here anymore
...
end
This way you make sure that TABLE gets initialized only once. Passing the variable should not cost noticeable time either, since MATLAB uses copy-on-write and will not actually copy the array as long as cal_power does not modify it.
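A minimal sketch of that caller-side approach (cal_power_table here is a hypothetical helper that just returns the literal 1x510 vector from the question):

% Build TABLE once in the main function, then reuse it for every call.
TABLE = cal_power_table();       % hypothetical helper returning the 1x510 vector above
tic
for i = 1:3000
    ii = round(rand*254) + 1;
    p  = cal_power(ii, TABLE);   % cal_power now just indexes: p = TABLE(mod(ii,255)+1);
end
toc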
In Java, you can flip a BitSet:
val javaBitSet = new java.util.BitSet(5)
javaBitSet.set(0)
javaBitSet.set(1)
javaBitSet.set(3)
javaBitSet.flip(0, 5)
javaBitSet: java.util.BitSet = {2, 4}
How do you idiomatically do this with Scala's BitSet?
(And why isn't this a method?)
scala> import collection.immutable.BitSet
import collection.immutable.BitSet
Your deleted example flips bit 2 but leaves bit 5 off:
scala> BitSet(1,2,3) &~ BitSet(2,5)
res3: scala.collection.immutable.BitSet = BitSet(1, 3)
but logical xor is what you want:
scala> BitSet(1,2,3) ^ BitSet(2,5)
res4: scala.collection.immutable.BitSet = BitSet(1, 3, 5)
or idiomatically
scala> implicit class `flipper of bits`(val bs: BitSet) extends AnyVal {
| def flip(lo: Int, hi: Int): BitSet = BitSet(lo until hi: _*) ^ bs
| }
defined class flipper$u0020of$u0020bits
scala> BitSet(2,5) flip (1,4)
res5: scala.collection.immutable.BitSet = BitSet(1, 3, 5)
for an "extension method." It's not efficient to build a BitSet from a range, since it invokes + n times. A bit better is to select a subrange of a bit set created from an array, as below. These hoops suggest it would be nice to have built-in support for ranged operations, as you requested.
Update:
For non-trivial ranges, better not to create BitSet from a Range.
scala> import collection.immutable.BitSet
import collection.immutable.BitSet
scala> val bs = BitSet(1, 69, 188)
bs: scala.collection.immutable.BitSet = BitSet(1, 69, 188)
scala> val lower = 32
lower: Int = 32
scala> val upper = 190
upper: Int = 190
scala> val n = (bs.last max upper) / 64 + 1
n: Int = 3
scala> val arr = Array.tabulate(n)(_ => -1L)
arr: Array[Long] = Array(-1, -1, -1)
scala> val mask = BitSet.fromBitMaskNoCopy(arr).range(lower, upper)
mask: scala.collection.immutable.BitSet = BitSet(32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189)
scala> val res = bs ^ mask
res: scala.collection.immutable.BitSet = BitSet(1, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 189)
scala> res(69)
res0: Boolean = false
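Packaged as an extension method (my own sketch combining the steps above, not part of the original answer):

import scala.collection.immutable.BitSet

// Flip every bit in [lo, hi) by xor-ing with a mask built from a Long array,
// avoiding the element-by-element BitSet(lo until hi: _*) construction.
implicit class RangedFlip(val bs: BitSet) extends AnyVal {
  def flipRange(lo: Int, hi: Int): BitSet = {
    val words = (bs.lastOption.getOrElse(0) max hi) / 64 + 1
    val mask  = BitSet.fromBitMaskNoCopy(Array.fill(words)(-1L)).range(lo, hi)
    bs ^ mask
  }
}

// BitSet(1, 69, 188).flipRange(32, 190) produces the same result as `res` above.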