Losing records on Spark Kafka stream - Scala

I am losing records from my Kafka stream.
The stream runs on our Spark infrastructure with this configuration:
val df = spark.
readStream.
format("kafka").
option("kafka.bootstrap.servers", broker_address).
option("subscribe", subject).
option("startingOffsets", "latest").
option("failOnDataLoss", "false").
load()
My sink is Parquet files:
df_vertex.writeStream.
format("parquet").
option("checkpointLocation", "/tmp/vertex/check").
option("path", data_location).
option("mode", "append").
trigger(Trigger.ProcessingTime("10 seconds")).
start().
awaitTermination()
When I read the Parquet files, some records are missing.
Those records are present on the Kafka broker but were never written to the sink.
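To trace which offsets actually reach the sink, I plan to keep the Kafka metadata columns next to the payload before writing, roughly like this (an untested sketch using the standard columns exposed by the Kafka source):
import org.apache.spark.sql.functions.col

// Keep topic/partition/offset alongside the payload so the Parquet output
// can be compared offset-by-offset with what the broker holds.
val withOffsets = df.select(
  col("topic"),
  col("partition"),
  col("offset"),
  col("value").cast("string").as("value"))
The executor DEBUG log around the missing offsets looks like this: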
21/12/04 05:32:50 DEBUG KafkaDataConsumer: Get spark-kafka-source-2edd8854-b64d-4f1a-b130-135e0c2b4c56--578079920-executor rainbow_data_extractor-0 nextOffset 42141 requested $offset
21/12/04 05:32:50 DEBUG RecordConsumerLoggingWrapper: <!-- start message -->
21/12/04 05:32:50 DEBUG MessageColumnIO: < MESSAGE START >
21/12/04 05:32:50 DEBUG MessageColumnIO: 0, VistedIndex{vistedIndexes={}}: [] r:0
21/12/04 05:32:50 DEBUG RecordConsumerLoggingWrapper: <id>
21/12/04 05:32:50 DEBUG MessageColumnIO: startField(id, 0)
21/12/04 05:32:50 DEBUG MessageColumnIO: 0, VistedIndex{vistedIndexes={}}: [id] r:0
21/12/04 05:32:50 DEBUG RecordConsumerLoggingWrapper: [99, 102, 49, 48, 49, 48, 99, 98, 100, 53, 53, 49, 48, 51, 49, 98, 55, 53, 51, 97, 100, 55, 101, 49, 100, 99, 50, 49, 101, 100, 100, 100, 101, 51, 53, 55, 101, 100, 100, 49, 98, 99, 98, 51, 53, 101, 55, 100, 99, 97, 49, 56, 102, 51, 100, 49, 49, 55, 54, 53, 98, 55, 48, 64]
21/12/04 05:32:50 DEBUG MessageColumnIO: addBinary(64 bytes)
21/12/04 05:32:50 DEBUG MessageColumnIO: r: 0
21/12/04 05:32:50 DEBUG MessageColumnIO: 0, VistedIndex{vistedIndexes={}}: [id] r:0
21/12/04 05:32:50 DEBUG RecordConsumerLoggingWrapper: </id>
21/12/04 05:32:50 DEBUG MessageColumnIO: endField(id, 0)
21/12/04 05:32:50 DEBUG MessageColumnIO: 0, VistedIndex{vistedIndexes={0}}: [] r:0
21/12/04 05:32:50 DEBUG RecordConsumerLoggingWrapper: <type>
21/12/04 05:32:50 DEBUG MessageColumnIO: startField(type, 1)
21/12/04 05:32:50 DEBUG MessageColumnIO: 0, VistedIndex{vistedIndexes={0}}: [type] r:0
21/12/04 05:32:50 DEBUG RecordConsumerLoggingWrapper: [117, 115, 101, 114]
21/12/04 05:32:50 DEBUG MessageColumnIO: addBinary(4 bytes)
21/12/04 05:32:50 DEBUG MessageColumnIO: r: 0
21/12/04 05:32:50 DEBUG MessageColumnIO: 0, VistedIndex{vistedIndexes={0}}: [type] r:0
21/12/04 05:32:50 DEBUG RecordConsumerLoggingWrapper: </type>
21/12/04 05:32:50 DEBUG MessageColumnIO: endField(type, 1)
21/12/04 05:32:50 DEBUG MessageColumnIO: 0, VistedIndex{vistedIndexes={0, 1}}: [] r:0
21/12/04 05:32:50 DEBUG MessageColumnIO: [created_by].writeNull(0,0)
21/12/04 05:32:50 DEBUG MessageColumnIO: [created_on].writeNull(0,0)
21/12/04 05:32:50 DEBUG MessageColumnIO: [last_message].writeNull(0,0)
21/12/04 05:32:50 DEBUG MessageColumnIO: [is_active].writeNull(0,0)
21/12/04 05:32:50 DEBUG MessageColumnIO: [is_archived].writeNull(0,0)
21/12/04 05:32:50 DEBUG MessageColumnIO: < MESSAGE END >
21/12/04 05:32:50 DEBUG MessageColumnIO: 0, VistedIndex{vistedIndexes={0, 1}}: [] r:0
21/12/04 05:32:50 DEBUG RecordConsumerLoggingWrapper: <!-- end message -->
21/12/04 05:32:50 DEBUG KafkaDataConsumer: Get spark-kafka-source-2edd8854-b64d-4f1a-b130-135e0c2b4c56--578079920-executor rainbow_data_extractor-0 nextOffset 42142 requested $offset
21/12/04 05:32:50 DEBUG KafkaDataConsumer: Get spark-kafka-source-2edd8854-b64d-4f1a-b130-135e0c2b4c56--578079920-executor rainbow_data_extractor-0 nextOffset 42143 requested $offset
21/12/04 05:32:50 DEBUG KafkaDataConsumer: Get spark-kafka-source-2edd8854-b64d-4f1a-b130-135e0c2b4c56--578079920-executor rainbow_data_extractor-0 nextOffset 42144 requested $offset
21/12/04 05:32:50 DEBUG KafkaDataConsumer: Get spark-kafka-source-2edd8854-b64d-4f1a-b130-135e0c2b4c56--578079920-executor rainbow_data_extractor-0 nextOffset 42145 requested $offset
21/12/04 05:32:50 DEBUG KafkaDataConsumer: Get spark-kafka-source-2edd8854-b64d-4f1a-b130-135e0c2b4c56--578079920-executor rainbow_data_extractor-0 nextOffset 42146 requested $offset
In this example I lose the records from offset 42142 to 42146.
All my records are produced to the Kafka broker within a short time window.
Perhaps I have a problem with the scheduling of received records.
Does anyone have the right stream configuration for this case?
Thanks
Seb

Related

How to ensure that Parquet files contain the row count in metadata?

Look at the sources: fast-parquet-row-count-in-spark and parquet-count-metadata-explanation
Stack Overflow and the official Spark documentation tell us that a Parquet file should contain the row count in its metadata, and Spark has added this by default since 1.6.
I tried to see this "field" but had no luck. Maybe I am doing something wrong? Could somebody tell me how to verify that a given Parquet file has such a field? Any link to a small but good Parquet file is welcome! For now I am invoking org.apache.parquet.tools.Main with the arguments meta D:\myparquet_file.parquet and see no count keyword in the results.
You can inspect a parquet file using parquet-tools:
Install parquet-tools:
pip install parquet-tools
Create a parquet file. I used spark to create a small parquet file with 3 rows:
import org.apache.spark.sql.DataFrame
import spark.implicits._

val df: DataFrame = Seq((1, 2, 3), (4, 5, 6), (7, 8, 9)).toDF("col1", "col2", "col3")
df.coalesce(1).write.parquet("data/")
Inspect the parquet file:
parquet-tools inspect /path/to/parquet/file
The output should be something like:
############ file meta data ############
created_by: parquet-mr version 1.10.1 (build a89df8f9932b6ef6633d06069e50c9b7970bebd1)
num_columns: 3
num_rows: 3
num_row_groups: 1
format_version: 1.0
serialized_size: 654
############ Columns ############
col1
col2
col3
############ Column(col1) ############
name: col1
path: col1
max_definition_level: 0
max_repetition_level: 0
physical_type: INT32
logical_type: None
converted_type (legacy): NONE
############ Column(col2) ############
name: col2
path: col2
max_definition_level: 0
max_repetition_level: 0
physical_type: INT32
logical_type: None
converted_type (legacy): NONE
############ Column(col3) ############
name: col3
path: col3
max_definition_level: 0
max_repetition_level: 0
physical_type: INT32
logical_type: None
converted_type (legacy): NONE
Under the file meta data section you can see the num_rows field, which represents the number of rows in the Parquet file.
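If you prefer to read the count programmatically rather than through the CLI, here is a small Scala sketch using the parquet-mr footer API (the file path is a placeholder; it assumes parquet-hadoop and the Hadoop client are on the classpath):
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.util.HadoopInputFile

// Only the footer is read; no row data is scanned.
val reader = ParquetFileReader.open(
  HadoopInputFile.fromPath(new Path("/path/to/file.parquet"), new Configuration()))
try {
  // getRecordCount sums the row counts of all row groups recorded in the footer.
  println(s"num_rows = ${reader.getRecordCount}")
} finally {
  reader.close()
}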
You can find the row count in the RC field just beside each row group:
row group 1: RC:148192 TS:10503944 OFFSET:4
Full output of parquet-tools with the meta option is below.
> parquet-tools meta part-00000-fc34f237-c985-4ebc-822b-87fa446f6f70.c000.snappy.parquet
file: file:/Users/matthewropp/team_demo/los-angeles-parking-citations/raw_citations/issue_month=201902/part-00000-fc34f237-c985-4ebc-822b-87fa446f6f70.c000.snappy.parquet
creator: parquet-mr version 1.10.0 (build 031a6654009e3b82020012a18434c582bd74c73a)
extra: org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[{"name":":created_at","type":"string","nullable":true,"metadata":{}},{"name":":id","type":"string","nullable":true,"metadata":{}},{"name":":updated_at","type":"string","nullable":true,"metadata":{}},{"name":"agency","type":"integer","nullable":true,"metadata":{}},{"name":"body_style","type":"string","nullable":true,"metadata":{}},{"name":"color","type":"string","nullable":true,"metadata":{}},{"name":"fine_amount","type":"integer","nullable":true,"metadata":{}},{"name":"issue_date","type":"date","nullable":true,"metadata":{}},{"name":"issue_time","type":"integer","nullable":true,"metadata":{}},{"name":"latitude","type":"decimal(8,1)","nullable":true,"metadata":{}},{"name":"location","type":"string","nullable":true,"metadata":{}},{"name":"longitude","type":"decimal(8,1)","nullable":true,"metadata":{}},{"name":"make","type":"string","nullable":true,"metadata":{}},{"name":"marked_time","type":"string","nullable":true,"metadata":{}},{"name":"meter_id","type":"string","nullable":true,"metadata":{}},{"name":"plate_expiry_date","type":"date","nullable":true,"metadata":{}},{"name":"route","type":"string","nullable":true,"metadata":{}},{"name":"rp_state_plate","type":"string","nullable":true,"metadata":{}},{"name":"ticket_number","type":"string","nullable":false,"metadata":{}},{"name":"vin","type":"string","nullable":true,"metadata":{}},{"name":"violation_code","type":"string","nullable":true,"metadata":{}},{"name":"violation_description","type":"string","nullable":true,"metadata":{}}]}
file schema: spark_schema
--------------------------------------------------------------------------------
: created_at: OPTIONAL BINARY O:UTF8 R:0 D:1
: id: OPTIONAL BINARY O:UTF8 R:0 D:1
: updated_at: OPTIONAL BINARY O:UTF8 R:0 D:1
agency: OPTIONAL INT32 R:0 D:1
body_style: OPTIONAL BINARY O:UTF8 R:0 D:1
color: OPTIONAL BINARY O:UTF8 R:0 D:1
fine_amount: OPTIONAL INT32 R:0 D:1
issue_date: OPTIONAL INT32 O:DATE R:0 D:1
issue_time: OPTIONAL INT32 R:0 D:1
latitude: OPTIONAL INT32 O:DECIMAL R:0 D:1
location: OPTIONAL BINARY O:UTF8 R:0 D:1
longitude: OPTIONAL INT32 O:DECIMAL R:0 D:1
make: OPTIONAL BINARY O:UTF8 R:0 D:1
marked_time: OPTIONAL BINARY O:UTF8 R:0 D:1
meter_id: OPTIONAL BINARY O:UTF8 R:0 D:1
plate_expiry_date: OPTIONAL INT32 O:DATE R:0 D:1
route: OPTIONAL BINARY O:UTF8 R:0 D:1
rp_state_plate: OPTIONAL BINARY O:UTF8 R:0 D:1
ticket_number: REQUIRED BINARY O:UTF8 R:0 D:0
vin: OPTIONAL BINARY O:UTF8 R:0 D:1
violation_code: OPTIONAL BINARY O:UTF8 R:0 D:1
violation_description: OPTIONAL BINARY O:UTF8 R:0 D:1
row group 1: RC:148192 TS:10503944 OFFSET:4
--------------------------------------------------------------------------------
: created_at: BINARY SNAPPY DO:0 FPO:4 SZ:607/616/1.01 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 2019-02-28T00:16:06.329Z, max: 2019-03-02T00:20:00.249Z, num_nulls: 0]
: id: BINARY SNAPPY DO:0 FPO:611 SZ:2365472/3260525/1.38 VC:148192 ENC:BIT_PACKED,PLAIN,RLE ST:[min: row-2229_y75z.ftdu, max: row-zzzs_4hta.8fub, num_nulls: 0]
: updated_at: BINARY SNAPPY DO:0 FPO:2366083 SZ:602/611/1.01 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 2019-02-28T00:16:06.329Z, max: 2019-03-02T00:20:00.249Z, num_nulls: 0]
agency: INT32 SNAPPY DO:0 FPO:2366685 SZ:4871/5267/1.08 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 1, max: 58, num_nulls: 0]
body_style: BINARY SNAPPY DO:0 FPO:2371556 SZ:36244/61827/1.71 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: , max: WR, num_nulls: 0]
color: BINARY SNAPPY DO:0 FPO:2407800 SZ:111267/111708/1.00 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: , max: YL, num_nulls: 0]
fine_amount: INT32 SNAPPY DO:0 FPO:2519067 SZ:71989/82138/1.14 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 25, max: 363, num_nulls: 63]
issue_date: INT32 SNAPPY DO:0 FPO:2591056 SZ:20872/23185/1.11 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 2019-02-01, max: 2019-02-27, num_nulls: 0]
issue_time: INT32 SNAPPY DO:0 FPO:2611928 SZ:210026/210013/1.00 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 1, max: 2359, num_nulls: 41]
latitude: INT32 SNAPPY DO:0 FPO:2821954 SZ:508049/512228/1.01 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 99999.0, max: 6513161.2, num_nulls: 0]
location: BINARY SNAPPY DO:0 FPO:3330003 SZ:1251364/2693435/2.15 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,PLAIN,RLE ST:[min: , max: ZOMBAR/VALERIO, num_nulls: 0]
longitude: INT32 SNAPPY DO:0 FPO:4581367 SZ:516233/520692/1.01 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 99999.0, max: 1941557.4, num_nulls: 0]
make: BINARY SNAPPY DO:0 FPO:5097600 SZ:147034/150364/1.02 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: , max: YAMA, num_nulls: 0]
marked_time: BINARY SNAPPY DO:0 FPO:5244634 SZ:11675/17658/1.51 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: , max: 959.0, num_nulls: 0]
meter_id: BINARY SNAPPY DO:0 FPO:5256309 SZ:172432/256692/1.49 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: , max: YO97, num_nulls: 0]
plate_expiry_date: INT32 SNAPPY DO:0 FPO:5428741 SZ:149849/152288/1.02 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 2000-02-01, max: 2099-12-01, num_nulls: 18624]
route: BINARY SNAPPY DO:0 FPO:5578590 SZ:38377/45948/1.20 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: , max: WTD, num_nulls: 0]
rp_state_plate: BINARY SNAPPY DO:0 FPO:5616967 SZ:33281/60186/1.81 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: AB, max: XX, num_nulls: 0]
ticket_number: BINARY SNAPPY DO:0 FPO:5650248 SZ:801039/2074791/2.59 VC:148192 ENC:BIT_PACKED,PLAIN ST:[min: 1020798376, max: 4350802142, num_nulls: 0]
vin: BINARY SNAPPY DO:0 FPO:6451287 SZ:64/60/0.94 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: , max: , num_nulls: 0]
violation_code: BINARY SNAPPY DO:0 FPO:6451351 SZ:94784/131071/1.38 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: 000, max: 8942, num_nulls: 0]
violation_description: BINARY SNAPPY DO:0 FPO:6546135 SZ:95937/132641/1.38 VC:148192 ENC:BIT_PACKED,PLAIN_DICTIONARY,RLE ST:[min: , max: YELLOW ZONE, num_nulls: 0]
> parquet-tools dump -m -c make part-00000-fc34f237-c985-4ebc-822b-87fa446f6f70.c000.snappy.parquet | head -20
BINARY make
--------------------------------------------------------------------------------
*** row group 1 of 1, values 1 to 148192 ***
value 1: R:0 D:1 V:HYDA
value 2: R:0 D:1 V:NISS
value 3: R:0 D:1 V:NISS
value 4: R:0 D:1 V:TOYO
value 5: R:0 D:1 V:AUDI
value 6: R:0 D:1 V:MERC
value 7: R:0 D:1 V:LEX
value 8: R:0 D:1 V:BMW
value 9: R:0 D:1 V:GMC
value 10: R:0 D:1 V:HOND
value 11: R:0 D:1 V:TOYO
value 12: R:0 D:1 V:NISS
value 13: R:0 D:1 V:
value 14: R:0 D:1 V:THOR
value 15: R:0 D:1 V:DODG
value 16: R:0 D:1 V:DODG
value 17: R:0 D:1 V:HOND
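If you want to cross-check the RC values against the data itself, a quick Spark sketch (the directory path is a placeholder) counts the rows per physical file; each file's count should equal the sum of the RC values of its row groups:
import org.apache.spark.sql.functions.input_file_name

// Group by the physical file each row came from and count the rows.
spark.read
  .parquet("/path/to/parquet/dir")
  .groupBy(input_file_name())
  .count()
  .show(truncate = false)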

1 of 3 OSDs down in Ceph cluster after 95% of the storage is consumed

I'm new to Ceph, so I may not know obvious stuff. I deployed a Ceph cluster using cephadm and got it running. In my first attempt I gave each node 3 GB of RAM (after some time I figured out that it needs more), and my cluster hung when one node's RAM and swap filled up to 100%. Now I give each node 8 GB of RAM and 10 GB of SSD swap, and that problem is fixed:
Node01:
4x CPU , 8GB RAM, 60GB SSD
Node02:
4x CPU , 6GB RAM , 60GB SSD
Node04:
4x CPU , 8GB RAM , 60GB HDD
I started using it by creating a CephFS (which creates 2 pools, one for data and one for metadata, with a 3x replica rule). I mount this FS on Ubuntu 20.04 using ceph-common:
>> vim /etc/fstab
...
ceph-n01,ceph-n02:/ /ceph ceph _netdev,name=amin,secretfile=/home/amin/.ceph 0 0
It worked fine. I use this FS from a service that renders a map and saves the tiles in the filesystem (my CephFS pool). It ran for about a day and a half and generated ~56.65 GB of files. On the second day I saw that 1 OSD (the one on HDD) was down and only two OSDs were running.
I checked the RAM and CPU status of the 3 nodes. On 2 nodes 50% of the RAM was used, and on one node (node01) 85% of the RAM was used along with ~4 GB of swap. I tried to fix the issue by restarting the OSDs; the OSD that was down kept crashing when I restarted it, while the OSDs that had been running started successfully after the restart.
I looked at OSD logs:
debug -11> 2022-01-06T12:43:01.620+0000 7feaa5390080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1641472981624364, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 3219, "file_size": 6014, "table_properties": {"data_size": 4653, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 453, "raw_key_size": 2663, "raw_average_key_size": 17, "raw_value_size": 2408, "raw_average_value_size": 16, "num_data_blocks": 2, "num_entries": 148, "num_deletions": 0, "num_merge_operands": 147, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1641472981, "oldest_key_time": 3, "file_creation_time": 0}}
debug -10> 2022-01-06T12:43:01.652+0000 7feaa5390080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1641472981656222, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 3220, "file_size": 5104328, "table_properties": {"data_size": 4982595, "index_size": 70910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 49989, "raw_key_size": 973446, "raw_average_key_size": 48, "raw_value_size": 4492366, "raw_average_value_size": 224, "num_data_blocks": 1298, "num_entries": 19980, "num_deletions": 10845, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1641472981, "oldest_key_time": 3, "file_creation_time": 0}}
debug -9> 2022-01-06T12:43:01.688+0000 7feaa5390080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1641472981692873, "cf_name": "p-2", "job": 1, "event": "table_file_creation", "file_number": 3221, "file_size": 5840600, "table_properties": {"data_size": 5701923, "index_size": 81198, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 56645, "raw_key_size": 1103994, "raw_average_key_size": 48, "raw_value_size": 5146222, "raw_average_value_size": 227, "num_data_blocks": 1485, "num_entries": 22623, "num_deletions": 12166, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "p-2", "column_family_id": 6, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1641472981, "oldest_key_time": 3, "file_creation_time": 0}}
debug -8> 2022-01-06T12:43:01.688+0000 7feaa5390080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1641472981694121, "cf_name": "O-0", "job": 1, "event": "table_file_creation", "file_number": 3222, "file_size": 73885, "table_properties": {"data_size": 72021, "index_size": 588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 453, "raw_key_size": 9444, "raw_average_key_size": 60, "raw_value_size": 63028, "raw_average_value_size": 406, "num_data_blocks": 18, "num_entries": 155, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "O-0", "column_family_id": 7, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1641472981, "oldest_key_time": 3, "file_creation_time": 0}}
debug -7> 2022-01-06T12:43:01.688+0000 7feaa5390080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1641472981695243, "cf_name": "O-1", "job": 1, "event": "table_file_creation", "file_number": 3223, "file_size": 71023, "table_properties": {"data_size": 69158, "index_size": 589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 453, "raw_key_size": 9028, "raw_average_key_size": 61, "raw_value_size": 60508, "raw_average_value_size": 408, "num_data_blocks": 18, "num_entries": 148, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "O-1", "column_family_id": 8, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1641472981, "oldest_key_time": 3, "file_creation_time": 0}}
debug -6> 2022-01-06T12:43:01.692+0000 7feaa5390080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1641472981696397, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 3224, "file_size": 75263, "table_properties": {"data_size": 73370, "index_size": 617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 453, "raw_key_size": 9679, "raw_average_key_size": 60, "raw_value_size": 64238, "raw_average_value_size": 404, "num_data_blocks": 19, "num_entries": 159, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1641472981, "oldest_key_time": 3, "file_creation_time": 0}}
debug -5> 2022-01-06T12:43:01.696+0000 7feaa5390080 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1641472981700113, "cf_name": "L", "job": 1, "event": "table_file_creation", "file_number": 3225, "file_size": 338198, "table_properties": {"data_size": 335953, "index_size": 1100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 325, "raw_key_size": 1712, "raw_average_key_size": 16, "raw_value_size": 333803, "raw_average_value_size": 3119, "num_data_blocks": 39, "num_entries": 107, "num_deletions": 68, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "L", "column_family_id": 10, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1641472981, "oldest_key_time": 3, "file_creation_time": 0}}
debug -4> 2022-01-06T12:43:01.784+0000 7feaa5390080 1 bluefs _allocate unable to allocate 0x80000 on bdev 1, allocator name block, allocator type bitmap, capacity 0xeffc00000, block size 0x1000, free 0x10c7c2000, fragmentation 1, allocated 0x0
debug -3> 2022-01-06T12:43:01.784+0000 7feaa5390080 -1 bluefs _allocate allocation failed, needed 0x7b2e8
debug -2> 2022-01-06T12:43:01.784+0000 7feaa5390080 -1 bluefs _flush_range allocated: 0xc90000 offset: 0xc8a944 length: 0x809a4
debug -1> 2022-01-06T12:43:01.792+0000 7feaa5390080 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.6/rpm/el8/BUILD/ceph-16.2.6/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7feaa5390080 time 2022-01-06T12:43:01.789216+0000
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.6/rpm/el8/BUILD/ceph-16.2.6/src/os/bluestore/BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xe5) [0x558699f6ac8c]
2: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned long)+0x1131) [0x55869a661901]
3: (BlueFS::_flush(BlueFS::FileWriter*, bool, bool*)+0x90) [0x55869a661be0]
4: (BlueFS::_flush(BlueFS::FileWriter*, bool, std::unique_lock<std::mutex>&)+0x32) [0x55869a672cf2]
5: (BlueRocksWritableFile::Append(rocksdb::Slice const&)+0x11b) [0x55869a68b32b]
6: (rocksdb::LegacyWritableFileWrapper::Append(rocksdb::Slice const&, rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x1f) [0x55869ab1dacf]
7: (rocksdb::WritableFileWriter::WriteBuffered(char const*, unsigned long)+0x58a) [0x55869ac2f81a]
8: (rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x2d0) [0x55869ac30c70]
9: (rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0xb6) [0x55869ad4c416]
10: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice const&, rocksdb::BlockHandle*, bool)+0x26c) [0x55869ad4cd5c]
11: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*, rocksdb::BlockHandle*, bool)+0x3c) [0x55869ad4d47c]
12: (rocksdb::BlockBasedTableBuilder::Flush()+0x6d) [0x55869ad4d50d]
13: (rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Slice const&)+0x2b8) [0x55869ad50978]
14: (rocksdb::BuildTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*, rocksdb::FileSystem*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::FileOptions const&, rocksdb::TableCache*, rocksdb::InternalIteratorBase<rocksdb::Slice>*, std::vector<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> >, std::allocator<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> > > >, rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&, std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> >, std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long, rocksdb::SnapshotChecker*, rocksdb::CompressionType, unsigned long, rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*, rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int, rocksdb::Env::IOPriority, rocksdb::TableProperties*, int, unsigned long, unsigned long, rocksdb::Env::WriteLifeTimeHint, unsigned long)+0xa45) [0x55869acfb3d5]
15: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0xcf5) [0x55869ab60415]
16: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool, bool*)+0x1c2e) [0x55869ab62b4e]
17: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0xae8) [0x55869ab63ea8]
18: (rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool)+0x59d) [0x55869ab5dbcd]
19: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x15) [0x55869ab5ef65]
20: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10c1) [0x55869aad6ec1]
21: (BlueStore::_open_db(bool, bool, bool)+0x948) [0x55869a55c4d8]
22: (BlueStore::_open_db_and_around(bool, bool)+0x2f7) [0x55869a5c6657]
23: (BlueStore::_mount()+0x204) [0x55869a5c9514]
24: (OSD::init()+0x380) [0x55869a09ea10]
25: main()
26: __libc_start_main()
27: _start()
debug 0> 2022-01-06T12:43:01.796+0000 7feaa5390080 -1 *** Caught signal (Aborted) **
in thread 7feaa5390080 thread_name:ceph-osd
The above is the log from the OSD that is down.
I started reading it and found a useful line to search on Google:
bluefs _allocate unable to allocate 0x80000 on bdev 1, allocator name block, allocator type bitmap, capacity 0xeffc00000, block size 0x1000, free 0x10c7c2000, fragmentation 1, allocated 0x0
I found a bug report that was related to version 16.2.1 (I use 16.2.6):
https://tracker.ceph.com/issues/50656
I wanted to get a dump from my OSD (I don't completely understand what they say in that issue):
ceph daemon osd.1 bluestore allocator dump block
Can't get admin socket path: unable to get conf option admin_socket for osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid types are: auth, mon, osd, mds, mgr, client\n"
I deployed the cluster with cephadm, which uses containers, so I think I cannot access the admin socket this way. This led me to ceph-bluestore-tool to check the state of my physical disk (see the capacity, run fsck or repair), but ceph-bluestore-tool needs the --path of the OSD, which I cannot provide from the host, and my containers keep crashing so I can't run the command inside a container either. I also tried to run the command in the OSD container through cephadm but couldn't find a way to do it.
If you need the full log, tell me (I could not post it due to the character limit), but it was the same crash log.
I don't really understand what's going on.
I tried to use ceph-volume to mount the block device on the host so I could run ceph-bluestore-tool against it (fsck or repair); it needs a --path argument pointing to the OSD files. I don't even know whether using ceph-volume this way is correct or what it is built for; as I said, I am new to Ceph.
I tried to use cephadm to run ceph-bluestore-tool commands in the crashed OSDs but couldn't (the socket error I mentioned above).
My SSD OSDs were filled up to 94%, so I guess the other one still has free space.
The only lead I could find on the internet did not work.
I'm really desperate to find the answer. I will be happy if you can help me, even if you just point me to a document to read or something to learn.
Some information about my cluster is posted below:
[Screenshot: the Ceph Dashboard]
>> ceph -s
cluster:
id: 1ad06d18-3e72-11ec-8684-fd37cdad1703
health: HEALTH_WARN
mons ceph-n01,ceph-n02,ceph-n04 are low on available space
2 backfillfull osd(s)
Degraded data redundancy: 4282822/12848466 objects degraded (33.333%), 64 pgs degraded, 96 pgs undersized
3 pool(s) backfillfull
6 daemons have recently crashed
services:
mon: 3 daemons, quorum ceph-n01,ceph-n04,ceph-n02 (age 7h)
mgr: ceph-n02.xyrntr(active, since 4w), standbys: ceph-n04.srrvqt
mds: 1/1 daemons up, 1 standby
osd: 3 osds: 2 up (since 6h), 2 in (since 6h)
data:
volumes: 1/1 healthy
pools: 3 pools, 96 pgs
objects: 4.28M objects, 42 GiB
usage: 113 GiB used, 6.7 GiB / 120 GiB avail
pgs: 4282822/12848466 objects degraded (33.333%)
64 active+undersized+degraded
32 active+undersized
>> ceph orch ls
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
alertmanager ?:9093,9094 1/1 4m ago 8w count:1
crash 3/3 6m ago 8w *
grafana ?:3000 1/1 4m ago 8w count:1
mds.cephfs 2/2 6m ago 7w label:mds
mgr 2/2 6m ago 8w label:mgr
mon 3/5 6m ago 3w count:5
node-exporter ?:9100 3/3 6m ago 8w *
osd 3/3 6m ago - <unmanaged>
prometheus ?:9095 1/1 4m ago 8w count:1
>> ceph orch host ls
HOST ADDR LABELS STATUS
ceph-n01 192.168.2.20 _admin mon mds
ceph-n02 192.168.2.21 mon mgr
ceph-n04 192.168.2.23 _admin mon mds mgr
>> ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17578 root default
-3 0.05859 host ceph-n01
1 ssd 0.05859 osd.1 up 1.00000 1.00000
-5 0.05859 host ceph-n02
0 ssd 0.05859 osd.0 up 1.00000 1.00000
-10 0.05859 host ceph-n04
2 hdd 0.05859 osd.2 down 0 1.00000
>> ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
1 ssd 0.05859 1.00000 60 GiB 57 GiB 50 GiB 2.1 GiB 4.6 GiB 3.4 GiB 94.37 1.00 96 up
0 ssd 0.05859 1.00000 60 GiB 57 GiB 50 GiB 2.1 GiB 4.6 GiB 3.4 GiB 94.41 1.00 96 up
2 hdd 0.05859 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 down
TOTAL 120 GiB 113 GiB 100 GiB 4.3 GiB 9.1 GiB 6.7 GiB 94.39
MIN/MAX VAR: 1.00/1.00 STDDEV: 0.02
node01: >> free -m
total used free shared buff/cache available
Mem: 7956 7121 175 0 659 573
Swap: 10238 3748 6490
node02: >> free -m
total used free shared buff/cache available
Mem: 7956 7121 175 0 659 573
Swap: 10238 3748 6490
node04: >> free -m
total used free shared buff/cache available
Mem: 7922 2260 3970 1 1690 5371
Swap: 10238 642 9596

How to get rid of Distributed Denial of Service or mining happening on my Linux box

Some strange process is running and eating up all the resources.
I have killed it many times, but it still keeps coming back and starting again.
Your help is highly appreciated!
Here is the pstree output for reference:
$ pstree -s
init─┬─-bash
├─-bash───1023*[{-bash}]
├─agetty
├─atd
├─auditd───{auditd}
├─crond
├─dbus-daemon
├─dhclient
├─irqbalance
├─java───311*[{java}]
├─java───81*[{java}]
├─java───55*[{java}]
├─6*[mingetty]
├─ntpd
├─rngd
├─rpc.statd
├─rpcbind
├─rsyslogd───3*[{rsyslogd}]
├─2*[sendmail]
├─sensu-client───{sensu-client}
├─sshd─┬─sshd───sshd───bash───sudo───su───bash
│ ├─2*[sshd───sshd───bash]
│ ├─sshd───sshd───bash───sudo───su───bash───tail
│ └─sshd───sshd───bash───pstree
├─udevd───2*[udevd]
└─yMPzpi───9*[{yMPzpi}]
yMPzpi: this is the one that keeps starting again under a different name even after I kill it.
Here is the strace output for reference:
$ strace -p 294891
Process 294891 attached
restart_syscall(<... resuming interrupted call ...>) = 1
poll([{fd=4, events=POLLIN}], 1, 300000) = 1 ([{fd=4, revents=POLLIN}])
clock_gettime(CLOCK_REALTIME, {1540834737, 507976501}) = 0
poll([{fd=4, events=POLLIN}], 1, 60000) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, "{\"id\":1,\"jsonrpc\":\"2.0\",\"error\":"..., 2044, 0, NULL, NULL) = 63
clock_gettime(CLOCK_REALTIME, {1540834737, 508379955}) = 0
poll([{fd=4, events=POLLIN}], 1, 90000) = 1 ([{fd=4, revents=POLLIN}])
poll([{fd=4, events=POLLIN}], 1, 300000) = 1 ([{fd=4, revents=POLLIN}])
clock_gettime(CLOCK_REALTIME, {1540834778, 177325012}) = 0
poll([{fd=4, events=POLLIN}], 1, 60000) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, "{\"jsonrpc\":\"2.0\",\"method\":\"job\","..., 2044, 0, NULL, NULL) = 253
clock_gettime(CLOCK_REALTIME, {1540834778, 177412756}) = 0
clock_gettime(CLOCK_REALTIME, {1540834778, 177475706}) = 0
poll([{fd=4, events=POLLIN}], 1, 90000) = 1 ([{fd=4, revents=POLLIN}])
poll([{fd=4, events=POLLIN}], 1, 300000) = 1 ([{fd=4, revents=POLLIN}])
clock_gettime(CLOCK_REALTIME, {1540834846, 399415744}) = 0
poll([{fd=4, events=POLLIN}], 1, 60000) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, "{\"jsonrpc\":\"2.0\",\"method\":\"job\","..., 2044, 0, NULL, NULL) = 253
clock_gettime(CLOCK_REALTIME, {1540834846, 399486835}) = 0
clock_gettime(CLOCK_REALTIME, {1540834846, 399551309}) = 0
poll([{fd=4, events=POLLIN}], 1, 90000
^CProcess 294891 detached
<detached ...>

I have a 12x4 matrix, and I need to subtract the 3rd-column elements of rows that are different

I have a 12x4 matrix in MATLAB,
A =[-1, 3, 152, 41.5 ;
3, 9, 152, 38.7 ;
9, 16, 152, 38.7 ;
16, 23, 129, 53.5 ;
23, 29, 129, 53.5 ;
29, 30, 100, 100 ;
30, 30.5, 83, 83 ;
30.5, 31, 83, 83 ;
31, 35, 83, 83 ;
35, 41, 129, 53.5 ;
41, 48, 129, 53.5 ;
48, 55, 152, 38.7 ] ;
and I need to find the changes between rows by subtracting the 3rd-column element of the 2nd row from the 3rd-column element of the previous row when they are different; if they are the same, move on to the 3rd row, and so on.
The answer should be in the form:
B = [16, 23;
29, 29;
30, 17;
35, 46;
48, 23]
For example, the 3rd-column elements of the 3rd and 4th rows are different, so subtracting them gives 23. The first-column entry of that row of B comes from the first-column element of the 4th row (here 16).
%Given matrix
A =[-1, 3, 152, 41.5 ;
3, 9, 152, 38.7 ;
9, 16, 152, 38.7 ;
16, 23, 129, 53.5 ;
23, 29, 129, 53.5 ;
29, 30, 100, 100 ;
30, 30.5, 83, 83 ;
30.5, 31, 83, 83 ;
31, 35, 83, 83 ;
35, 41, 129, 53.5 ;
41, 48, 129, 53.5 ;
48, 55, 152, 38.7 ] ;
B=A(:,2:3); %Taking out the columns of our interest
B = B([diff(B(:,2))~=0; true],:); %Storing only those rows whose consecutive elements in the third column of A are different
B=[B(1:end-1,1) abs(diff(B(:,2)))] % First column is according to your condition and second column is the difference
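The same diff-and-filter idea, sketched in Scala on the (2nd, 3rd)-column pairs of A, purely as an illustration of the logic:
// Pairs of (2nd column, 3rd column) taken from A.
val a = Vector(
  (3.0, 152.0), (9.0, 152.0), (16.0, 152.0), (23.0, 129.0),
  (29.0, 129.0), (30.0, 100.0), (30.5, 83.0), (31.0, 83.0),
  (35.0, 83.0), (41.0, 129.0), (48.0, 129.0), (55.0, 152.0))

// Keep the rows where the 3rd-column value differs from the next row's,
// plus the final row (mirrors [diff(B(:,2))~=0; true]).
val kept = a.zipWithIndex.collect {
  case ((x, v), i) if i == a.length - 1 || a(i + 1)._2 != v => (x, v)
}

// Pair each kept row with the absolute change to the next kept row.
val b = kept.sliding(2).collect {
  case Seq((x1, v1), (_, v2)) => (x1, math.abs(v2 - v1))
}.toVector
// b == Vector((16.0,23.0), (29.0,29.0), (30.0,17.0), (35.0,46.0), (48.0,23.0))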

Create a new DenseMatrix from a submatrix in Breeze using Scala

I have a DenseMatrix (original). I slice it to remove the last column (subset). After that I want to access the data in the subset. However, subset.data still points to the data of the old DenseMatrix (original). Any idea what I'm missing here and how to fix this?
original: breeze.linalg.DenseMatrix[Int] =
1 200 3 0
10 201 4 0
111 200 0 100
150 195 0 160
200 190 0 150
scala> val numcols = original.cols
numcols: Int = 4
scala> val subset = original(::, 0 to numcols - 2)
subset: breeze.linalg.DenseMatrix[Int] =
1 200 3
10 201 4
111 200 0
150 195 0
200 190 0
scala> subset.data
res0: Array[Int] = Array(1, 10, 111, 150, 200, 200, 201, 200, 195, 190, 3, 4, 0, 0, 0, 0, 0, 100, 160, 150)
scala> subset.data.size
res1: Int = 20
Never mind, I figured out one way of doing it, using the following:
scala> subset.toDenseMatrix.data
res10: Array[Int] = Array(1, 10, 111, 150, 200, 200, 201, 200, 195, 190, 3, 4, 0, 0, 0)
scala> subset.toDenseMatrix.data.size
res11: Int = 15
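For completeness, here is a small self-contained sketch of the same fix (values copied from the example above; assumes Breeze is on the classpath):
import breeze.linalg.DenseMatrix

object SliceCopyExample extends App {
  val original = DenseMatrix(
    (1,   200, 3, 0),
    (10,  201, 4, 0),
    (111, 200, 0, 100),
    (150, 195, 0, 160),
    (200, 190, 0, 150))

  // Slicing with a column range returns a view over the original backing array.
  val subset = original(::, 0 to original.cols - 2)
  println(subset.data.length)               // 20 -- still the original storage

  // toDenseMatrix materialises the slice into a fresh, independent matrix.
  println(subset.toDenseMatrix.data.length) // 15 -- only the sliced columns
}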