I'm trying to use Segmentation in OTB with different settings to get optimal segments for a classification. I already have two .shp files produced by "Segmentation", but now it isn't working anymore. I suspect it is a memory-capacity issue?
I'm using QGIS 3.22.9, Python 3.9.5, GDAL 3.5.1 and OTB 8.0.1. Could anybody check the protocol below and help me?
I tried to uninstall the plugin, but that isn't possible because the plugin is a "core extension".
Input parameters:
{ 'in' : 'E:/IP/Daten/DOP10/DOP10_gesamt.tif', 'filter' : 'meanshift', 'filter.meanshift.spatialr' : 5, 'filter.meanshift.ranger' : 15, 'filter.meanshift.thres' : 0.1, 'filter.meanshift.maxiter' : 100, 'filter.meanshift.minsize' : 2500, 'mode' : 'vector', 'mode.vector.out' : 'E:/IP/Daten/shp/segmentation_CIR_minimumregionsize2500.shp', 'mode.vector.outmode' : 'ulco', 'mode.vector.inmask' : None, 'mode.vector.neighbor' : True, 'mode.vector.stitch' : True, 'mode.vector.minsize' : 1, 'mode.vector.simplify' : 0.1, 'mode.vector.layername' : '', 'mode.vector.fieldname' : '', 'mode.vector.tilesize' : 1024, 'mode.vector.startlabel' : 1, 'mode.vector.ogroptions' : '', 'outputpixeltype' : 5 }
ERROR 1: Error in psSHP->sHooks.FWrite() while writing object of 6760 bytes to .shp file: No error
ERROR 1: Failure writing DBF record 14496.
ERROR 1: Failure writing .shp header: No error
2022-08-22 20:04:22 (INFO) Segmentation: Default RAM limit for OTB is 256 MB
2022-08-22 20:04:22 (INFO) Segmentation: GDAL maximum cache size is 405 MB
2022-08-22 20:04:22 (INFO) Segmentation: OTB will use at most 4 threads
2022-08-22 20:04:22 (INFO): Loading metadata from official product
2022-08-22 20:04:22 (INFO) Segmentation: Use threaded Mean-shift segmentation.
2022-08-22 20:04:22 (INFO) Segmentation: Use 8 connected neighborhood.
2022-08-22 20:04:22 (INFO) Segmentation: Simplify the geometry.
2022-08-22 20:04:22 (INFO) Segmentation: Large scale segmentation mode which output vector data
2022-08-22 20:04:22 (INFO): Estimation will be performed in 400 blocks of 1024x1024 pixels
2022-08-23 08:13:50 (FATAL) Segmentation: itk::ERROR: Cannot create a new feature in the layer <segmentation_CIR_minimumregionsize2500>: Error in psSHP->sHooks.FWrite() while writing object of 6760 bytes to .shp file: No error
Execution completed in 43770.33 seconds (12 hours 9 minutes 30 seconds)
Results:
{'mode.vector.out': 'E:/IP/Daten/shp/segmentation_CIR_minimumregionsize2500.shp'}
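For reference, here is a minimal sketch of the same protocol using the OTB Python bindings instead of the QGIS dialog, with paths and parameter values copied from the input dictionary above. It assumes the otbApplication module shipped with OTB 8.0.1 is on the Python path; it is only meant to make the settings easier to check, not a fix.
import otbApplication as otb

# Recreate the Segmentation call logged above (values taken from the input parameters)
app = otb.Registry.CreateApplication("Segmentation")
app.SetParameterString("in", "E:/IP/Daten/DOP10/DOP10_gesamt.tif")
app.SetParameterString("filter", "meanshift")
app.SetParameterInt("filter.meanshift.spatialr", 5)
app.SetParameterFloat("filter.meanshift.ranger", 15.0)
app.SetParameterFloat("filter.meanshift.thres", 0.1)
app.SetParameterInt("filter.meanshift.maxiter", 100)
app.SetParameterInt("filter.meanshift.minsize", 2500)
app.SetParameterString("mode", "vector")
app.SetParameterString("mode.vector.out", "E:/IP/Daten/shp/segmentation_CIR_minimumregionsize2500.shp")
app.SetParameterString("mode.vector.outmode", "ulco")
app.SetParameterInt("mode.vector.tilesize", 1024)
app.SetParameterFloat("mode.vector.simplify", 0.1)
app.ExecuteAndWriteOutput()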
I am trying to use zbarcam as a QR code reader.
Build System: Buildroot
Hardware: NXP iMX8MM custom board.
Camera: Omnivision camera
The camera driver was ported successfully and I can run it. However, I get the following error output when I run the zbarcam app.
Can someone please help me with how to proceed with debugging?
# zbarcam --verbose=9
_zbar_video_open: opened camera device /dev/video0 (fd=5)
unknown pixelformat:' '
mx6s-csi 32e20000.csi1_bridge: Fourcc format (0x00000000) invalid.
unknown pixelformat:' '
mx6s-csi 32e20000.csi1_bridge: Fourcc format (0x00000000) invalid.
_zbar_v4l2_probe: i.MX6S_CSI on platform:32e20000.csi1_bridge driver mx6s-csi (version 5.4.24)
_zbar_v4l2_probe: capabilities: CAPTURE READWRITE STREAMING
v4l2_reset_crop: crop bounds: 0 x 0 @ (0, 0)
v4l2_reset_crop: current crop win: 0 x 0 @ (0, 0) aspect 1 / 1
v4l2_probe_formats: enumerating supported formats:
v4l2_probe_formats: [0] RGB3 : RGB3 EMULATED
v4l2_probe_formats: [1] BGR3 : BGR3 EMULATED
v4l2_probe_formats: [2] YU12 : YU12 EMULATED
v4l2_probe_formats: [3] YV12 : YV12 EMULATED
v4l2_probe_formats: Max supported size: 0 x 0
v4l2_probe_formats: Found 0 formats and 4 emulated formats.
v4l2_probe_formats: current format: (00000000) 0 x 0 INTERLACED (line=0x0 size=0x0)
v4l2_probe_formats: setting requested size: 40960 x 30720
v4l2_probe_formats: set FAILED...trying to recover original format
v4l2_probe_formats: final format: (00000000) 0 x 0 INTERLACED (line=0x0 size=0x0)
WARNING: zbar video in v4l2_probe_iomode():
system error: USERPTR failed. Falling back to mmap: Invalid argument (22)
_zbar_v4l2_probe: using I/O mode: MMAP
ERROR: zbar processor in _zbar_processor_open():
unsupported request: not compiled with output window support
ERROR: zbar processor in zbar_processor_init():
system error: spawning input thread: Invalid argument (22)
ERROR: zbar processor in zbar_processor_init():
system error: spawning input thread: Invalid argument (22)
Thanks,
Avijit
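One way to narrow down whether the problem is in zbar or in the V4L2 driver is to try negotiating a capture format outside of zbar. Below is a hypothetical sanity check using OpenCV's V4L2 backend; it assumes OpenCV with Python bindings is available in the Buildroot image (which may not be the case), and the requested format and frame size are only examples.
import cv2

# Try to open the same device and request a common YUV420 format,
# independently of zbar, to see whether the mx6s-csi driver accepts anything.
cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YU12"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()
print("frame captured:", ok, None if frame is None else frame.shape)
cap.release()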
I am stuck on a problem that I will describe in detail below. For three days I have tried a lot of different things, and none of them worked.
If anyone has an idea of what to do, please help!
Here is my error message:
the call to "ft_selectdata" took 0 seconds
preprocessing
Out of memory. Type "help memory" for your options.
Error in ft_preproc_dftfilter (line 187)
tmp = exp(2*1i*pi*freqs(:)*time); % complex sin and cos
Error in ft_preproc_dftfilter (line 144)
filt = ft_preproc_dftfilter(filt, Fs, Fl(i), 'dftreplace', dftreplace, 'dftbandwidth',
dftbandwidth(i), 'dftneighbourwidth', dftneighbourwidth(i)); % enumerate all options
Error in preproc (line 464)
dat = ft_preproc_dftfilter(dat, fsample, cfg.dftfreq, optarg{:});
Error in ft_preprocessing (line 375)
[dataout.trial{i}, dataout.label, dataout.time{i}, cfg] = preproc(data.trial{i}, data.label,
data.time{i}, cfg, begpadding, endpadding);
Error in EEG_Prosocial_script (line 101)
data_intpl = ft_preprocessing(cfg, allData_preprosses);
187 tmp = exp(2*1i*pi*freqs(:)*time); % complex sin and cos
Here is some information from MATLAB about my computer and about the characteristics of the computation:
Maximum possible array: 7406 MB (7.766e+09 bytes)*
Memory available for all arrays: 7406 MB (7.766e+09 bytes) *
Memory used by MATLAB: 4195 MB (4.398e+09 bytes)
Physical Memory (RAM): 12206 MB (1.280e+10 bytes)
Limited by System Memory (physical + swap file) available.
K>> whos
  Name        Size              Bytes  Class     Attributes

  freqs       7753x1            62024  double
  li          1x1                  16  double    complex
  time        1x1984512      15876096  double
Here is the configuration of the computer that failed to run the script (Alienware Aurora R4):
RAM: 4 GB free / 12 GB @ 1.6 GHz --> 2x (4 GB 1600 MHz) + 2x (2 GB 1600 MHz)
Intel Core i7-3820, 4 cores / 8 threads, 3.7 GHz, 1 CPU
NVIDIA GeForce GTX 690, 2 GB
RAM: Kingston KVT8FP HYC
Hard disk: Kingston SSD, 250 GB, SATA 3
This code works on this computer (Dell Inspiron 14-500), configuration:
RAM: 4 GB DDR4 2666 MHz (4 GB x 1)
Intel® Core™ i5-8265U, 8th generation (6 MB cache, 3.9 GHz)
Intel® UHD Graphics 620
Hard disk: SATA 2.5", 500 GB, 5400 rpm
Thank you
Kind regards,
By doing freqs(:)*time you are trying to create a 7753x1984512 array, and you don't have the memory for that: it would need roughly 123 gigabytes (1.2309e+11 bytes), while your computer has about 7e+09 bytes available.
See for example this small case:
f=rand(7,1);
t=rand(1,19);
size(f(:)*t)
ans =
7 19
What you want to do is probably a for loop over the time elements (or blocks of them), as sketched below.
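Purely to sketch the idea (in numpy notation rather than MATLAB, with random stand-in data), processing the time axis in chunks keeps peak memory bounded because the full 7753 x 1984512 complex matrix is never materialised; the same loop structure carries over to MATLAB:
import numpy as np

freqs = np.random.rand(7753)      # stand-in for the real frequency vector
time = np.random.rand(1984512)    # stand-in for the real time vector

chunk = 1000                      # 7753 * 1000 complex doubles ~ 124 MB per chunk
for start in range(0, time.size, chunk):
    t = time[start:start + chunk]
    tmp = np.exp(2j * np.pi * np.outer(freqs, t))   # only this slice exists in memory
    # ... use tmp for this block of time samples, then let it be freed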
I want to understand how the Kafka consumer performance test works and how to interpret some of the numbers it reports.
Below is the test I ran and the output I got. My questions are:
The values reported for rebalance.time.ms, fetch.time.ms, fetch.MB.sec and fetch.nMsg.sec are 1593109326098, -1593108732333, -0.0003 and -0.2800. Can you explain how it can report such huge and negative numbers? They don't make sense to me.
Everything from the "Metric Name Value" line onward is reported because of the --print-metrics flag. What is the difference between the metrics reported by default and those reported with this flag? How are they calculated, and where can I read about what they mean?
No matter whether I scale the number of consumers running in parallel or scale the network and I/O threads at the broker, the consumer-fetch-manager-metrics:fetch-latency-avg metric stays almost the same. Can you explain this? With more consumers pulling data, fetch latency should go up; similarly, for a given consumption rate, if I reduce the I/O and network thread settings at the broker, shouldn't latency scale higher?
Here is the command I ran:
[root@oak-clx17 kafka_2.12-2.5.0]# bin/kafka-consumer-perf-test.sh --topic topic_test8_cons_test1 --threads 1 --broker-list clx20:9092 --messages 500000000 --consumer.config config/consumer.properties --print-metrics
And the results:
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms,fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
WARNING: Exiting before consuming the expected number of messages: timeout (10000 ms) exceeded. You can use the --timeout option to increase the timeout.
2020-06-25 11:22:05:814, 2020-06-25 11:31:59:579, 435640.7686, 733.6922, 446096147, 751300.8463, 1593109326098, -1593108732333, -0.0003, -0.2800
Metric Name Value
consumer-coordinator-metrics:assigned-partitions:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-coordinator-metrics:commit-latency-avg:{client-id=consumer-perf-consumer-25533-1} : 2.700
consumer-coordinator-metrics:commit-latency-max:{client-id=consumer-perf-consumer-25533-1} : 4.000
consumer-coordinator-metrics:commit-rate:{client-id=consumer-perf-consumer-25533-1} : 0.230
consumer-coordinator-metrics:commit-total:{client-id=consumer-perf-consumer-25533-1} : 119.000
consumer-coordinator-metrics:failed-rebalance-rate-per-hour:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-coordinator-metrics:failed-rebalance-total:{client-id=consumer-perf-consumer-25533-1} : 1.000
consumer-coordinator-metrics:heartbeat-rate:{client-id=consumer-perf-consumer-25533-1} : 0.337
consumer-coordinator-metrics:heartbeat-response-time-max:{client-id=consumer-perf-consumer-25533-1} : 6.000
consumer-coordinator-metrics:heartbeat-total:{client-id=consumer-perf-consumer-25533-1} : 197.000
consumer-coordinator-metrics:join-rate:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-coordinator-metrics:join-time-avg:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:join-time-max:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:join-total:{client-id=consumer-perf-consumer-25533-1} : 1.000
consumer-coordinator-metrics:last-heartbeat-seconds-ago:{client-id=consumer-perf-consumer-25533-1} : 2.000
consumer-coordinator-metrics:last-rebalance-seconds-ago:{client-id=consumer-perf-consumer-25533-1} : 593.000
consumer-coordinator-metrics:partition-assigned-latency-avg:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:partition-assigned-latency-max:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:partition-lost-latency-avg:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:partition-lost-latency-max:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:partition-revoked-latency-avg:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-coordinator-metrics:partition-revoked-latency-max:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-coordinator-metrics:rebalance-latency-avg:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:rebalance-latency-max:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:rebalance-latency-total:{client-id=consumer-perf-consumer-25533-1} : 83.000
consumer-coordinator-metrics:rebalance-rate-per-hour:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-coordinator-metrics:rebalance-total:{client-id=consumer-perf-consumer-25533-1} : 1.000
consumer-coordinator-metrics:sync-rate:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-coordinator-metrics:sync-time-avg:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:sync-time-max:{client-id=consumer-perf-consumer-25533-1} : NaN
consumer-coordinator-metrics:sync-total:{client-id=consumer-perf-consumer-25533-1} : 1.000
consumer-fetch-manager-metrics:bytes-consumed-rate:{client-id=consumer-perf-consumer-25533-1, topic=topic_test8_cons_test1} : 434828205.989
consumer-fetch-manager-metrics:bytes-consumed-rate:{client-id=consumer-perf-consumer-25533-1} : 434828205.989
consumer-fetch-manager-metrics:bytes-consumed-total:{client-id=consumer-perf-consumer-25533-1, topic=topic_test8_cons_test1} : 460817319851.000
consumer-fetch-manager-metrics:bytes-consumed-total:{client-id=consumer-perf-consumer-25533-1} : 460817319851.000
consumer-fetch-manager-metrics:fetch-latency-avg:{client-id=consumer-perf-consumer-25533-1} : 58.870
consumer-fetch-manager-metrics:fetch-latency-max:{client-id=consumer-perf-consumer-25533-1} : 503.000
consumer-fetch-manager-metrics:fetch-rate:{client-id=consumer-perf-consumer-25533-1} : 48.670
consumer-fetch-manager-metrics:fetch-size-avg:{client-id=consumer-perf-consumer-25533-1, topic=topic_test8_cons_test1} : 9543108.526
consumer-fetch-manager-metrics:fetch-size-avg:{client-id=consumer-perf-consumer-25533-1} : 9543108.526
consumer-fetch-manager-metrics:fetch-size-max:{client-id=consumer-perf-consumer-25533-1, topic=topic_test8_cons_test1} : 11412584.000
consumer-fetch-manager-metrics:fetch-size-max:{client-id=consumer-perf-consumer-25533-1} : 11412584.000
consumer-fetch-manager-metrics:fetch-throttle-time-avg:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-fetch-manager-metrics:fetch-throttle-time-max:{client-id=consumer-perf-consumer-25533-1} : 0.000
consumer-fetch-manager-metrics:fetch-total:{client-id=consumer-perf-consumer-25533-1} : 44889.000
Exception in thread "main" java.util.IllegalFormatConversionException: f != java.lang.Integer
at java.base/java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4426)
at java.base/java.util.Formatter$FormatSpecifier.printFloat(Formatter.java:2951)
at java.base/java.util.Formatter$FormatSpecifier.print(Formatter.java:2898)
at java.base/java.util.Formatter.format(Formatter.java:2673)
at java.base/java.util.Formatter.format(Formatter.java:2609)
at java.base/java.lang.String.format(String.java:2897)
at scala.collection.immutable.StringLike.format(StringLike.scala:354)
at scala.collection.immutable.StringLike.format$(StringLike.scala:353)
at scala.collection.immutable.StringOps.format(StringOps.scala:33)
at kafka.utils.ToolsUtils$.$anonfun$printMetrics$3(ToolsUtils.scala:60)
at kafka.utils.ToolsUtils$.$anonfun$printMetrics$3$adapted(ToolsUtils.scala:58)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.utils.ToolsUtils$.printMetrics(ToolsUtils.scala:58)
at kafka.tools.ConsumerPerformance$.main(ConsumerPerformance.scala:82)
at kafka.tools.ConsumerPerformance.main(ConsumerPerformance.scala)
I've written a post on this: https://medium.com/metrosystemsro/apache-kafka-how-to-test-performance-for-clients-configured-with-ssl-encryption-3356d3a0d52b
The expectation according to this KIP is for rebalance.time.ms and fetch.time.ms to show the total rebalance time for the consumer group and the total fetching time excluding the rebalance time.
As far as I can tell, as of Apache Kafka 2.6.0 this is still a work in progress, and currently the output values are timestamps in Unix epoch time.
fetch.MB.sec and fetch.nMsg.sec are intended to show the average quantity of messages consumed per second (in MB and as a count).
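As a quick consistency check (plain arithmetic on the values printed in the run above, under the assumption that the tool subtracts the misreported rebalance value from the total elapsed time), the odd numbers reproduce exactly:
# Values copied from the output above
start_to_end_ms = 593765             # 11:22:05.814 -> 11:31:59.579
rebalance_time_ms = 1593109326098    # actually a Unix-epoch timestamp in ms, not a duration

fetch_time_ms = start_to_end_ms - rebalance_time_ms
print(fetch_time_ms)                                     # -1593108732333, as reported
print(round(435640.7686 / (fetch_time_ms / 1000), 4))    # ~ -0.0003 (fetch.MB.sec)
print(round(446096147 / (fetch_time_ms / 1000), 4))      # ~ -0.28   (fetch.nMsg.sec)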
See https://kafka.apache.org/documentation/#consumer_group_monitoring for the consumer-group metrics listed with the --print-metrics flag.
fetch-latency-avg (the average time taken per fetch request) will vary, but this depends a lot on the test setup.
I am trying to import a CSV file into OrientDB 3.0. I have created and tested the JSON config and it works with a smaller dataset, but the dataset that I want to import is around a billion rows (six columns).
The following is the user.json file I am using for the import with oetl:
{
  "source": { "file": { "path": "d1.csv" } },
  "extractor": { "csv": {} },
  "transformers": [
    { "vertex": { "class": "User" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/databases/magriwebdoc",
      "dbType": "graph",
      "classes": [
        { "name": "User", "extends": "V" }
      ],
      "indexes": [
        { "class": "User", "fields": ["id:string"], "type": "UNIQUE" }
      ]
    }
  }
}
This is the console output from the oetl command:
2019-05-22 14:31:15:484 INFO Windows OS is detected, 262144 limit of open files will be set for the disk cache. [ONative]
2019-05-22 14:31:15:647 INFO 8261029888 B/7878 MB/7 GB of physical memory were detected on machine [ONative]
2019-05-22 14:31:15:647 INFO Detected memory limit for current process is 8261029888 B/7878 MB/7 GB [ONative]
2019-05-22 14:31:15:649 INFO JVM can use maximum 455MB of heap memory [OMemoryAndLocalPaginatedEnginesInitializer]
2019-05-22 14:31:15:649 INFO Because OrientDB is running outside a container 12% of memory will be left unallocated according to the setting 'memory.leftToOS' not taking into account heap memory [OMemoryAndLocalPaginatedEnginesInitializer]
2019-05-22 14:31:15:650 INFO OrientDB auto-config DISKCACHE=6,477MB (heap=455MB os=7,878MB) [orientechnologies]
2019-05-22 14:31:15:652 INFO System is started under an effective user : `lenovo` [OEngineLocalPaginated]
2019-05-22 14:31:15:670 INFO WAL maximum segment size is set to 6,144 MB [OrientDBEmbedded]
2019-05-22 14:31:15:701 INFO BEGIN ETL PROCESSOR [OETLProcessor]
2019-05-22 14:31:15:703 INFO [file] Reading from file d1.csv with encoding UTF-8 [OETLFileSource]
2019-05-22 14:31:15:703 INFO Started execution with 1 worker threads [OETLProcessor]
2019-05-22 14:31:16:008 INFO Page size for WAL located in D:\databases\magriwebdoc is set to 4096 bytes. [OCASDiskWriteAheadLog]
2019-05-22 14:31:16:703 INFO + extracted 0 rows (0 rows/sec) - 0 rows -> loaded 0 vertices (0 vertices/sec) Total time: 1001ms [0 warnings, 0 errors] [OETLProcessor]
2019-05-22 14:31:16:770 INFO Storage 'plocal:D:\databases/magriwebdoc' is opened under OrientDB distribution : 3.0.18 - Veloce (build 747595e790a081371496f3bb9c57cec395644d82, branch 3.0.x) [OLocalPaginatedStorage]
2019-05-22 14:31:17:703 INFO + extracted 0 rows (0 rows/sec) - 0 rows -> loaded 0 vertices (0 vertices/sec) Total time: 2001ms [0 warnings, 0 errors] [OETLProcessor]
2019-05-22 14:31:17:954 SEVER ETL process has problem: [OETLProcessor]
2019-05-22 14:31:17:956 INFO END ETL PROCESSOR [OETLProcessor]
2019-05-22 14:31:17:957 INFO + extracted 0 rows (0 rows/sec) - 0 rows -> loaded 0 vertices (0 vertices/sec) Total time: 2255ms [0 warnings, 0 errors] [OETLProcessor]
D:\orientserver\bin>
I know the code is right, but I assume it's more of a memory issue.
Please advise on what I should do.
Have you tried adjusting your memory settings according to the size of the data that you want to process?
From the documentation, you can customize these properties (a sketch of setting the variable follows below):
Configuration - Environment Variables (see the $ORIENTDB_OPTS_MEMORY parameter)
Performance Tuning - Memory Settings
Maybe this could help you.
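Purely as an illustration (how the variable is picked up should be verified against your OrientDB version, and normally you would just set it in the shell before calling oetl.bat), raising the JVM heap for the ETL run could look like this:
import os
import subprocess

# Hypothetical sketch: give the ETL JVM more heap than the 455 MB shown in the log,
# then launch the same import (path taken from the console output above).
env = dict(os.environ, ORIENTDB_OPTS_MEMORY="-Xms2G -Xmx4G")
subprocess.run([r"D:\orientserver\bin\oetl.bat", "user.json"], env=env, check=True)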
Your JSON script seems fine, but you can try deleting the indexes part. I have encountered the same problem because of wrong indexes too; it may be caused by the UNIQUE index constraint. You can try the following (see the sketch after this list):
Delete the indexes part of the JSON script.
If you need this index, make sure to clear your database before you import your dataset.
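For illustration, here is the same user.json from the question with the indexes section removed (all other values unchanged):
{
  "source": { "file": { "path": "d1.csv" } },
  "extractor": { "csv": {} },
  "transformers": [
    { "vertex": { "class": "User" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/databases/magriwebdoc",
      "dbType": "graph",
      "classes": [
        { "name": "User", "extends": "V" }
      ]
    }
  }
}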
I'm trying to train my own dataset on SegNet (with Caffe). I prepared the dataset the same way as in the SegNet tutorial. When I try to run the training, it shows me this error:
I0915 08:33:50.851986 49060 net.cpp:482] Collecting Learning Rate and Weight Decay.
I0915 08:33:50.852017 49060 net.cpp:247] Network initialization done.
I0915 08:33:50.852030 49060 net.cpp:248] Memory required for data: 1064448016
I0915 08:33:50.852730 49060 solver.cpp:42] Solver scaffolding done.
I0915 08:33:50.853065 49060 solver.cpp:250] Solving VGG_ILSVRC_16_layer
I0915 08:33:50.853080 49060 solver.cpp:251] Learning Rate Policy: step
F0915 08:33:51.324506 49060 math_functions.cu:123] Check failed: status == CUBLAS_STATUS_SUCCESS (11 vs. 0) CUBLAS_STATUS_MAPPING_ERROR
*** Check failure stack trace: ***
@ 0x7fa27a0d3daa (unknown)
@ 0x7fa27a0d3ce4 (unknown)
@ 0x7fa27a0d36e6 (unknown)
@ 0x7fa27a0d6687 (unknown)
@ 0x7fa27a56946e caffe::caffe_gpu_asum<>()
@ 0x7fa27a54b264 caffe::SoftmaxWithLossLayer<>::Forward_gpu()
@ 0x7fa27a440b29 caffe::Net<>::ForwardFromTo()
@ 0x7fa27a440f57 caffe::Net<>::ForwardPrefilled()
@ 0x7fa27a436745 caffe::Solver<>::Step()
@ 0x7fa27a43707f caffe::Solver<>::Solve()
@ 0x406676 train()
@ 0x404bb1 main
@ 0x7fa2795e5f45 (unknown)
@ 0x40515d (unknown)
@ (nil) (unknown)
My dataset is .jpg (training images), .png (gray-scale label images) and a .txt file, as in the tutorial. What could the problem be? Thanks for helping.
The ground-truth images should be 1-channel images with values in 0-255 and no alpha layer, so the NN will recognise the difference between the classes.
from PIL import Image

img = Image.open(filename).convert('L')  # 'L' = single channel; not 'LA' (A = alpha)
Thanks to isn4, here is the resolution:
It turns out you have to change the range of the pixel values as well as the actual number of distinct pixel values. SegNet gets confused if you have 256 possible pixel values (0-255) and don't have class weightings for each of them. So I changed all of my PNG label images from 255 and 0 as the possible pixel values to 1 and 0.
Here's my Python script for doing so:
import cv2
import numpy as np

# Read the old 0/255 label image as grayscale
img = cv2.imread('/usr/local/project/old_png_labels/label.png', 0)
a_img = np.array(img, np.double)
# Rescale the label values to the 0/1 range used as class indices
normalized = cv2.normalize(img, a_img, 1.0, 0.0, cv2.NORM_MINMAX)
cv2.imwrite('/usr/local/project/png_labels/label.png', normalized)
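As a quick sanity check (hypothetical, using the same output path as above), you can confirm that a converted label image now contains only the expected class indices:
import cv2
import numpy as np

# A two-class label image should contain only the values 0 and 1 after conversion
label = cv2.imread('/usr/local/project/png_labels/label.png', cv2.IMREAD_GRAYSCALE)
print(np.unique(label))   # expected output: [0 1]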