xdotool can't detect headless Firefox on Red Hat (no GUI)

xdotool can't control a headless instance of Firefox started with the -headless and -marionette arguments.
I have tried:
xdotool search firefox
Result:
Defaulting to search window name, class, and classname
# ps -ef |grep firefox
root 26978 26956 80 13:07 ? 00:00:04 /usr/lib64/firefox/firefox -marionette -devtools -headless -foreground -no-remote -profile /tmp/rust_mozprofile.4opdLmN4k4Kh
root 27087 26978 9 13:08 ? 00:00:00 /usr/lib64/firefox/firefox -contentproc -childID 1 -isForBrowser -intPrefs 42:0|44:0|74:0|236:1| -boolPrefs 5:0|61:1|81:1|235:0|277:0|280:0|303:0| -stringPrefs 289:36;891fda99-943a-4201-9d2d-10d20e19cbe4| -schedulerPrefs 0001,2 -greomni /usr/lib64/firefox/omni.ja -appomni /usr/lib64/firefox/browser/omni.ja -appdir /usr/lib64/firefox/browser 26978 tab
root 27225 26978 20 13:08 ? 00:00:00 /usr/lib64/firefox/firefox -contentproc -childID 2 -isForBrowser -intPrefs 42:0|44:0|74:0|236:1| -boolPrefs 5:0|61:1|81:1|235:0|277:0|280:0|303:0| -stringPrefs 289:36;891fda99-943a-4201-9d2d-10d20e19cbe4| -schedulerPrefs 0001,2 -greomni /usr/lib64/firefox/omni.ja -appomni /usr/lib64/firefox/browser/omni.ja -appdir /usr/lib64/firefox/browser 26978 tab
root 28163 6735 0 12:13 pts/0 00:00:00 /bin/sh /usr/bin/xvfb-run -s :99 -auth /tmp/xvfb.auth -ac -screen 0 1920x1080x24 firefox -headless -marionette
root 28205 28163 0 12:13 pts/0 00:00:08 /usr/lib64/firefox/firefox -headless -marionette
root 29630 6735 0 13:08 pts/0 00:00:00 grep --color=auto firefox
root 29773 28205 0 12:14 pts/0 00:00:00 /usr/lib64/firefox/firefox -contentproc -childID 1 -isForBrowser -boolPrefs 303:0| -stringPrefs 289:36;65ada97a-bd97-4caa-8ca0-591ccabedcf1| -schedulerPrefs 0001,2 -greomni /usr/lib64/firefox/omni.ja -appomni /usr/lib64/firefox/browser/omni.ja -appdir /usr/lib64/firefox/browser 28205 tab
# xdotool search firefox
Defaulting to search window name, class, and classname
As you can see, many Firefox processes are running, but xdotool can't find any window.
I also tried:
# xdotool getactivewindow
Your windowmanager claims not to support _NET_ACTIVE_WINDOW, so the attempt to query the active window aborted.
xdo_get_active_window reported an error
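Two things are worth noting here, as a hedged diagnostic sketch rather than a definitive fix. First, xdotool only talks to the X server named by $DISPLAY, so when Firefox runs under xvfb-run it must be pointed at that virtual display (the :99 display and the /tmp/xvfb.auth auth file are taken from the xvfb-run line in the ps output above). Second, Firefox started with -headless does not map X windows at all, so xdotool may find nothing to control even on the correct display; in that case, driving the browser through Marionette/geckodriver instead of X11 automation is the usual route.

```shell
# Sketch: run xdotool against the Xvfb display that firefox was started under.
# DISPLAY/XAUTHORITY values are taken from the xvfb-run line in the ps output;
# adjust them if your Xvfb uses a different display number or auth file.
DISPLAY=:99 XAUTHORITY=/tmp/xvfb.auth xdotool search --name firefox
```

If this still prints nothing, the instance genuinely has no X window (which is expected for -headless), and xdotool cannot help.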

Related

Why doesn't logrotate's postrotate copy the latest file? (one-day delay)

I have in /etc/logrotate.d/mikrotik:
/var/log/mikrotik.log {
    rotate 2
    daily
    compress
    dateext
    dateyesterday
    dateformat .%Y-%m-%d
    postrotate
        #/usr/sbin/invoke-rc.d syslog-ng reload >/dev/null
        rsync -avH /var/log/mikrotik*.gz /backup/logs/mikrotik/
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}
A mikrotik.log.YYYY-MM-DD.gz file is created daily.
The problem is that the rsync in postrotate doesn't copy the latest file. For example, on September 25, 2021, these files are in /var/log:
-rw-r----- 1 root adm 37837 Sep 24 23:49 mikrotik.log.2021-09-24.gz
-rw-r----- 1 root adm 36980 Sep 25 23:55 mikrotik.log.2021-09-25.gz
while /backup/logs/mikrotik/ contains only:
-rw-r----- 1 root adm 35495 Sep 23 00:00 mikrotik.log.2021-09-22.gz
-rw-r----- 1 root adm 36842 Sep 23 23:58 mikrotik.log.2021-09-23.gz
-rw-r----- 1 root adm 37837 Sep 24 23:49 mikrotik.log.2021-09-24.gz
The file mikrotik.log.2021-09-25.gz from Sep 25 23:55 is missing; it will not be copied until the next rotation.
How can I make the file compressed today get copied by postrotate?
Problem solved.
It came down to the order in which logrotate performs its operations: logrotate runs the postrotate script before compressing the rotated log to .gz, so the newest .gz does not exist yet when rsync runs.
The solution was to change 'postrotate' to 'lastaction', which runs after compression.
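For reference, here is a sketch of the corrected config based on the fix described (same paths as above; the rsync now runs in 'lastaction', after the rotated log has been compressed, so the newest .gz exists when it is copied):

```
/var/log/mikrotik.log {
    rotate 2
    daily
    compress
    dateext
    dateyesterday
    dateformat .%Y-%m-%d
    lastaction
        rsync -avH /var/log/mikrotik*.gz /backup/logs/mikrotik/
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}
```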

Elasticsearch connector doesn't work - java.lang.NoClassDefFoundError: com/google/common/collect/ImmutableSet

The Kafka Elasticsearch connector "confluentinc-kafka-connect-elasticsearch-5.5.0" doesn't work on-prem:
java.lang.NoClassDefFoundError: com/google/common/collect/ImmutableSet
    at io.searchbox.client.AbstractJestClient.<init>(AbstractJestClient.java:38)
    at io.searchbox.client.http.JestHttpClient.<init>(JestHttpClient.java:43)
    at io.searchbox.client.JestClientFactory.getObject(JestClientFactory.java:51)
    at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:149)
    at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:141)
    at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:122)
    at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:51)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:305)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
I'm also using the MSSQL and S3 connector plugins in the same path; they work, but the Elasticsearch plugin throws the NoClassDefFoundError. This is my folder structure on the worker:
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls
confluentinc-kafka-connect-elasticsearch-5.5.0 confluentinc-kafka-connect-s3-5.5.0 debezium-connector-sqlserver kafka-connect-shell-sink-5.1.0
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls -l
total 16
drwxrwxr-x 2 root root 4096 May 25 22:15 confluentinc-kafka-connect-elasticsearch-5.5.0
drwxrwxr-x 5 root root 4096 May 15 02:26 confluentinc-kafka-connect-s3-5.5.0
drwxrwxr-x 2 root root 4096 May 15 02:26 debezium-connector-sqlserver
drwxrwxr-x 4 root root 4096 May 15 02:26 kafka-connect-shell-sink-5.1.0
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls debezium-connector-sqlserver
debezium-api-1.1.1.Final.jar debezium-connector-sqlserver-1.1.1.Final.jar debezium-core-1.1.1.Final.jar mssql-jdbc-7.2.2.jre8.jar
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls confluentinc-kafka-connect-s3-5.5.0
assets etc lib manifest.json
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls confluentinc-kafka-connect-elasticsearch-5.5.0 -l
total 8356
-rw-r--r-- 1 root root 17558 May 25 11:53 common-utils-5.5.0.jar
-rw-r--r-- 1 root root 263965 May 25 11:53 commons-codec-1.9.jar
-rw-r--r-- 1 root root 61829 May 25 11:53 commons-logging-1.2.jar
-rw-r--r-- 1 root root 79845 May 25 19:34 compress-lzf-1.0.3.jar
-rw-r--r-- 1 root root 241622 May 25 11:53 gson-2.8.5.jar
-rw-r--r-- 1 root root 2329410 May 25 19:34 guava-18.0.jar
-rw-r--r-- 1 root root 1140290 May 25 19:34 hppc-0.7.1.jar
-rw-r--r-- 1 root root 179335 May 25 11:53 httpasyncclient-4.1.3.jar
-rw-r--r-- 1 root root 747794 May 25 11:53 httpclient-4.5.3.jar
-rw-r--r-- 1 root root 323824 May 25 11:53 httpcore-4.4.6.jar
-rw-r--r-- 1 root root 356644 May 25 11:53 httpcore-nio-4.4.6.jar
-rw-r--r-- 1 root root 280996 May 25 19:34 jackson-core-2.8.2.jar
-rw-r--r-- 1 root root 22191 May 25 11:53 jest-6.3.1.jar
-rw-r--r-- 1 root root 276130 May 25 11:53 jest-common-6.3.1.jar
-rw-r--r-- 1 root root 621992 May 25 19:34 joda-time-2.8.2.jar
-rw-r--r-- 1 root root 62226 May 25 19:34 jsr166e-1.1.0.jar
-rw-r--r-- 1 root root 83179 May 25 11:53 kafka-connect-elasticsearch-5.5.0.jar
-rw-r--r-- 1 root root 1330394 May 25 19:34 netty-3.10.5.Final.jar
-rw-r--r-- 1 root root 41139 May 25 11:53 slf4j-api-1.7.26.jar
-rw-r--r-- 1 root root 49754 May 25 19:34 t-digest-3.0.jar
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$
I've read in several threads that jar files / dependencies are missing from the Elasticsearch connector; I added them, as you can see above, but no luck.
This is my connector config:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: "elastic-files-connector"
  labels:
    strimzi.io/cluster: mssql-minio-connect-cluster
spec:
  class: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
  config:
    connection.url: "https://escluster-es-http.dev-kik.io:9200"
    connection.username: "${file:/opt/kafka/external-configuration/elasticcreds/connector.properties:connection.username}"
    connection.password: "${file:/opt/kafka/external-configuration/elasticcreds/connector.properties:connection.password}"
    flush.timeout.ms: 10000
    max.buffered.events: 20000
    batch.size: 2000
    topics: filesql1.dbo.Files
    tasks.max: '1'
    type.name: "_doc"
    max.request.size: "536870912"
    key.converter: io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url: http://schema-registry-cp-schema-registry:8081
    value.converter: io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url: http://schema-registry-cp-schema-registry:8081
    internal.key.converter: org.apache.kafka.connect.json.JsonConverter
    internal.value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    schema.compatibility: NONE
    errors.tolerance: all
    errors.deadletterqueue.topic.name: "dlq_filesql1.dbo.Files"
    errors.deadletterqueue.context.headers.enable: "true"
    errors.log.enable: "true"
    behavior.on.null.values: "ignore"
    errors.retry.delay.max.ms: 60000
    errors.retry.timeout: 300000
    behavior.on.malformed.documents: warn
I changed the username/password to plaintext; no luck.
I tried both http and https for the Elasticsearch connection; no luck.
This is my Elasticsearch service information:
devadmin@vdi-mk2-ubn:~/kafka$ kubectl get svc -n elastic-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elastic-webhook-server ClusterIP 10.104.95.105 <none> 443/TCP 21h
escluster-es-default ClusterIP None <none> <none> 8h
escluster-es-http LoadBalancer 10.108.69.136 192.168.215.35 9200:31214/TCP 8h
escluster-es-transport ClusterIP None <none> 9300/TCP 8h
kibana-kb-http LoadBalancer 10.102.81.206 192.168.215.34 5601:31315/TCP 20h
devadmin@vdi-mk2-ubn:~/kafka$
I can connect to the Elasticsearch service from the Kafka Connect worker in both ways:
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ curl -u "elastic:5NM0Pp25sFzNu578873BWFnN" -k "https://10.108.69.136:9200"
{
  "name" : "escluster-es-default-0",
  "cluster_name" : "escluster",
  "cluster_uuid" : "TP5f4MGcSn6Dt9hZ144tEw",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ curl -u "elastic:5NM0Pp25sFzNu578873BWFnN" -k "https://escluster-es-http.dev-kik.io:9200"
{
  "name" : "escluster-es-default-0",
  "cluster_name" : "escluster",
  "cluster_uuid" : "TP5f4MGcSn6Dt9hZ144tEw",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$
No matter what I do, the exception never changes. I don't know what else I can try; my brain is burning.
Am I missing anything, or can you advise how you run this connector on-prem on Kubernetes?
Thanks & regards
I'm using kafka_2.12-2.5.0 and I had the same problem. I noticed that the guava jar is missing from $KAFKA_HOME/libs compared with Kafka 2.4.0. As a workaround, I copied the jar (guava-20.0.jar) manually from the previous Kafka distribution and everything worked.
If you use a connector, make sure you load its dependencies as well.
See https://github.com/confluentinc/kafka-connect-elasticsearch/issues/588#issuecomment-1407751798

Perl touch -t file error for a future date

I am trying to touch a file (used as a date reference) with a future date, something like:
Current date ($ date):
Fri Jan 6 03:59:55 EST 2017
touch -t 201702032359.59 /var/tmp/ME_FILE_END
When I check the timestamp of the file:
$ ls -lrt /var/tmp/ME_FILE_END
I get output with only the date, not the time of day (hh:mm):
-rw-r--r-- 1 abcproc abc 0 Feb 3 2017 /var/tmp/ME_FILE_END
But for a date less than or equal to the current date, it gives the expected result:
touch -t 201612010000.00 /var/tmp/ME_FILE_START
ls -lrt /var/tmp/ME_FILE_START
-rw-r--r-- 1 abcproc abc 0 Dec 1 00:00 /var/tmp/ME_FILE_START
Can someone please explain this discrepancy?
It's just the way ls displays the date: when the modification time is far from now (more than about six months in the past, or in the future), ls prints the year instead of the time of day.
If you want details about the last access / modification / change times, you should use stat:
stat /var/tmp/ME_FILE_END
You will see the expected output.
For example:
[10:29:41]dabi@gaia:~$ touch -t 201702032359.59 /var/tmp/ME_FILE_END
[10:29:43]dabi@gaia:~$ ls -ltr /var/tmp/ME_FILE_END
-rw-rw-r-- 1 dabi dabi 0 Feb 3 2017 /var/tmp/ME_FILE_END
[10:29:47]dabi@gaia:~$ stat /var/tmp/ME_FILE_END
  File: '/var/tmp/ME_FILE_END'
  Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: 803h/2051d    Inode: 5374373     Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/    dabi)   Gid: ( 1000/    dabi)
Access: 2017-02-03 23:59:59.000000000 +0100
Modify: 2017-02-03 23:59:59.000000000 +0100
Change: 2017-01-06 10:29:43.364630503 +0100
 Birth: -
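As a small sketch of the same behaviour (GNU coreutils assumed; /tmp/demo_future is a hypothetical file for illustration): ls can be told to always print full timestamps, side-stepping its roughly-six-month heuristic, and stat can print just the modification time.

```shell
# GNU ls normally prints "Mon DD  YYYY" (no time) when a timestamp is more
# than ~6 months in the past or in the future; --time-style overrides that.
touch -t 203001011234.56 /tmp/demo_future      # future mtime: 2030-01-01 12:34:56
ls -l --time-style=full-iso /tmp/demo_future   # full timestamp regardless of age
stat -c '%y' /tmp/demo_future                  # modification time only
```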

Touchscreen on Raspberry Pi emits click not touch

I followed this link to calibrate the touchscreen: http://www.circuitbasics.com/raspberry-pi-touchscreen-calibration-screen-rotation/
ls -la /dev/input/
total 0
drwxr-xr-x 4 root root 240 Jul 12 18:38 .
drwxr-xr-x 15 root root 3460 Jul 12 18:38 ..
drwxr-xr-x 2 root root 140 Jul 12 18:38 by-id
drwxr-xr-x 2 root root 140 Jul 12 18:38 by-path
crw-rw---- 1 root input 13, 64 Jul 12 18:38 event0
crw-rw---- 1 root input 13, 65 Jul 12 18:38 event1
crw-rw---- 1 root input 13, 66 Jul 12 18:38 event2
crw-rw---- 1 root input 13, 67 Jul 12 18:38 event3
crw-rw---- 1 root input 13, 68 Jul 12 18:38 event4
crw-rw---- 1 root input 13, 63 Jul 12 18:38 mice
crw-rw---- 1 root input 13, 32 Jul 12 18:38 mouse0
crw-rw---- 1 root input 13, 33 Jul 12 18:38 mouse1
root@raspberrypi:/sys/devices/virtual/input# cat input4/uevent
PRODUCT=0/0/0/0
NAME="FT5406 memory based driver"
PROP=2
EV=b
KEY=400 0 0 0 0 0 0 0 0 0 0
ABS=2608000 3
MODALIAS=input:b0000v0000p0000e0000-e0,1,3,k14A,ra0,1,2F,35,36,39,mlsfw
root@raspberrypi:~# cat /etc/ts.conf
# Uncomment if you wish to use the linux input layer event interface
module_raw input
# Uncomment if you're using a Sharp Zaurus SL-5500/SL-5000d
# module_raw collie
# Uncomment if you're using a Sharp Zaurus SL-C700/C750/C760/C860
# module_raw corgi
# Uncomment if you're using a device with a UCB1200/1300/1400 TS interface
# module_raw ucb1x00
# Uncomment if you're using an HP iPaq h3600 or similar
# module_raw h3600
# Uncomment if you're using a Hitachi Webpad
# module_raw mk712
# Uncomment if you're using an IBM Arctic II
# module_raw arctic2
module pthres pmin=1
module variance delta=30
module dejitter delta=100
module linear
I only get a response when configuring X with xinput_calibrator. When I enter this command:
sudo TSLIB_FBDEVICE=/dev/fb0 TSLIB_TSDEVICE=/dev/input/event1 ts_calibrate
I get this output:
xres = 800, yres = 480
selected device is not a touchscreen I understand
Can someone please help me?
Thanks in advance.
I don't have a solution for this, but I believe it is related to the problem of touches being treated as mouseovers. This bug has been reported several times, but never actually fixed:
https://gitlab.gnome.org/GNOME/gtk/-/issues/945
https://bugzilla.gnome.org/show_bug.cgi?id=789041
https://bugs.launchpad.net/ubuntu-mate/+bug/1792787
A bugzilla.gnome.org user named niteshgupta16 created a script that solves this problem, but it was uploaded to a pasting/sharing service called Hastebin at https://www.hastebin.com/uwuviteyeb.py.
Hastebin deletes files that have not been accessed within 30 days, and since Hastebin is a JavaScript-obfuscated service, the file is not available on archive.org.
I am unable to find an email address for niteshgupta16 to ask whether he still has uwuviteyeb.py.

"diff --starting-file=FILE"how to use this option?

I've been reading the source code of diff recently, and I'm confused by the -S FILE / --starting-file=FILE option. I have done some tests to verify my understanding, but I can't get what I want. Here are my tests:
ls -l /tmp/Nibnat/diffutils-2.7/
-rw-rw-r--. 1 Nibnat Nibnat 9 May 9 13:46 diff.c
-rw-rw-r--. 1 Nibnat Nibnat 9 May 9 13:57 file_a
-rw-rw-r--. 1 Nibnat Nibnat 9 May 9 13:57 file_b
-rw-rw-r--. 1 Nibnat Nibnat 9 May 9 13:57 file_c
-rw-rw-r--. 1 Nibnat Nibnat 20 May 9 11:46 heh.c
ls -l /tmp/Nibnat/test_dir/
-rw-rw-r--. 1 Nibnat Nibnat 5 May 9 11:45 diff.c
-rw-rw-r--. 1 Nibnat Nibnat 5 May 9 13:56 testfile_a
-rw-rw-r--. 1 Nibnat Nibnat 5 May 9 13:56 testfile_b
-rw-rw-r--. 1 Nibnat Nibnat 5 May 9 13:56 testfile_c
-rw-rw-r--. 1 Nibnat Nibnat 5 May 9 13:56 testfile_d
/tmp/Nibnat/diffutils-2.7/diff.c is different from /tmp/Nibnat/test_dir/diff.c. When I compare these two directories, I want the comparison to start from /tmp/Nibnat/diffutils-2.7/file_a (skipping diff.c), so I use the command:
diff -S /tmp/Nibnat/diffutils-2.7/file_a /tmp/Nibnat/diffutils-2.7/ /tmp/Nibnat/test_dir/
I get this:
diff -S /tmp/Nibnat/diffutils-2.7/file_a /tmp/Nibnat/diffutils-2.7/diff.c /tmp/Nibnat/test_dir/diff.c
1c1
< hahahehe
---
> haha
Only in /tmp/Nibnat/diffutils-2.7/: file_a
Only in /tmp/Nibnat/diffutils-2.7/: file_b
Only in /tmp/Nibnat/diffutils-2.7/: file_c
Only in /tmp/Nibnat/diffutils-2.7/: heh.c
Only in /tmp/Nibnat/diffutils-2.7/: test_a.c
Only in /tmp/Nibnat/diffutils-2.7/: test_b.c
Only in /tmp/Nibnat/diffutils-2.7/: test_c.c
Only in /tmp/Nibnat/test_dir/: testfile_a
Only in /tmp/Nibnat/test_dir/: testfile_b
Only in /tmp/Nibnat/test_dir/: testfile_c
Only in /tmp/Nibnat/test_dir/: testfile_d
It still didn't skip diff.c.
Any help is appreciated.
Just use the bare filename with -S (e.g. -S file_a), not the complete pathname; diff matches the -S argument against the file names inside the directories, so a full path never matches anything:
$ ls -l /tmp/TST/a
total 12
-rw-r--r-- 1 orpe orpe 3 Apr 25 16:16 diff.c
-rw-r--r-- 1 orpe orpe 2 Apr 25 16:17 file_a
-rw-r--r-- 1 orpe orpe 2 Apr 25 16:17 file_b
$ ls -l /tmp/TST/b
total 12
-rw-r--r-- 1 orpe orpe 3 Apr 25 16:16 diff.c
-rw-r--r-- 1 orpe orpe 2 Apr 25 16:17 testfile_a
-rw-r--r-- 1 orpe orpe 2 Apr 25 16:17 testfile_b
$ diff /tmp/TST/a /tmp/TST/b
diff /tmp/TST/a/diff.c /tmp/TST/b/diff.c
1c1
< aa
---
> bb
Only in /tmp/TST/a: file_a
Only in /tmp/TST/a: file_b
Only in /tmp/TST/b: testfile_a
Only in /tmp/TST/b: testfile_b
$ diff -S file_a /tmp/TST/a /tmp/TST/b
Only in /tmp/TST/a: file_a
Only in /tmp/TST/a: file_b
Only in /tmp/TST/b: testfile_a
Only in /tmp/TST/b: testfile_b
$ diff -S file_b /tmp/TST/a /tmp/TST/b
Only in /tmp/TST/a: file_b
Only in /tmp/TST/b: testfile_a
Only in /tmp/TST/b: testfile_b
$
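To make the semantics concrete, here is a small self-contained sketch (with hypothetical files under /tmp/ds): -S takes a name that is matched against entries inside the directories, and the comparison resumes at the first entry sorting at or after that name.

```shell
# Build two directories whose common files a.txt and b.txt both differ.
mkdir -p /tmp/ds/a /tmp/ds/b
printf '1\n' > /tmp/ds/a/a.txt; printf '2\n' > /tmp/ds/b/a.txt
printf '1\n' > /tmp/ds/a/b.txt; printf '2\n' > /tmp/ds/b/b.txt
# Resume the directory comparison at b.txt: a.txt is skipped entirely.
diff -S b.txt /tmp/ds/a /tmp/ds/b
```

Only the b.txt difference is reported, because a.txt sorts before the starting file; this is the "resume an aborted comparison" use case the option was designed for.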