I am trying to set up a scan station using my Canon CanoScan LiDE 110 connected to a Pi Zero W running DietPi 8.1.2.
Initially I had saned running and it worked fine.
scanimage -L listed my scanner and I was able to scan from a networked system using xsane.
I decided to try to use scanbd to make scanning easier with the buttons on the scanner.
Following the docs here and here, I installed the scanbd package and copied the original config to /etc/scanbd/sane.d:
root@cscan:/etc/scanbd/sane.d# egrep -v '^#|^$' {dll,net}.conf
dll.conf:genesys
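For reference, the copy itself was roughly this (a sketch; the exact paths come from the Debian scanbd layout and may differ on other packages):
# seed scanbd's private SANE config dir from the system one
sudo mkdir -p /etc/scanbd/sane.d
sudo cp /etc/sane.d/*.conf /etc/scanbd/sane.d/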
I created a symlink for good measure:
root@cscan:/var/log# ls -la /usr/local/etc/scanbd
lrwxrwxrwx 1 root root 12 Feb 7 08:53 /usr/local/etc/scanbd -> /etc/scanbd/
I modified the existing saned config to point to the net backend only, and to localhost, so that it wouldn't try to connect directly to the scanner:
root@cscan:/etc/sane.d# egrep -v '^#|^$' {net,dll}.conf
net.conf: connect_timeout = 3
net.conf: localhost
dll.conf:net
Then I enabled and started the systemd units:
root@cscan:/var/log# systemctl status {scanbd,scanbm.socket}
● scanbd.service - Scanner button polling Service
Loaded: loaded (/lib/systemd/system/scanbd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-02-07 08:53:41 EST; 36min ago
Main PID: 2438 (scanbd)
Tasks: 5 (limit: 991)
CGroup: /system.slice/scanbd.service
└─2438 /usr/sbin/scanbd -f -c /etc/scanbd/scanbd.conf
Feb 07 08:53:41 cscan systemd[1]: Started Scanner button polling Service.
Feb 07 08:53:41 cscan scanbd[2438]: /usr/sbin/scanbd: dbus match type='signal',interface='org.freedesktop.Hal.Manager'
Feb 07 08:57:59 cscan scanbd[2438]: /usr/sbin/scanbd: trigger action for scan for device genesys:libusb:001:003 with script test.script
Feb 07 08:58:00 cscan scanbd[2475]: /usr/share/scanbd/scripts/test.script: Begin of scan for device genesys:libusb:001:003
Feb 07 08:58:00 cscan scanbd[2478]: /usr/share/scanbd/scripts/test.script: End of scan for device genesys:libusb:001:003
● scanbm.socket - scanbd/saned incoming socket
Loaded: loaded (/lib/systemd/system/scanbm.socket; enabled; vendor preset: enabled)
Active: active (listening) since Mon 2022-02-07 08:53:49 EST; 36min ago
Listen: [::]:6566 (Stream)
Accepted: 5; Connected: 0;
Tasks: 0 (limit: 991)
CGroup: /system.slice/scanbm.socket
Feb 07 08:53:49 cscan systemd[1]: Listening on scanbd/saned incoming socket.
It appears that the scan button is launching test.script as expected (see above), but scanbd doesn't seem to connect to my scanner when run in the foreground:
root@cscan:/etc/scanbd# scanbd -d7 -f
scanbd: foreground
scanbd: reading config file /etc/scanbd/scanbd.conf
scanbd: debug on: level: 7
scanbd: dropping privs to uid saned
scanbd: dropping privs to gid scanner
scanbd: group scanner has member:
scanbd: saned
scanbd: drop privileges to gid: 115
scanbd: Running as effective gid 115
scanbd: drop privileges to uid: 108
scanbd: Running as effective uid 108
scanbd: dbus_init
scanbd: dbus match type='signal',interface='org.freedesktop.Hal.Manager'
scanbd: SANE_CONFIG_DIR not set
scanbd: sane version 1.0
scanbd: Scanning for local-only devices
scanbd: start_sane_threads
scanbd: no devices, not starting any polling thread
scanbd: start dbus thread
scanbd: Not Primary Owner (2)
scanbd: udev init
scanbd: get udev monitor
scanbd: udev fd is non-blocking, now setting to blocking mode
scanbd: start udev thread
scanbd: timeout: 500 ms
scanbd: Iteration on dbus call
scanbd: udev thread started
scanbd: Iteration on dbus call
The SANE_CONFIG_DIR variable appears to be defined in scanbd.conf:
root@cscan:/etc/scanbd# grep -r SANE_CONFIG_DIR
scanbd.conf: saned_env = { "SANE_CONFIG_DIR=/etc/scanbd" } # list of environment vars for saned
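If I understand the config right, saned_env only sets the variable for the saned processes that scanbd spawns, not for scanbd itself, so a manual foreground run would need it exported by hand. A sketch (assuming /etc/scanbd is the dir holding the genesys-enabled dll.conf):
sudo SANE_CONFIG_DIR=/etc/scanbd scanbd -d7 -f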
And scanimage -L does work locally, but not remotely:
root@cscan:~# scanimage -L
device `net:localhost:genesys:libusb:001:002' is a Canon LiDE 110 flatbed scanner
user@desktop:/etc/sane.d $ scanimage -L
No scanners were identified. If you were expecting something different,
check that the scanner is plugged in, turned on and detected by the
sane-find-scanner tool (if appropriate). Please read the documentation
which came with this software (README, FAQ, manpages).
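For reference, my understanding is that the remote client needs the net backend configured as well; on the desktop that would be something like this (the hostname is an example, use the Pi's name or IP):
# /etc/sane.d/net.conf on the client
connect_timeout = 3
cscan
# /etc/sane.d/dll.conf on the client must include:
net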
At this point I have run out of ideas. Is there anyone out there who can help me out with this? My ultimate goal is to be able to network scan via saned using xsane and saneTwain, and use the front panel buttons to do script magic.
I installed Kali Linux via VMware and did a full system upgrade:
apt-get update
apt-get upgrade
apt-get full-upgrade
As part of the upgrade, PostgreSQL was upgraded from v11 to v12. I followed the instructions to finish this part of the upgrade:
pg_dropcluster 12 main --stop
pg_upgradecluster 11 main
pg_dropcluster 11 main
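A quick sanity check that only the v12 cluster remains (sketch; output trimmed):
pg_lsclusters
# expect a single line like: 12 main 5432 online postgres /var/lib/postgresql/12/main ...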
I start PostgreSQL, initialize Metasploit, and start Armitage:
/etc/init.d/postgresql start
msfdb init
armitage
The only console output appears unrelated:
Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true
I do get the popup box with the connection information. I found that I get the "Unexpected end of file from server" error if I use 'localhost' as the host, so, per their instructions, I changed it to the external IP (in this case 192.168.9.134). I checked metasploit-framework/config/database.yml for the port and login credentials.
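For reference, that file looks something like this (the values below are placeholders, not my real credentials):
production:
  adapter: postgresql
  database: msf
  username: msf
  password: placeholder-password
  host: localhost
  port: 5432
  pool: 5
  timeout: 5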
After clicking 'Connect' with this information I get a connection window stating:
Connecting to 192.168.9.134:5432
Connection refused (Connection refused)
There's also a progress bar that over time completely fills up (unless I click 'Cancel'), after which nothing happens. As I run the command from the terminal I can see that the process is still running (I don't get my prompt back), but the window disappears and Armitage doesn't actually start. The log file, as reported by pg_lsclusters (/var/log/postgresql/postgresql-12-main.log), is actually empty.
The link I mentioned before suggests that the problem could either be not enough RAM (I set the VM to have 4 GB, and free -m shows):
total used free shared buff/cache available
Mem: 3964 803 2677 29 483 2787
Swap: 4093 0 4093
Or that the Metasploit RPC daemon never started (that window does come up the first time, but not subsequent times). I verified that it's running via msfdb status:
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; disabled; vendor preset: disabled)
Active: active (exited) since Fri 2020-02-07 16:06:52 EST; 19min ago
Process: 1753 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 1753 (code=exited, status=0/SUCCESS)
Feb 07 16:06:52 kali systemd[1]: Starting PostgreSQL RDBMS...
Feb 07 16:06:52 kali systemd[1]: Started PostgreSQL RDBMS.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
postgres 1735 postgres 3u IPv6 32516 0t0 TCP localhost:5432 (LISTEN)
postgres 1735 postgres 4u IPv4 32517 0t0 TCP localhost:5432 (LISTEN)
UID PID PPID C STIME TTY STAT TIME CMD
postgres 1735 1 0 16:06 ? Ss 0:00 /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main -c config_file=/etc/postgresql/12/main/postgresql.conf
[+] Detected configuration file (/usr/share/metasploit-framework/config/database.yml)
Also, running regular Metasploit (msfconsole) appears to work fine and loads without error (not sure if there's any output that would be helpful here). I don't use PostgreSQL directly, so I haven't messed with any configuration, nor do I have any other applications (that I'm aware of) that use it, so it should be a pretty clean setup (not to mention this is a fresh install of Kali Linux). I'm out of ideas for what to check next. An online search didn't seem to match this problem well. Any thoughts?
Armitage has been deprecated for some time now, as it has not been updated since 2015, and is (to some extent) incompatible with current versions of Metasploit.
Although this may not fix your problem, I suggest not using software this far out of date.
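That said, the "Connection refused" on 192.168.9.134:5432 is consistent with your lsof output, which shows postgres listening only on localhost. If you really need it reachable on the external address, these are the usual knobs (a sketch only; exposing the database has security implications):
# /etc/postgresql/12/main/postgresql.conf
listen_addresses = '*'    # default is 'localhost'
# /etc/postgresql/12/main/pg_hba.conf -- example rule for the msf user from the local subnet
host    msf    msf    192.168.9.0/24    md5
# then restart:
sudo systemctl restart postgresql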
I am trying to turn on my TV using a Raspberry Pi.
I have followed the instructions below and added my remote config file; however, I am having no luck. Any suggestions?
When running sudo /etc/init.d/lircd status, I get
lircd.service - Flexible IR remote input/output application support
Loaded: loaded (/lib/systemd/system/lircd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2018-11-11 13:27:07 UTC; 5min ago
Docs: man:lircd(8)
http://lirc.org/html/configure.html
Main PID: 334 (lircd)
CGroup: /system.slice/lircd.service
└─334 /usr/sbin/lircd --nodaemon
Nov 11 13:32:23 raspberrypi lircd[334]: lircd-0.9.4c[334]: Info: removed client
Nov 11 13:32:23 raspberrypi lircd-0.9.4c[334]: Info: removed client
Nov 11 13:32:42 raspberrypi lircd[334]: lircd-0.9.4c[334]: Notice: accepted new client on /var/run/lirc/lircd
Nov 11 13:32:42 raspberrypi lircd-0.9.4c[334]: Notice: accepted new client on /var/run/lirc/lircd
Nov 11 13:32:42 raspberrypi lircd[334]: lircd-0.9.4c[334]: Info: removed client
Nov 11 13:32:42 raspberrypi lircd-0.9.4c[334]: Info: removed client
Nov 11 13:32:54 raspberrypi lircd[334]: lircd-0.9.4c[334]: Notice: accepted new client on /var/run/lirc/lircd
Nov 11 13:32:54 raspberrypi lircd-0.9.4c[334]: Notice: accepted new client on /var/run/lirc/lircd
Nov 11 13:32:54 raspberrypi lircd[334]: lircd-0.9.4c[334]: Info: removed client
Nov 11 13:32:54 raspberrypi lircd-0.9.4c[334]: Info: removed client
Here are the steps I took to set it up.
# Add the following lines to /etc/modules file
lirc_dev
lirc_rpi gpio_in_pin=18 gpio_out_pin=17
# Add the following lines to /etc/lirc/hardware.conf file
LIRCD_ARGS="--uinput --listen"
LOAD_MODULES=true
DRIVER="default"
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"
# Update the following line in /boot/config.txt
dtoverlay=lirc-rpi,gpio_in_pin=18,gpio_out_pin=17
# Update the following lines in /etc/lirc/lirc_options.conf
driver = default
device = /dev/lirc0
$ sudo /etc/init.d/lircd stop
$ sudo /etc/init.d/lircd start
# Check status to make lirc is running
$ sudo /etc/init.d/lircd status
# Reboot before testing
$ reboot
I just ran into the same problem. There are two main parts to it:
Part 1: new LIRC config
With the new version of lirc (0.9.0+), much less configuration is needed:
The driver is already included in the kernel, no need to edit anything in modules
The new config syntax is much different, there's a shell script provided to change an old config to the new one. Run: sudo /usr/share/lirc/lirc-old2new.sh
To summarise, you only need to change /etc/lirc/lirc_options.conf. In particular, you need to edit the lines to driver = default AND device = /dev/lirc0.
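For example, the relevant section of /etc/lirc/lirc_options.conf ends up looking like this:
[lircd]
driver = default
device = /dev/lirc0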
This should solve part 1.
Part 2: new IR drivers
As you can see in /boot/overlays/README, the LIRC driver is deprecated. There are new overlays provided for IR input and output. The driver for IR output is the new gpio-ir-tx. You need to use that instead of lirc-rpi in your /boot/config.txt.
In summary, change dtoverlay=lirc-rpi,gpio_out_pin=17,gpio_in_pin=13 to
dtoverlay=gpio-ir-tx,gpio_pin=17
NOTE the missing _out in the config. This driver only supports output, so there's no need for an input pin option. To handle inputs, use the gpio-ir overlay.
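Putting the two parts together, /boot/config.txt would carry something like this (pin numbers follow the question's original setup; adjust to your wiring):
# IR transmit on GPIO 17, IR receive on GPIO 18
dtoverlay=gpio-ir-tx,gpio_pin=17
dtoverlay=gpio-ir,gpio_pin=18
After a reboot you can test transmitting with irsend; the remote and key names depend on your lircd.conf:
irsend SEND_ONCE <your-remote-name> KEY_POWER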
I installed PostgreSQL following a DigitalOcean guide, and at the end of the installation it printed the command below in the terminal:
/usr/lib/postgresql/10/bin/pg_ctl -D /var/lib/postgresql/10/main -l logfile start
I tried to run it with sudo as the root user, and also after switching to the postgres user, but it gives me the error below:
waiting for server to start..../bin/sh: 1: cannot create logfile: Permission denied
stopped waiting
pg_ctl: could not start server
But when I check the status, it says:
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2018-05-31 13:11:18 UTC; 56s ago
Main PID: 3698 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 2362)
CGroup: /system.slice/postgresql.service
May 31 13:11:18 staging systemd[1]: Starting PostgreSQL RDBMS...
May 31 13:11:18 staging systemd[1]: Started PostgreSQL RDBMS.
The status is not "running" but "exited". What does the above command do, and how can I run it? I haven't faced this in previous versions.
The idea is that you supply your actual log file instead of logfile, but I recommend that you configure logging properly in postgresql.conf and use pg_ctl without the -l option.
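For example (the log path here is just an illustration; any file the postgres user can write to works):
sudo -u postgres /usr/lib/postgresql/10/bin/pg_ctl -D /var/lib/postgresql/10/main -l /var/lib/postgresql/10/main/startup.log start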
Set logging_collector to on.
Set log_filename to postgresql-%a.log.
Set log_rotation_size to 0.
Set log_truncate_on_rotation to on.
Then you'll get the log files in the log subdirectory of your PostgreSQL data directory, and they will be rotated on a weekly basis.
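In postgresql.conf, those settings look like this:
logging_collector = on
log_filename = 'postgresql-%a.log'
log_rotation_size = 0
log_truncate_on_rotation = on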
DigitalOcean disabled my droplet's internet access. After fixing the error (a rollback to an older backup) they restored the internet access. But afterwards I constantly get an error when deploying: I can't seem to get my Postgres database up and running.
I'm getting an error each time I try to deploy my application.
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
So I used SSH to log in to my server and check whether Postgres was actually running, with:
pg_lsclusters
Results into:
Ver Cluster Port Status Owner Data directory Log file
9.5 main 5432 down postgres /var/lib/postgresql/9.5/main /var/log/postgresql/postgresql-9.5-main.log
So my Postgres server seems to be down. I tried bringing it up again with:
pg_ctlcluster 9.5 main start
After doing so I got the error:
Insecure directory in $ENV{PATH} while running with -T switch at /usr/bin/pg_ctlcluster line 403.
And /usr/bin/pg_ctlcluster on line 403 says:
system 'systemctl', 'is-active', '-q', "postgresql\@$version-$cluster";
But I'm not too sure what the problem could be here and how I could fix this.
Update
I also tried updating the permissions on /bin to 755 as mentioned here. Sadly that did not fix my problem.
Update 2
I changed /usr/bin to 755. Now when I try pg_ctlcluster 9.5 main start, I get this:
Job for postgresql@9.5-main.service failed because the control process exited with error code. See "systemctl status postgresql@9.5-main.service" and "journalctl -xe" for details.
And inside the systemctl status postgresql@9.5-main.service:
● postgresql@9.5-main.service - PostgreSQL Cluster 9.5-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-01-28 17:32:38 EST; 45s ago
Process: 22473 ExecStart=postgresql@%i --skip-systemctl-redirect %i start (code=exited, status=1/FAILURE)
Jan 28 17:32:08 *url* systemd[1]: Starting PostgreSQL Cluster 9.5-main...
Jan 28 17:32:38 *url* postgresql@9.5-main[22473]: The PostgreSQL server failed to start.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Control process exited, code=exited status=1
Jan 28 17:32:38 *url* systemd[1]: Failed to start PostgreSQL Cluster 9.5-main.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Unit entered failed state.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Failed with result 'exit-code'.
Thanks!
You'd better not mix systemctl and pg_ctlcluster. Let systemctl make the calls to pg_ctlcluster with the right user and permissions. You should start your PostgreSQL instance with:
sudo systemctl start postgresql@9.5-main.service
Also, check the errors in the startup log. You can post them too, to help figure out what's going on.
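For example (unit name taken from your status output):
sudo journalctl -u postgresql@9.5-main.service --since "1 hour ago"
sudo tail -n 50 /var/log/postgresql/postgresql-9.5-main.log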
Your systemctl status also shows that the service is disabled, so when the server reboots you will have to start the service manually. To enable it, run:
sudo systemctl enable postgresql@9.5-main.service
I hope it helps.
In my case it was because the /etc/hosts file had somehow been changed. I removed the extra spaces inside /etc/hosts (check it with cat /etc/hosts).
Add these lines into the file
127.0.0.1 localhost
127.0.1.1 your-host-name
::1 ip6-localhost ip6-loopback
I also gave the /etc/hosts file permission 644. It keeps working for me even after a reboot of the system.
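For completeness, the permission change and a quick check look like this:
sudo chmod 644 /etc/hosts
ls -l /etc/hosts    # expect: -rw-r--r-- 1 root root ...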
I have a setup where I am using 3 Mesos masters and 3 Mesos slaves. After making all the required configurations, I can see the 3 Mesos masters are part of a cluster maintained by the ZooKeepers.
Now I have set up 3 Mesos slaves, and when I start the mesos-slave service, I expect the slaves to become visible on the Mesos masters' web UI page. But I cannot see any of them in the Slaves tab.
SELinux, the firewall, and iptables are all disabled, and I am able to SSH between the nodes.
[cloud-user@slave1 ~]$ sudo systemctl status mesos-slave -l
mesos-slave.service - Mesos Slave
Loaded: loaded (/usr/lib/systemd/system/mesos-slave.service; enabled)
Active: active (running) since Sat 2016-01-16 16:11:55 UTC; 3s ago
Main PID: 2483 (mesos-slave)
CGroup: /system.slice/mesos-slave.service
├─2483 /usr/sbin/mesos-slave --master=zk://10.0.0.2:2181,10.0.0.6:2181,10.0.0.7:2181/mesos --log_dir=/var/log/mesos --containerizers=docker,mesos --executor_registration_timeout=5mins
├─2493 logger -p user.info -t mesos-slave[2483]
└─2494 logger -p user.err -t mesos-slave[2483]
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: I0116 16:11:55.628670 2497 detector.cpp:482] A new leading master (UPID=master@127.0.0.1:5050) is detected
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: I0116 16:11:55.628732 2497 slave.cpp:729] New master detected at master@127.0.0.1:5050
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: I0116 16:11:55.628825 2497 slave.cpp:754] No credentials provided. Attempting to register without authentication
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: I0116 16:11:55.628844 2497 slave.cpp:765] Detecting new master
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: I0116 16:11:55.628872 2497 status_update_manager.cpp:176] Pausing sending status updates
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: E0116 16:11:55.628922 2503 process.cpp:1911] Failed to shutdown socket with fd 11: Transport endpoint is not connected
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: I0116 16:11:55.629093 2502 slave.cpp:3215] master@127.0.0.1:5050 exited
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: W0116 16:11:55.629107 2502 slave.cpp:3218] Master disconnected! Waiting for a new master to be elected
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: E0116 16:11:55.983531 2503 process.cpp:1911] Failed to shutdown socket with fd 11: Transport endpoint is not connected
Jan 16 16:11:57 slave1.novalocal mesos-slave[2494]: E0116 16:11:57.465049 2503 process.cpp:1911] Failed to shutdown socket with fd 11: Transport endpoint is not connected
So the problematic line is:
Jan 16 16:11:55 slave1.novalocal mesos-slave[2494]: I0116 16:11:55.629093 2502 slave.cpp:3215] master@127.0.0.1:5050 exited
Specifically, note that it's detecting the master as having the IP address 127.0.0.1. The Mesos agent[1] sees that IP address and tries to connect, which fails (the master isn't running on the same machine as the agent).
This happens because the master announces what it thinks its IP address is into ZooKeeper. In your case, the master thinks its IP is 127.0.0.1 and stores that into zk. Mesos has several configuration flags to control this behavior, mainly --hostname, --no-hostname_lookup, --ip, and --ip_discovery_command, plus the environment variable LIBPROCESS_IP. See http://mesos.apache.org/documentation/latest/configuration/ for details about them and what they do.
The best thing you can do to make sure things work out of the box is to make sure the machines have resolvable hostnames. Mesos does a DNS lookup of the box's hostname in order to figure out which IP people will contact it on.
If you can't get the hostnames set up properly, I would recommend setting --hostname and --ip manually, which should cause Mesos to announce exactly what you want.
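If you are using the Mesosphere packages, those flags are typically set through files under /etc/mesos-master (my understanding of that packaging; adapt to however you launch the master). On each master, something like:
# announce a reachable address instead of 127.0.0.1 (IP taken from your zk string as an example)
echo 10.0.0.2 | sudo tee /etc/mesos-master/ip
echo master1.example.com | sudo tee /etc/mesos-master/hostname
sudo systemctl restart mesos-master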
[1] The Mesos slave has been renamed to agent; see: https://issues.apache.org/jira/browse/MESOS-1478