I can take a screenshot when my local machine is connected to a device, using:
idevicescreenshot original_screenshot.tiff
sips -s format png original_screenshot.tiff --out converted_screenshot.png
but when I try to take a screenshot from a remote device connected to a remote machine like this:
ssh $sshUser@$sshHost "idevicescreenshot original_screenshot.tiff ;
sips -s format png original_screenshot.tiff --out converted_screenshot.png;
exit"
I get a black screenshot.
Any ideas?
Using pg_dump and pg_restore to move a Postgres database from my local Windows machine to a Linux server seemed so simple:
pg_dump --format=c -U user localdbs > file.pg.dump
It spits out a file. Then upload and restore:
pg_restore -c -d serverdbs -v file.pg.dump
pg_restore: error: input file does not appear to be a valid archive
This works perfectly from Linux to Linux. Changing the file extension makes no difference. Changing the encoding at either end makes no difference. Pulling your hair out makes no difference.
It's the > redirection operator in the pg_dump command. It looks like it works on Windows: it spits out a file, but that file is not properly encoded. On Linux the same command works flawlessly.
On Windows you have to use -f file.pg.dump instead, and then everything works.
Hope this saves someone the nightmare I had figuring this out.
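For reference, a minimal sketch of the working pair of commands, reusing the names from the question:
# On Windows: let pg_dump write the archive itself instead of shell-redirecting it
pg_dump --format=c -U user -f file.pg.dump localdbs
# On the Linux server, after uploading the file
pg_restore -c -d serverdbs -v file.pg.dump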
I would like to have a tcpdump script which rotates its dump files, say, every hour.
I can achieve this quite simply:
tcpdump -i eth0 -G 3600 -w /tmp/files/<some-name>-%F-%H-%M-%S.pcap -Z root -z gzip
I want to MOVE the "finished" files to S3, for which I'm using the rclone tool:
rclone move /tmp/files remote:<s3 bucket name> --filter "- *.pcap"
All runs fine apart from the fact that whenever I move any of the *.pcap.gz files, the *.pcap file currently being written balloons with all of the rclone session traffic, which makes the file pretty big.
Does this mean that I can't move any of the files out of the directory and have to restart the tcpdump command on a regular basis?
Thanks
Modify your tcpdump command to add a capture filter that excludes the rclone traffic. For example, assuming the remote IP address and TCP port number are 192.0.2.1 and 1234, respectively, apply a capture filter of "not (host 192.0.2.1 and tcp port 1234)" to exclude that traffic.
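For example, combined with the command from the question (192.0.2.1 and 1234 are the placeholder values above; substitute your real S3 endpoint and port):
tcpdump -i eth0 -G 3600 -w /tmp/files/<some-name>-%F-%H-%M-%S.pcap -Z root -z gzip "not (host 192.0.2.1 and tcp port 1234)"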
The following command works well:
$ psql -c "copy (select * from foo limit 3) to stdout csv header"
# output
column1,column2
val1,val2
val3,val4
val5,val6
However, the following does not:
$ psql -c "copy (select * from foo limit 3) to program 'gzip -f --stdout' csv header"
# output
COPY 3
Why do I have COPY 3 as the output from this command? I would expect that the output would be the compressed CSV string, after passing through gzip.
The command below works, for instance:
$ psql -c "copy (select * from foo limit 3) to stdout csv header" | gzip -f -c
# output (this garbage is just the compressed string and is as expected)
߉T`M�A �0 ᆬ}6�BL�I+�^E�gv�ijAp���qH�1����� FfВ�,Д���}������+��
How can I make a single SQL command that pipes the result directly into gzip and sends the compressed string to STDOUT?
When you use COPY ... TO PROGRAM, the PostgreSQL server process (backend) starts a new process and pipes the file to the process's standard input. The standard output of that process is lost. It only makes sense to use COPY ... TO PROGRAM if the called program writes the data to a file or similar.
If your goal is to compress the data that go across the network, you could use sslmode=require sslcompression=on in your connect string to use the SSL network compression feature I built into PostgreSQL 9.2. Unfortunately this has been deprecated and most OpenSSL binaries are shipped with the feature disabled.
There is currently a native network compression patch under development, but it is questionable whether that will make v14.
Other than that, you cannot get what you want at the moment.
copy is running gzip on the server and not forwarding the STDOUT from gzip on to the client.
You can use \copy instead, which would run gzip on the client:
psql -q -c "\copy (select * from foo limit 3) to program 'gzip -f --stdout' csv header"
This is fundamentally the same as piping to gzip, which you show in your question.
If the goal is to compress the output of copy so it transfers faster over the network, then...
psql "postgresql://ip:port/dbname?sslmode=require&sslcompression=1"
It should display "compression active" if it's enabled. That probably requires some server config variable to be enabled though.
Or you can simply use ssh:
ssh user@dbserver "psql -c \"copy (select * from foo limit 3) to stdout csv header\" | gzip -f -c" >localfile.csv.gz
But... of course, you need ssh access to the db server.
If you don't have ssh access to the db server, maybe you have ssh access to another box in the same datacenter with a fast network link to the db server; in that case you can ssh to that box instead of the db server. Data will be transferred uncompressed between that box and the database, compressed on the box, and piped via ssh to your local machine. That will even save CPU on the database server, since it won't be doing the compression.
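A sketch of that relay, reusing the command above; the intermediate host jumpbox and the dbserver hostname passed to -h are hypothetical names for illustration:
ssh user@jumpbox "psql -h dbserver -c \"copy (select * from foo limit 3) to stdout csv header\" | gzip -f -c" >localfile.csv.gz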
If that doesn't work either, why not put the ssh command into the "to program" clause and have the server send the data via ssh to your machine? You'll have to set up your router and open a port, but it can be done. Of course, you'll have to find a way to put the password on the ssh command line, which is usually a big no-no, but maybe just this once. Or just use netcat instead, which doesn't require a password.
Also, if you want speed, please use zstd instead of gzip.
Here's an example with netcat. I just tested it and it worked.
On the destination machine, which is 192.168.0.1:
nc -lp 65001 | zstd -d >file.csv
In another terminal:
psql -c "copy (select * from foo) to program 'zstd -9 |nc -N 192.168.0.1 65001' csv header" test
Note the -N option for netcat: it shuts down the socket when it reaches EOF on stdin, so the listener exits when the copy finishes.
You can use COPY ... TO PROGRAM:
COPY foo_table TO PROGRAM 'gzip > /tmp/foo_table.csv.gz' DELIMITER ',' CSV HEADER;
I'm using QEMU as my Raspberry Pi emulator.
I write my code in an IDE on Windows, and I'm having a hard time transferring files from Windows to QEMU every time.
I tried using WinSCP, but it did not let me connect with the default credentials.
Is there anything I need to do or configure to use WinSCP for transferring files directly?
Go to https://azeria-labs.com/emulate-raspberry-pi-with-qemu/
$ qemu-system-arm -kernel ~/qemu_vms/<your-kernel-qemu> -cpu arm1176 -m 256 -M versatilepb -serial stdio -append "root=/dev/sda2 rootfstype=ext4 rw" -hda ~/qemu_vms/<your-jessie-image.img> -redir tcp:5022::22 -no-reboot
scp -P 5022 <file_to_transfer> pi@127.0.0.1:~
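Note that the -redir option has been removed from newer QEMU releases; there the equivalent forward is hostfwd. A sketch of just the networking flags (untested against this particular image):
qemu-system-arm ... -net nic -net user,hostfwd=tcp::5022-:22 ...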
Another handy way: :)
Mount the .img file on your host (use Disk Image Mounter on Linux; don't forget to mount it in non-read-only mode)
Add your file to the img file (don't add it to the boot partition)
Unmount the img
Load the .img file in QEMU and find your file at the proper path (see the sketch below)
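A minimal sketch of those steps on the command line, assuming the image is called raspbian-jessie.img; the start sector of the root (non-boot) partition is hypothetical here, so read the real one from fdisk -l raspbian-jessie.img and multiply by 512 to get the byte offset:
sudo mkdir -p /mnt/rpi
sudo mount -o loop,offset=$((137216*512)) raspbian-jessie.img /mnt/rpi
sudo cp file_to_transfer /mnt/rpi/home/pi/
sudo umount /mnt/rpi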
I am making an MJPEG video stream using a Raspberry Pi with a dedicated Pi Camera. For this I am using the jpeg libraries and the following web application found on GitHub. The use is pretty straightforward: you just type cd mjpg-streamer/mjpg-streamer-experimental and then ./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so". However, I would like to make it run on every reboot, so that the camera is "maintenance free".
I read that I need to put the path to the executable file in /etc/rc.local. Nevertheless, when I put the path (mjpg-streamer/mjpg-streamer-experimental/mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so") to this executable file there, it did not work at all. I tried to run the stream as one command in the terminal; it did not work either. I also tried to set a PATH variable in .bashrc in order to access it from /etc/rc.local, but that did not work either.
I suspect it might have something to do with the command ./mjpg_streamer needing some arguments to work (-o "output_http.so -w ./www" -i "input_raspicam.so").
Do you have any idea how to start it on every reboot?
Thanks for your time and help
I solved a similar issue for my RPi and mjpg-streamer as follows.
Create a shell script in /home/pi:
touch /home/pi/mjpg-streamer.sh
Edit that shell script and add this content:
#!/bin/bash
cd /home/pi/mjpg-streamer/mjpg-streamer-experimental/
# export so mjpg_streamer can find its plugin .so files in this directory
export LD_LIBRARY_PATH=.
./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so"
Make sure the new shell script has execution rights.
Add that shell script to your /etc/rc.local (see the sketch below).
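For example (a sketch: the chmod line grants execute permission, and the rc.local snippet assumes a stock Raspbian rc.local, which must keep exit 0 as its last line):
chmod +x /home/pi/mjpg-streamer.sh
# in /etc/rc.local, before the final exit 0:
/home/pi/mjpg-streamer.sh &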