I want to set my printer to monochrome mode on a Raspberry Pi in Python with pycups - raspberry-pi

I have set up my Epson_L1110_Series printer in CUPS. It supports color printing and currently prints only in color. I am using the printFile() function for printing, and I need to print in monochrome mode. Thank you for your help.
import cups
conn = cups.Connection()
file_name = '/home/pi/Downloads/File.pdf'
conn.printFile("EPSON_L1110_Series", file_name, "Pi Report",{"cpi": "6","lpi":"6","orientation-requested":"3", "print-color-mode":"monochrome"})
But it is not working; the printer still prints in color.
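Two things worth checking, sketched below rather than asserted: whether the queue actually advertises monochrome support, and whether the driver expects the PPD-level ColorModel option instead of (or in addition to) the IPP print-color-mode attribute. "Gray" is a common PPD value but may differ for this Epson driver; `lpoptions -p EPSON_L1110_Series -l` lists what the queue really accepts.

import cups

conn = cups.Connection()
printer = "EPSON_L1110_Series"

# Inspect which color options the queue advertises; pycups returns the
# IPP attributes as a plain dict (the key may be absent on some drivers).
attrs = conn.getPrinterAttributes(printer)
print(attrs.get("print-color-mode-supported"))  # e.g. ['color', 'monochrome']

# Many driver PPDs use ColorModel rather than print-color-mode, so passing
# both costs nothing. The "Gray" value here is an assumption to verify.
conn.printFile(printer, "/home/pi/Downloads/File.pdf", "Pi Report",
               {"ColorModel": "Gray", "print-color-mode": "monochrome"})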

Related

Raspberry Pi 4 Aborts SSH Connection When TensorFlow Lite Has Initialized After Running Python 3 Script

I want to use object detection with TensorFlow Lite to detect a clear face or a covered face, where the statement "door opens" is printed when a clear face is detected. I could run this code smoothly before, but after rebooting the Raspberry Pi 4 the board now drops the SSH connection completely once the TensorFlow Lite runtime has initialized. The following is the code:
######## Webcam Object Detection Using Tensorflow-trained Classifier #########
#
# Author: Evan Juras
# Date: 10/27/19
# Description:
# This program uses a TensorFlow Lite model to perform object detection on a live webcam
# feed. It draws boxes and scores around the objects of interest in each frame from the
# webcam. To improve FPS, the webcam object runs in a separate thread from the main program.
# This script will work with either a Picamera or regular USB webcam.
#
# This code is based off the TensorFlow Lite image classification example at:
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/python/label_image.py
#
# I added my own method of drawing boxes and labels using OpenCV.
# Import packages
import os
import argparse
import cv2
import numpy as np
import sys
import time
from threading import Thread
import importlib.util
import simpleaudio as sa
# Define VideoStream class to handle streaming of video from webcam in separate processing thread
# Source - Adrian Rosebrock, PyImageSearch: https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/
class VideoStream:
    """Camera object that controls video streaming from the Picamera"""
    def __init__(self, resolution=(640, 480), framerate=30):
        # Initialize the PiCamera and the camera image stream
        self.stream = cv2.VideoCapture(0)
        ret = self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
        ret = self.stream.set(3, resolution[0])
        ret = self.stream.set(4, resolution[1])
        # Read first frame from the stream
        (self.grabbed, self.frame) = self.stream.read()
        # Variable to control when the camera is stopped
        self.stopped = False

    def start(self):
        # Start the thread that reads frames from the video stream
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # Keep looping indefinitely until the thread is stopped
        while True:
            # If the camera is stopped, stop the thread
            if self.stopped:
                # Close camera resources
                self.stream.release()
                return
            # Otherwise, grab the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # Return the most recent frame
        return self.frame

    def stop(self):
        # Indicate that the camera and thread should be stopped
        self.stopped = True
# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
                    required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
                    default='masktracker.tflite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
                    default='facelabelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
                    default=0.5)
parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not support the resolution entered, errors may occur.',
                    default='640x480')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
                    action='store_true')
args = parser.parse_args()
MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
resW, resH = args.resolution.split('x')
imW, imH = int(resW), int(resH)
use_TPU = args.edgetpu
# Import TensorFlow libraries
# If tflite_runtime is installed, import interpreter from tflite_runtime, else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tflite_runtime')
if pkg:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'masktracker.tflite'):
        GRAPH_NAME = 'edgetpu.tflite'
# Get path to current working directory
CWD_PATH = os.getcwd()
# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,GRAPH_NAME)
# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,MODEL_NAME,LABELMAP_NAME)
# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])
# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)

interpreter.allocate_tensors()
# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]
floating_model = (input_details[0]['dtype'] == np.float32)
input_mean = 127.5
input_std = 127.5
# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()
global image_capture
image_capture = 0
# Initialize video stream
videostream = VideoStream(resolution=(imW,imH),framerate=30).start()
time.sleep(1)
#for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
while True:
    # Start timer (for calculating frame rate)
    t1 = cv2.getTickCount()
    # Grab frame from video stream
    frame1 = videostream.read()
    # Acquire frame and resize to expected shape [1xHxWx3]
    frame = frame1.copy()
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame_resized = cv2.resize(frame_rgb, (width, height))
    input_data = np.expand_dims(frame_resized, axis=0)
    # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
    if floating_model:
        input_data = (np.float32(input_data) - input_mean) / input_std
    # Perform the actual detection by running the model with the image as input
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()
    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[0]['index'])[0]   # Bounding box coordinates of detected objects
    classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
    scores = interpreter.get_tensor(output_details[2]['index'])[0]  # Confidence of detected objects
    #num = interpreter.get_tensor(output_details[3]['index'])[0]    # Total number of detected objects (inaccurate and not needed)
    # Loop over all detections and draw detection box if confidence is above minimum threshold
    for i in range(len(scores)):
        if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):
            # Get bounding box coordinates and draw box
            # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
            ymin = int(max(1, (boxes[i][0] * imH)))
            xmin = int(max(1, (boxes[i][1] * imW)))
            ymax = int(min(imH, (boxes[i][2] * imH)))
            xmax = int(min(imW, (boxes[i][3] * imW)))
            # Draw label
            object_name = labels[int(classes[i])]  # Look up object name from "labels" array using class index
            if (object_name == "face unclear"):
                color = (0, 255, 0)
                cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
                print("Face Covered: Door Not Opened")
                if (image_capture == 0):
                    path = r'/home/pi/Desktop/tflite_1/photographs'
                    date_string = time.strftime("%Y-%m-%d_%H%M%S")
                    #print(date_string)
                    cv2.imwrite(os.path.join(path, (date_string + ".jpg")), frame)
                    #cv2.imshow("Photograph",frame)
                    #mp3File = input(alert_audio.mp3)
                    print("Photo Taken")
                    #w_object = sa.WaveObject.from_wave_file('alert_audio.wav')
                    #p_object = w_object.play()
                    #p_object.wait_done()
                    image_capture = 1
            else:
                color = (0, 0, 255)
                cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
                print("Face Clear: Door Opened")
                image_capture = 0
            #cv2.rectangle(frame, (xmin,ymin), (xmax,ymax),color, 2)
            #image = np.asarray(ImageGrab.grab())
            label = '%s: %d%%' % (object_name, int(scores[i]*100))  # Example: 'person: 72%'
            labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)  # Get font size
            label_ymin = max(ymin, labelSize[1] + 10)  # Make sure not to draw label too close to top of window
            cv2.rectangle(frame, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED)  # Draw white box to put label text in
            cv2.putText(frame, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2)  # Draw label text
    if (scores[0] < min_conf_threshold):
        cv2.putText(frame, "No Face Detected", (260, 260),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, color=(255, 0, 0))
        print("No Face Detected")
        image_capture = 0
    # Draw framerate in corner of frame
    cv2.putText(frame, 'FPS: {0:.2f}'.format(frame_rate_calc), (30, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 0), 2, cv2.LINE_AA)
    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)
    # Calculate framerate
    t2 = cv2.getTickCount()
    time1 = (t2 - t1) / freq
    frame_rate_calc = 1 / time1
    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break
# Clean up
cv2.destroyAllWindows()
videostream.stop()
Any help is appreciated.
Regards,
MD
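One way to narrow this down, offered as a guess rather than a confirmed diagnosis: cv2.imshow() needs a working display, and over a plain SSH session (no ssh -X or -Y) the HighGUI calls can block or bring the session down. The stripped-down capture loop below takes TensorFlow out of the picture entirely; if this also drops the SSH connection, the culprit is the camera, display, or power supply rather than the TFLite runtime. The DISPLAY check and the structure here are illustrative, not from the original script.

import os
import time
import cv2

# An unset DISPLAY variable usually means a plain SSH session with no X display.
HEADLESS = os.environ.get('DISPLAY') is None

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if HEADLESS:
        print('frame grabbed:', frame.shape)  # log instead of drawing
        time.sleep(0.05)
    else:
        cv2.imshow('Object detector', frame)
        if cv2.waitKey(1) == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()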

How to show the color image feed from Kinect using freenect2-python

I'm using freenect2-python to read frames from kinectv2. Following is my code:
from freenect2 import Device, FrameType
import cv2
import numpy as np
def callback(type_, frame):
    print(f'{type_}, {frame.format}')
    if type_ is FrameType.Color:  # FrameFormat.BGRX
        rgb = frame.to_array().astype(np.uint8)
        cv2.imshow('rgb', rgb[:, :, 0:3])

device = Device()
while True:
    device.start(callback)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        device.stop()
        break
The color frame format is FrameFormat.BGRX, so I'm taking the first 3 channels to show the image. But it shows a blank black window.
I used PIL but it opens a new window for each frame it receives. Is there a way to show frames in the same window in PIL?
cv2.imshow() couldn't display anything because it was continuously being fed a new frame before the window could repaint. I added cv2.waitKey(100) after cv2.imshow() and it worked.

def callback(type_, frame):
    print(f'{type_}, {frame.format}')
    if type_ is FrameType.Color:  # FrameFormat.BGRX
        rgb = frame.to_array().astype(np.uint8)
        cv2.imshow('rgb', rgb[:, :, 0:3])
        # added the following line of code
        cv2.waitKey(100)
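For completeness, a consolidated sketch of the fixed version, following the Device.start(callback) usage from the question. Starting the device once outside the loop is my reading of the intended usage, not something stated in the answer, so treat it as an assumption:

from freenect2 import Device, FrameType
import cv2
import numpy as np
import time

def callback(type_, frame):
    if type_ is FrameType.Color:  # FrameFormat.BGRX
        bgrx = frame.to_array().astype(np.uint8)
        cv2.imshow('rgb', bgrx[:, :, 0:3])  # drop the unused X channel
        cv2.waitKey(100)  # the fix: give HighGUI time to repaint the window

device = Device()
device.start(callback)  # register the callback once; it then fires per frame
try:
    time.sleep(30)  # stream for 30 seconds (placeholder duration)
finally:
    device.stop()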

Grove I2C display not working on Raspberry Pi 3B

I only have my RPi 3B and an I²C display; I don't own any GrovePi hat, and I want to show some text on the said display. Is there a way to make it work?
My display.
i2cdetect -y 1 shows this result, meaning the RPi is detecting the I²C display and it should work. But nothing seems to show on the display except a full block on the first row.
I've tried literally everything on the internet; some libraries worked (as in they threw no errors), but the display still stays the same: a full block on the first row.
I've installed python3, smbus, smbus2 and i2c-tools. I've changed the address to 3e.
My most recent *.py file is below. It does print 'LCD printing!' yet nothing else works.
I can't find a way to change the contrast of the LCD (no potentiometer or anything similar).
#!/usr/bin/python3
import smbus2 as smbus
import time

# Define some device parameters
I2C_ADDR = 0x3e  # I2C device address, if any error, change this address to 0x3f
LCD_WIDTH = 16   # Maximum characters per line

# Define some device constants
LCD_CHR = 1  # Mode - Sending data
LCD_CMD = 0  # Mode - Sending command
LCD_LINE_1 = 0x80  # LCD RAM address for the 1st line
LCD_LINE_2 = 0xC0  # LCD RAM address for the 2nd line
LCD_LINE_3 = 0x94  # LCD RAM address for the 3rd line
LCD_LINE_4 = 0xD4  # LCD RAM address for the 4th line
LCD_BACKLIGHT = 0x08  # On
ENABLE = 0b00000100   # Enable bit

# Timing constants
E_PULSE = 0.0005
E_DELAY = 0.0005

# Open I2C interface
# bus = smbus.SMBus(0)  # Rev 1 Pi uses 0
bus = smbus.SMBus(1)    # Rev 2 Pi uses 1

def lcd_init():
    # Initialise display
    lcd_byte(0x33, LCD_CMD)  # 110011 Initialise
    lcd_byte(0x32, LCD_CMD)  # 110010 Initialise
    lcd_byte(0x06, LCD_CMD)  # 000110 Cursor move direction
    lcd_byte(0x0C, LCD_CMD)  # 001100 Display On, Cursor Off, Blink Off
    lcd_byte(0x28, LCD_CMD)  # 101000 Data length, number of lines, font size
    lcd_byte(0x01, LCD_CMD)  # 000001 Clear display
    time.sleep(E_DELAY)

def lcd_byte(bits, mode):
    # Send byte to data pins
    # bits = the data
    # mode = 1 for data, 0 for command
    bits_high = mode | (bits & 0xF0) | LCD_BACKLIGHT
    bits_low = mode | ((bits << 4) & 0xF0) | LCD_BACKLIGHT
    # High bits
    bus.write_byte(I2C_ADDR, bits_high)
    lcd_toggle_enable(bits_high)
    # Low bits
    bus.write_byte(I2C_ADDR, bits_low)
    lcd_toggle_enable(bits_low)

def lcd_toggle_enable(bits):
    # Toggle enable
    time.sleep(E_DELAY)
    bus.write_byte(I2C_ADDR, (bits | ENABLE))
    time.sleep(E_PULSE)
    bus.write_byte(I2C_ADDR, (bits & ~ENABLE))
    time.sleep(E_DELAY)

def lcd_string(message, line):
    # Send string to display
    message = message.ljust(LCD_WIDTH, " ")
    lcd_byte(line, LCD_CMD)
    for i in range(LCD_WIDTH):
        lcd_byte(ord(message[i]), LCD_CHR)

if __name__ == '__main__':
    lcd_init()
    while True:
        # Send some test
        lcd_string("Hello ", LCD_LINE_1)
        lcd_string(" World", LCD_LINE_2)
        print('LCD printing!')
        time.sleep(3)
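A guess at the root cause, since no answer is recorded here: the nibble-by-nibble code above is written for HD44780 displays behind a PCF8574 I/O expander, but many Grove 16x2 displays that sit at address 0x3e use an AiP31068-style controller that takes whole bytes through a control register instead, which would explain the single row of full blocks. The 0x80/0x40 control bytes below follow Seeed's Grove LCD drivers; verify them against your display's datasheet. A minimal sketch:

import smbus2 as smbus
import time

I2C_ADDR = 0x3e
bus = smbus.SMBus(1)

def command(cmd):
    # 0x80 selects the command register on AiP31068-style controllers
    bus.write_byte_data(I2C_ADDR, 0x80, cmd)

def write_char(ch):
    # 0x40 selects the data (DDRAM) register
    bus.write_byte_data(I2C_ADDR, 0x40, ord(ch))

command(0x01)  # clear display
time.sleep(0.05)
command(0x0C)  # display on, cursor off, blink off
command(0x28)  # function set: 2 lines, 5x8 font
time.sleep(0.05)
for c in "Hello World":
    write_char(c)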

PowerShell >6.0: command-line background color with ANSI sequences (PSReadLine)? How?

In PowerShell >6.0 I can change the command line FOREGROUND colors with:
Set-PSReadLineOption -Colors @{ Keyword="#0FAFE0"; Variable="#987ABC" }
but how do I change the BACKGROUND colors with RGB (#RRGGBB)?
I can see some examples with ANSI console sequences,
but none with the RGB format.
I don't think you can? SelectionColor uses the ANSI escape code for black on white: "`e[30;47m" (PS 7 for the `e). https://en.wikipedia.org/wiki/ANSI_escape_code#Colors
How about this, from that Wikipedia page:
ESC[ 38;2;⟨r⟩;⟨g⟩;⟨b⟩ m -- choose RGB foreground color
ESC[ 48;2;⟨r⟩;⟨g⟩;⟨b⟩ m -- choose RGB background color
Red foreground (255 0 0), blue (0 0 255) background:
Set-PSReadLineOption -Colors @{ variable = "`e[38;2;255;0;0m" + # fg
                                "`e[48;2;0;0;255m" } # bg
In PS5 you have to say $([char]0x1b) instead of `e.
$e = [char]0x1b
Set-PSReadLineOption -Colors @{ variable = "$e[38;2;255;0;0m" + # fg
                                "$e[48;2;0;0;255m" } # bg

Ghostscript -- convert PS to PNG, rotate, and scale

I have an application that creates very nice data plots rendered in PostScript with letter size and landscape mode. An example of the input file is at http://febo.com/uploads/blip.ps. [ Note: this image renders properly in a viewer, but the PNG conversion comes out with the image sideways. ] I need to convert these PostScript files into PNG images that are scaled down and rotated 90 degrees for web presentation.
I want to do this with ghostscript and no other external tool, because the conversion program will be used on both Windows and Linux systems and gs seems to be a common denominator. (I'm creating a perl script with a "PS2png" function that will call gs, but I don't think that's relevant to the question.)
I've searched the web and spent a couple of days trying to modify examples I've found, but nothing I have tried does the combination of (a) rotate, (b) resize, (c) maintain the aspect ratio and (d) avoid clipping.
I did find an example that injects a "scale" command into the postscript stream, and that seems to work well to scale the image to the desired size while maintaining the aspect ratio. But I can't find a way to rotate the resized image so that the, e.g., 601 x 792 point (2504 x 3300 pixel) postscript input becomes an 800 x 608 pixel png output.
I'm looking for the ghostscript/postscript fu that I can pass to the gs command line to accomplish this.
I've tried gs command lines with various combinations of -dFIXEDMEDIA, -dFitPage, -dAutoRotatePages=/None, or /All, -c "<> setpagedevice", changing -dDISPLAYWIDTHPOINTS and -dDISPLAYHEIGHTPOINTS, -g[width]x[height], -dUseCropBox with rotated coordinates, and other things I've forgotten. None of those worked, though it wouldn't surprise me if there's a magic combination of some of them that will. I just haven't been able to find it.
Here is the core code that produces the scaled but not rotated output:
## "$molps" is the input ps file read to a variable
## insert the PS "scale" command
$molps = $sf . " " . $sf . " scale\n" . $molps;
$gsopt1 = " -r300 -dGraphicsAlphaBits=4 -dTextAlphaBits=4";
$gsopt1 = $gsopt1 . " -dDEVICEWIDTHPOINTS=$device_width_points";
$gsopt1 = $gsopt1 . " -dDEVICEHEIGHTPOINTS=$device_height_points";
$gsopt1 = $gsopt1 . " -sOutputFile=" . $outfile;
$gscmd = "gs -q -sDEVICE=pnggray -dNOPAUSE -dBATCH " . $gsopt1 . " - ";
system("echo \"$molps\" \| $gscmd");
$device_width_points and $device_height_points are calculated by taking the original image size and applying the scaling factor $sf.
I'll be grateful to anyone who can show me the way to accomplish this. Thanks!
Better Answer:
You almost had it with your initial research. Just set orientation in the gs call:
... | gs ... -dAutoRotatePages=/None -c '<</Orientation 3>> setpagedevice' ...
cf. discussion of setpagedevice in the Red Book, and ghostscript docs (just before section 6.2)
Original Answer:
As well as "scale", you need "rotate" and "translate", not necessarily in that order.
Presumably these are single-page PostScript files?
If you know the bounding box of the Postscript, and the dimensions of the png, it is not too arduous to calculate the necessary transformation. It'll be about one line of code. You just need to ensure you inject it in the correct place.
Chapter 6 of the Blue Book has lots of details
A ubc.ca paper provides some illustrated examples (skip to page 4)
Simple PostScript file to play around with. You'll just need the three translate,scale,rotate commands in some order. The rest is for demonstrating what's going on.
%!
% function to define a 400x400 box outline, origin at 0,0 (bottom left)
/box { 0 0 moveto 0 400 lineto 400 400 lineto 400 0 lineto closepath } def
box clip % pretend the box is our bounding box
clippath stroke % draw initial black bounding box
(Helvetica) findfont 50 scalefont setfont % setup a font
% draw box, and show some text # 100,100
box stroke
100 100 moveto (original) show
% try out some transforms
1 0 0 setrgbcolor % red
.5 .5 scale
box stroke
100 100 moveto (+scaled) show
0 1 0 setrgbcolor % green
300 100 translate
box stroke
100 100 moveto (+translated) show
0 0 1 setrgbcolor % blue
45 rotate
box stroke
100 100 moveto (+rotated) show
showpage
It may be possible to insert the calculated transformation into the gs commandline like this:
... | gs ... -c '1 2 scale 3 4 translate 5 6 rotate' -# ...
Thanks to JHNC, I think I have it licked now, and for the benefit of posterity, here's what worked. (Please upvote JHNC, not this answer.)
## code to determine original size, scaling factor, rotation goes above
my $device_width_points;
my $device_height_points;
my $orientation;
if ($rotation) {
    $orientation = 3;
    $device_width_points = $ytotal_png_pt;
    $device_height_points = $xtotal_png_pt;
} else {
    $orientation = 0;
    $device_width_points = $xtotal_png_pt;
    $device_height_points = $ytotal_png_pt;
}
my $orientation_string =
    " -dAutoRotatePages=/None -c '<</Orientation " .
    $orientation . ">> setpagedevice'";
## $ps = .ps file read into variable
## insert the PS "scale" command
$ps = $sf . " " . $sf . " scale\n" . $ps;
$gsopt1 = " -r300 -dGraphicsAlphaBits=4 -dTextAlphaBits=4";
$gsopt1 = $gsopt1 . " -dDEVICEWIDTHPOINTS=$device_width_points";
$gsopt1 = $gsopt1 . " -dDEVICEHEIGHTPOINTS=$device_height_points";
$gsopt1 = $gsopt1 . " -sOutputFile=" . $outfile;
$gsopt1 = $gsopt1 . $orientation_string;
$gscmd = "gs -q -sDEVICE=pnggray -dNOPAUSE -dBATCH " . $gsopt1 . " - ";
system("echo \"$ps\" \| $gscmd");
One of the problems I had was that some options apparently don't play well together -- for example, I tried using the -g option to set the output size in pixels, but in that case the rotation didn't work. Using the DEVICE...POINTS commands instead did work.
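For anyone doing this from Python instead of Perl, the same pipeline translates directly. This is a sketch mirroring the code above, not part of the original answer; the scale factor and point sizes are placeholders (601x792 pt input scaled by 0.24, width and height swapped for the rotation) to be computed as described:

import subprocess

sf = 0.24                       # placeholder scale factor
ps = open("blip.ps").read()
ps = f"{sf} {sf} scale\n{ps}"   # inject the PostScript "scale" command

cmd = [
    "gs", "-q", "-sDEVICE=pnggray", "-dNOPAUSE", "-dBATCH",
    "-r300", "-dGraphicsAlphaBits=4", "-dTextAlphaBits=4",
    "-dDEVICEWIDTHPOINTS=190",   # placeholder: swapped width/height for rotation
    "-dDEVICEHEIGHTPOINTS=144",
    "-dAutoRotatePages=/None",
    "-c", "<</Orientation 3>> setpagedevice",
    "-sOutputFile=blip.png",
    "-f", "-",                   # read the modified PostScript from stdin
]
subprocess.run(cmd, input=ps.encode(), check=True)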