I have an application that creates very nice data plots rendered in PostScript with letter size and landscape mode. An example of the input file is at http://febo.com/uploads/blip.ps. [ Note: this image renders properly in a viewer, but the PNG conversion comes out with the image sideways. ] I need to convert these PostScript files into PNG images that are scaled down and rotated 90 degrees for web presentation.
I want to do this with ghostscript and no other external tool, because the conversion program will be used on both Windows and Linux systems and gs seems to be a common denominator. (I'm creating a perl script with a "PS2png" function that will call gs, but I don't think that's relevant to the question.)
I've searched the web and spent a couple of days trying to modify examples I've found, but nothing I have tried does the combination of (a) rotate, (b) resize, (c) maintain the aspect ratio and (d) avoid clipping.
I did find an example that injects a "scale" command into the postscript stream, and that seems to work well to scale the image to the desired size while maintaining the aspect ratio. But I can't find a way to rotate the resized image so that the, e.g., 601 x 792 point (2504 x 3300 pixel) postscript input becomes an 800 x 608 pixel png output.
I'm looking for the ghostscript/postscript fu that I can pass to the gs command line to accomplish this.
I've tried gs command lines with various combinations of -dFIXEDMEDIA, -dFitPage, -dAutoRotatePages=/None, or /All, -c "<> setpagedevice", changing -dDISPLAYWIDTHPOINTS and -dDISPLAYHEIGHTPOINTS, -g[width]x[height], -dUseCropBox with rotated coordinates, and other things I've forgotten. None of those worked, though it wouldn't surprise me if there's a magic combination of some of them that will. I just haven't been able to find it.
Here is the core code that produces the scaled but not rotated output:
## "$molps" is the input ps file read to a variable
## insert the PS "scale" command
$molps = $sf . " " . $sf . " scale\n" . $molps;
$gsopt1 = " -r300 -dGraphicsAlphaBits=4 -dTextAlphaBits=4";
$gsopt1 = $gsopt1 . " -dDEVICEWIDTHPOINTS=$device_width_points";
$gsopt1 = $gsopt1 . " -dDEVICEHEIGHTPOINTS=$device_height_points";
$gsopt1 = $gsopt1 . " -sOutputFile=" . $outfile;
$gscmd = "gs -q -sDEVICE=pnggray -dNOPAUSE -dBATCH " . $gsopt1 . " - ";
system("echo \"$molps\" \| $gscmd");
$device_width_points and $device_height_points are calculated by taking the original image size and applying the scaling factor $sf.
I'll be grateful to anyone who can show me the way to accomplish this. Thanks!
Better Answer:
You almost had it with your initial research. Just set orientation in the gs call:
... | gs ... -dAutoRotatePages=/None -c '<</Orientation 3>> setpagedevice' ...
cf. discussion of setpagedevice in the Red Book, and ghostscript docs (just before section 6.2)
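For reference, folding that into the options already used in the question gives a command along these lines (the two POINTS values stand in for whatever you computed for the rotated page, and the PostScript is piped in on stdin exactly as in the question's Perl code):
gs -q -sDEVICE=pnggray -dNOPAUSE -dBATCH -r300 -dGraphicsAlphaBits=4 -dTextAlphaBits=4 -dDEVICEWIDTHPOINTS=<rotated_width_pts> -dDEVICEHEIGHTPOINTS=<rotated_height_pts> -sOutputFile=out.png -dAutoRotatePages=/None -c '<</Orientation 3>> setpagedevice' -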
Original Answer:
As well as "scale", you need "rotate" and "translate", not necessarily in that order.
Presumably these are single-page PostScript files?
If you know the bounding box of the Postscript, and the dimensions of the png, it is not too arduous to calculate the necessary transformation. It'll be about one line of code. You just need to ensure you inject it in the correct place.
Chapter 6 of the Blue Book has lots of details
A ubc.ca paper provides some illustrated examples (skip to page 4)
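As a rough sketch of that "one line of code" (symbolic values only, not the exact numbers for blip.ps): for content W points wide and H points tall that should end up scaled by sf on a device set to sf*H by sf*W points, prepending
sfH 0 translate 90 rotate sf sf scale
does the job, where sfH stands for sf*H in points. Content drawn afterwards is scaled first, then rotated 90 degrees counter-clockwise, then translated back onto the page.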
Here is a simple PostScript file to play around with. You'll just need the three translate, scale, rotate commands in some order. The rest is for demonstrating what's going on.
%!
% function to define a 400x400 box outline, origin at 0,0 (bottom left)
/box { 0 0 moveto 0 400 lineto 400 400 lineto 400 0 lineto closepath } def
box clip % pretend the box is our bounding box
clippath stroke % draw initial black bounding box
(Helvetica) findfont 50 scalefont setfont % setup a font
% draw box, and show some text # 100,100
box stroke
100 100 moveto (original) show
% try out some transforms
1 0 0 setrgbcolor % red
.5 .5 scale
box stroke
100 100 moveto (+scaled) show
0 1 0 setrgbcolor % green
300 100 translate
box stroke
100 100 moveto (+translated) show
0 0 1 setrgbcolor % blue
45 rotate
box stroke
100 100 moveto (+rotated) show
showpage
It may be possible to insert the calculated transformation into the gs commandline like this:
... | gs ... -c '1 2 scale 3 4 translate 5 6 rotate' - ...
Thanks to JHNC, I think I have it licked now, and for the benefit of posterity, here's what worked. (Please upvote JHNC, not this answer.)
## code to determine original size, scaling factor, rotation goes above
my $device_width_points;
my $device_height_points;
my $orientation;
if ($rotation) {
    $orientation = 3;
    $device_width_points  = $ytotal_png_pt;
    $device_height_points = $xtotal_png_pt;
} else {
    $orientation = 0;
    $device_width_points  = $xtotal_png_pt;
    $device_height_points = $ytotal_png_pt;
}
my $orientation_string =
    " -dAutoRotatePages=/None -c '<</Orientation " .
    $orientation . ">> setpagedevice'";
## $ps = .ps file read into variable
## insert the PS "scale" command
$ps = $sf . " " . $sf . " scale\n" . $ps;
$gsopt1 = " -r300 -dGraphicsAlphaBits=4 -dTextAlphaBits=4";
$gsopt1 = $gsopt1 . " -dDEVICEWIDTHPOINTS=$device_width_points";
$gsopt1 = $gsopt1 . " -dDEVICEHEIGHTPOINTS=$device_height_points";
$gsopt1 = $gsopt1 . " -sOutputFile=" . $outfile;
$gsopt1 = $gsopt1 . $orientation_string;
$gscmd = "gs -q -sDEVICE=pnggray -dNOPAUSE -dBATCH " . $gsopt1 . " - ";
system("echo \"$ps\" \| $gscmd");
One of the problems I had was that some options apparently don't play well together -- for example, I tried using the -g option to set the output size in pixels, but in that case the rotation didn't work. Using the DEVICE...POINTS commands instead did work.
My Problem:
I want to use functions of OpenCV like the MIL-Tracker or MedianFlow-Tracker in Matlab (these functions are not in mexopencv), but I don't know how to do this or understand how it is supposed to work. The documentation of opencv/mexopencv doesn't help me. This doesn't help either: "how do OpenCV shared libraries in matlab?" - the link in that answer is dead.
So is there a way to use these functions in Matlab? And if- How?
Why? As part of my bachelor thesis I have to compare different already-implemented ways to track people.
If you would like to use these functions specifically in MATLAB, you could always write your own MEX file in C/C++ and pass the data back and forth between the two; however, this requires some basic C++ knowledge and an understanding of how MEX files are created.
Personally I would definitely recommend trying this with Python and the OpenCV Python interface, since it is so widely used and better supported than making these calls from MATLAB (plus it's always a useful skill to be able to switch between Python and MATLAB as and when needed).
There is a full example with the MIL-Tracker and the MedianFlow-Tracker (and others) here (which demonstrates using them in both C++ and Python).
Python example:
import cv2
import sys
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
if __name__ == '__main__':

    # Set up tracker.
    # Instead of MIL, you can also use
    tracker_types = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN']
    tracker_type = tracker_types[2]

    if int(minor_ver) < 3:
        tracker = cv2.Tracker_create(tracker_type)
    else:
        if tracker_type == 'BOOSTING':
            tracker = cv2.TrackerBoosting_create()
        if tracker_type == 'MIL':
            tracker = cv2.TrackerMIL_create()
        if tracker_type == 'KCF':
            tracker = cv2.TrackerKCF_create()
        if tracker_type == 'TLD':
            tracker = cv2.TrackerTLD_create()
        if tracker_type == 'MEDIANFLOW':
            tracker = cv2.TrackerMedianFlow_create()
        if tracker_type == 'GOTURN':
            tracker = cv2.TrackerGOTURN_create()

    # Read video
    video = cv2.VideoCapture("videos/chaplin.mp4")

    # Exit if video not opened.
    if not video.isOpened():
        print("Could not open video")
        sys.exit()

    # Read first frame.
    ok, frame = video.read()
    if not ok:
        print("Cannot read video file")
        sys.exit()

    # Define an initial bounding box
    bbox = (287, 23, 86, 320)

    # Uncomment the line below to select a different bounding box
    bbox = cv2.selectROI(frame, False)

    # Initialize tracker with first frame and bounding box
    ok = tracker.init(frame, bbox)

    while True:
        # Read a new frame
        ok, frame = video.read()
        if not ok:
            break

        # Start timer
        timer = cv2.getTickCount()

        # Update tracker
        ok, bbox = tracker.update(frame)

        # Calculate Frames per second (FPS)
        fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)

        # Draw bounding box
        if ok:
            # Tracking success
            p1 = (int(bbox[0]), int(bbox[1]))
            p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
            cv2.rectangle(frame, p1, p2, (255, 0, 0), 2, 1)
        else:
            # Tracking failure
            cv2.putText(frame, "Tracking failure detected", (100, 80), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)

        # Display tracker type on frame
        cv2.putText(frame, tracker_type + " Tracker", (100, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50, 170, 50), 2)

        # Display FPS on frame
        cv2.putText(frame, "FPS : " + str(int(fps)), (100, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50, 170, 50), 2)

        # Display result
        cv2.imshow("Tracking", frame)

        # Exit if ESC pressed
        k = cv2.waitKey(1) & 0xff
        if k == 27:
            break
I would definitely try it using Python (if this is an option). Otherwise, if MATLAB is a must, then probably try implementing the C++ example code shown in the link above as a MEX file, linking OpenCV during compilation, i.e.
mex trackerMexOpenCV.cpp 'true filepath location to openCV lib'
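For completeness, the bare bones of such a MEX file might look like the sketch below. The file name trackerMexOpenCV.cpp and the argument layout are just placeholders; the actual OpenCV tracker calls and the mxArray-to-cv::Mat conversion (which has to account for MATLAB's column-major layout) are only indicated in comments.
#include "mex.h"

// Hypothetical gateway for trackerMexOpenCV.cpp -- a sketch, not a working tracker.
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs < 1) {
        mexErrMsgIdAndTxt("tracker:input", "Expected an image frame as the first input.");
    }

    // 1. Convert the input frame (prhs[0]) into a cv::Mat.
    // 2. Create (on the first call) or update a cv::TrackerMIL / cv::TrackerMedianFlow instance.
    // 3. Write the resulting bounding box back to MATLAB.
    plhs[0] = mxCreateDoubleMatrix(1, 4, mxREAL);  // placeholder [x y w h] output
}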
I hope this helps!
I am quite new to the Unix field and I am currently trying to extract a data set from a text file. I tried sed, grep and awk, but they only seem to work with extracting single lines, whereas I want to extract entire data sets. Here is an example of the file from which I'd like to extract the 2 data sets (the figures after the lines "R.Time Intensity"):
[Header]
Application Name LabSolutions
Version 5.87
Data File Name C:\LabSolutions\Data\Antoine\170921_AC_FluoSpectra\069_WT3a derivatized lignin LiCl 430_GPC_FOREVER_430_049.lcd
Output Date 2017-10-12
Output Time 12:07:32
[Configuration]
Instrument Name BOTAN127-Instrument1
Instrument # 1
Line # 1
# of Detectors 3
Detector ID Detector A Detector B PDA
Detector Name Detector A Detector B PDA
# of Channels 1 1 2
[LC Chromatogram(Detector A-Ch1)]
Interval(msec) 500
# of Points 9603
Start Time(min) 0,000
End Time(min) 80,017
Intensity Units mV
Intensity Multiplier 0,001
Ex. Wavelength(nm) 405
Em. Wavelength(nm) 430
R.Time (min) Intensity
0,00000 -709779
0,00833 -709779
0,01667 17
0,02500 3
0,03333 7
0,04167 19
0,05000 9
0,05833 5
0,06667 2
0,07500 24
0,08333 48
[LC Chromatogram(Detector B-Ch1)]
Interval(msec) 500
# of Points 9603
Start Time(min) 0,000
End Time(min) 80,017
Intensity Units mV
Intensity Multiplier 0,001
R.Time (min) Intensity
0,00000 149
0,00833 149
0,01667 -1
I would greatly appreciate any idea. Thanks in advance.
Antoine
awk '/^[^0-9]/&&d{d=0} /R.Time/{d=1}d' file
Brief explanation:
Set d as a flag that decides whether a line is printed.
/^[^0-9]/&&d{d=0}: if the regex ^[^0-9] matches and d==1, disable d.
/R.Time/{d=1}: if the string "R.Time" is found, enable d.
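Applied to the sample file above it prints both blocks, header lines included:
R.Time (min) Intensity
0,00000 -709779
0,00833 -709779
... (remaining rows of the first block) ...
0,08333 48
R.Time (min) Intensity
0,00000 149
0,00833 149
0,01667 -1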
awk '/R.Time/,/LC/' file|grep -v -E "R.Time|LC"
The grep part removes the R.Time and LC lines that otherwise come through as part of the awk output.
I think it's a job for sed.
sed '/R.Time/!d;:A;N;/\n$/!bA' infile
I have a YUV file with 150 frames and I want to divide it into 2 files of 75 frames each. How do I go about doing this? Is there any software to do this?
No specific software is needed. All you need to do is read/write the number of bytes that corresponds to a frame into a new file. "Normally" the YCbCr format used is subsampled according to 4:2:0, i.e. the chroma samples are reduced by a factor of 2 both horizontally and vertically, meaning that 1 frame in YCbCr 4:2:0 corresponds to
1 frame = width x height x 3 / 2 bytes
If you are on a linux based system, you can use the dd utility to extract the first n-frames into a new file like this:
dd if=input.yuv bs=1 count=$((width*height*3/2*num_frames)) of=output.yuv
for the first 10 frames of a 1080p clip, the above would be:
dd if=input.yuv bs=1 count=$((1920*1080*3/2*10)) of=output.yuv
or
dd if=input.yuv bs=1 count=31104000 of=output.yuv
or use your favorite programming/scripting language to do this.
For example, the following Python script writes the first 10 frames to a new file (one frame at a time); tweak it to your needs:
#!/usr/bin/env python
f_in = 'BQMall_832x480_60.yuv'
f_out = 'BQMall_first_10_frames.yuv'
f_size = 832 * 480 * 3 // 2   # bytes per 4:2:0 frame (integer division)

with open(f_in, 'rb') as fd_in, open(f_out, 'wb') as fd_out:
    for i in range(10):
        data = fd_in.read(f_size)
        fd_out.write(data)
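Since the original question asks for two 75-frame halves, a minimal sketch along the same lines (assuming the same 4:2:0 layout; the input file name and the 832x480 resolution are placeholders) could be:
#!/usr/bin/env python
f_in = 'input_832x480_150frames.yuv'
frame_size = 832 * 480 * 3 // 2   # bytes per 4:2:0 frame
frames_per_half = 75

with open(f_in, 'rb') as fd_in:
    # write the first 75 frames to one file, the next 75 to another
    for f_out in ('first_75_frames.yuv', 'second_75_frames.yuv'):
        with open(f_out, 'wb') as fd_out:
            for _ in range(frames_per_half):
                fd_out.write(fd_in.read(frame_size))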
I suggest not using bs=1 (it is extremely slow). Set the block size to one frame instead, so count is simply the number of frames:
dd if=176x144.yuv bs=$((176*144*3/2)) count=$FrameNo of=output.yuv
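With the block size set to one frame like this, the second half of a 150-frame clip can be extracted by skipping the first 75 frames (dd's skip counts input blocks, which here equal frames), for example:
dd if=176x144.yuv bs=$((176*144*3/2)) skip=75 count=75 of=second_half.yuv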
I am trying to make a maze in QBasic, but when the pointer touches the maze lines the program does not end. I want the program to end when the circle (which is the pointer) touches the end of the maze. The program is this:
cls
screen 12
DIM p AS STRING
DIM A1 AS STRING
15 print"What do you want to do?"
print"A:Draw AN IMAGE"," B:PLAY A MAZE GAME";
PRINT
PRINT"TYPE 'A' OR 'B'IN CAPITAL FORM"
GOTO 102
99 print "rules to play the maze game:"
print
print "1 use 'W' to move the ball foward"
print "2 use 'S' to move the ball backward"
print "3 use 'A' to move the ball leftward"
print "4 use 'D' to move the ball rightward"
INPUT A1
CLS
goto 10
102 INPUT P
if p="A"then
cls
goto 20
elseif p="B" then
cls
goto 99
elseif p<>"A" AND p<>"B" then
print "Choose between A and B"
GOTO 70
end if
10 pset(120,120)
draw "r100"
pset (120,140)
draw"r80"
pset (200,140)
draw "d100"
pset (220,120)
draw"d140"
pset (220,260)
draw "l90"
pset (200,240)
draw "l50"
pset (130,260)
draw"u50l120u90r60u40l50u60r300d90l35d260l60d30l80
h20l20h20u30r40u5l70d60f40r250u90h40u45r40u40r50u130h40r225d65l50d60l15
d130l40d50l20d15r45d40r20u45r10u10r10u90r100"
pset(150,240)
draw"u50l120u50r60u80l50u20r260d50l35d260l60d30l40h20l20h10r
40u50l120d98f50r290u115h40u20r40u40r50u160h10r140d20l50d60l15
d130h40d90l20d60r45d45r70u45r10u10r10u90r75"
20 dim k as string
x = 110
y = 105
do
k = ucase$(inkey$)
if k="W"then
y = y - 2
elseif k= "S" then
y = y + 8
elseif k="A"then
x = x - 8
elseif k="D" then
x = x + 5
end if
circle (x,y),7,10
loop until k ="Q"
GOTO 45
70 CLS
GOTO 15
if x=120 and y=120 then goto 45
40 cls
45 END
Pls Help
Thanks in Advance....
OK, let's take a peek at your game loop, presented below and reformatted a bit for readability:
do
    k = ucase$(inkey$)
    if k = "W" then
        y = y - 2
    elseif k = "S" then
        y = y + 8
    elseif k = "A" then
        x = x - 8
    elseif k = "D" then
        x = x + 5
    end if
    circle (x, y), 7, 10
loop until k = "Q"
Your win case (if x=120 and y=120 then goto 45) doesn't actually occur within the loop but outside it.
With do loops, only the code between the do and loop statements executes, repeatedly, until the "until" condition returns true. In other words:
do
    'This code will execute
loop until k = "Q"
'This code will only execute after k = "Q"
Put the win case in the do loop and it should work.
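For example (just a sketch; it assumes the win position really is x=120, y=120 and keeps the rest of your loop unchanged):
do
    k = ucase$(inkey$)
    '... movement handling as before ...
    circle (x, y), 7, 10
    if x = 120 and y = 120 then goto 45    ' win check now runs on every pass
loop until k = "Q"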
If I recall correctly, QBasic allows whitespace in the beginning of a line. I recommend using whitespace to organize your code visually so you can see what's going on. Look at how I formatted your main loop. Everything that the do loop controls is tabbed to the right of the do and loop statement. This way you can easily see what the do loop is doing. Everything in the if statement gets the same treatment for similar reasons.
If you get in the habit of indenting your code, you can start to see the code's logic laid out cleanly.
Edit: It seems you're new to programming. If you enjoy it, I recommend learning Python through codecademy rather than QBasic. QBasic encourages some very bad habits, like goto statements.
Is it possible to have an image be 100% of the window width and keep its aspect ratio while resizing an AutoHotKey GUI?
I have a simple GUI as follows:
Gui +Resize
Gui, Add, Picture, w440 h-1 vProductImage, default.png
Gui, Show, , MyApp
The closest thing I have been able to come to is Anchor.ahk http://www.autohotkey.com/board/topic/4105-control-anchoring-v4-for-resizing-windows/
Using it, I can have the image resized when the window is resized, but it doesn't keep its aspect ratio and gets deformed.
Does anyone know how I can do this?
Closest thing I could come up with:
Assuming the picture is 440x350 and sits 85 pixels from the top of the app window (left: 0):
GuiSize:
    if (A_GuiWidth < A_GuiHeight)
    {
        GuiControl, MoveDraw, ProductImage, % "w" . (A_GuiWidth - 20) . " h" . (350/440) * (A_GuiWidth - 20)
    }
    else
    {
        GuiControl, MoveDraw, ProductImage, % "w" . (440/350) * (A_GuiHeight - 85) . " h" . (A_GuiHeight - 85)
    }
return
(the 20 is for window padding)
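An alternative sketch that keeps the aspect ratio by always using the smaller of the two possible scale factors (it assumes the same 440x350 picture and the same 20/85 pixel offsets, and needs AutoHotkey 1.1.27+ for the built-in Min()):
GuiSize:
    ; pick the scale that fits both the available width and the available height
    scale := Min((A_GuiWidth - 20) / 440, (A_GuiHeight - 85) / 350)
    GuiControl, MoveDraw, ProductImage, % "w" . Round(440 * scale) . " h" . Round(350 * scale)
return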