I have a set of MIDI files whose resolution I need to convert from 96 to 480. The tempo of these MIDI files was set with their resolution at 96. If I convert their resolution to 480 and adjust the note timings to fit the new resolution, do I also need to do some kind of conversion of the BPM to properly denote the timings at the new, higher resolution?
Resolution and Tempo are independent.
Resolution is the number of MIDI ticks (clocks) per quarter note.
Tempo is the number of quarter notes per minute.
Resolution is set once, in the MIDI file header; 96 is a common value.
Tempo can be changed multiple times during playback via Set Tempo meta events.
Your BPM should not change when you increase the resolution as you described above.
See the official specs at https://docs.google.com/viewer?url=https://www.midi.org/component/edocman/rp-001-v1-0-standard-midi-files-specification-96-1-4-pdf/fdocument
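For illustration, here is a minimal sketch of the conversion using the third-party mido library (an assumption on my part; the question does not name a toolkit). Only the delta times and the header's ticks-per-quarter-note are scaled; the Set Tempo messages stay as they are:
import mido

OLD_PPQ, NEW_PPQ = 96, 480
scale = NEW_PPQ // OLD_PPQ  # exactly 5, so tick values stay integers

mid = mido.MidiFile('input.mid')  # hypothetical file name
assert mid.ticks_per_beat == OLD_PPQ

for track in mid.tracks:
    for msg in track:
        msg.time *= scale  # delta times are in ticks, so they scale
        # set_tempo events carry microseconds per quarter note, which is
        # resolution-independent, so they are left untouched

mid.ticks_per_beat = NEW_PPQ
mid.save('output.mid')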
The documentation of Tesseract says:
Make sure there are a minimum number of samples of each character. 10
is good, but 5 is OK for rare characters.
There should be more samples of the more frequent characters - at
least 20.
I assume the last sentence means that at least 20 samples of the more frequent characters would be OK. But what would count as a good frequency?
Also:
Tesseract works best on images which have a DPI of at least 300 dpi, so it may be beneficial to resize images. For more information see the FAQ.
Why does Tesseract work best at 300 DPI? Isn't DPI just a setting that tells at what scale an image will be printed? Why DPI and not just a minimum height in pixels?
Also, what would be a good height for a character in pixels?
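For context, the Tesseract documentation suggests accuracy drops once the x-height of the text falls below roughly 20 pixels (about 10 pt at 300 DPI). Here is a rough sketch of upscaling with that target, assuming Pillow and pytesseract (my choice, not something the documentation prescribes):
from PIL import Image
import pytesseract

img = Image.open('scan.png')  # hypothetical input image

measured_x_height = 12  # hypothetical: measured lowercase letter height in pixels
target_x_height = 20    # roughly 10 pt at 300 DPI

if measured_x_height < target_x_height:
    factor = target_x_height / measured_x_height
    new_size = (round(img.width * factor), round(img.height * factor))
    img = img.resize(new_size, Image.LANCZOS)

print(pytesseract.image_to_string(img))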
I have added an image reference in my readme.md on GitHub. The picture is a portrait-format photo, but when I view it on the GitHub page the picture is rotated.
I have tried cloning the repo to a new location to confirm that the picture is indeed still portrait, as expected, in the repo.
The image part of the readme.md:
Here is a picture of the hardware setup. ![picture of the hardware setup](HelloButtonModule.jpg)
This is the affected GitHub repo.
Update
Now I am really baffled. I tried to simplify the problem in a new repo, but there the picture shows up unrotated, as (originally) expected.
Update
I have created a repo with an exact copy of the picture. In that repo the picture is rotated.
You could try simply opening the file and then re-saving it. You may need to rotate it 360 degrees before you save; however, this should work.
If you are on a Debian-based distro, you can use exiftran.
sudo apt-get install exiftran
exiftran -ai *.jpg
This will automatically rotate all the .jpg files based on their exif data.
I ran
git clone https://github.com/steenhulthin/HelloButtonModule/
cd HelloButtonModule/
exif HelloButtonModule.jpg
and this produced:
EXIF tags in 'HelloButtonModule.jpg' ('Motorola' byte order):
--------------------+----------------------------------------------------------
Tag |Value
--------------------+----------------------------------------------------------
Image Width |4128
Image Length |2322
Manufacturer |SAMSUNG
Model |GT-I9505
Orientation |Top-left
X-Resolution |72
Y-Resolution |72
Resolution Unit |Inch
Software |I9505XXUDMH8
Date and Time |2013:10:16 23:22:57
YCbCr Positioning |Centred
Image Width |512
Image Length |288
Compression |JPEG compression
Orientation |Right-top
X-Resolution |72
Y-Resolution |72
Resolution Unit |Inch
Exposure Time |1/33 sec.
F-Number |f/2.2
Exposure Program |Normal programme
ISO Speed Ratings |100
Exif Version |Exif Version 2.2
Date and Time (Origi|2013:10:16 23:22:57
Date and Time (Digit|2013:10:16 23:22:57
Components Configura|Y Cb Cr -
Shutter Speed |5.06 EV (1/33 sec.)
Aperture |2.28 EV (f/2.2)
Brightness |2.44 EV (18.56 cd/m^2)
Exposure Bias |0.00 EV
Maximum Aperture Val|2.28 EV (f/2.2)
Metering Mode |Centre-weighted average
Light Source |Unknown
Flash |Flash did not fire
Focal Length |4.2 mm
Maker Note |98 bytes undefined data
User Comment |METADATA-START
FlashPixVersion |FlashPix Version 1.0
Colour Space |sRGB
Pixel X Dimension |4128
Pixel Y Dimension |2322
Sensing Method |One-chip colour area sensor
As you can see, the Orientation tag says Top-left. This means the EXIF data won't make a difference to the rotation, i.e. the image will appear the same on your computer and on GitHub.
I then ran
git clone https://github.com/steenhulthin/githubreadmeimagerotation2
cd githubreadmeimagerotation2/
exif HelloButtonModule.jpg
And I got:
EXIF tags in 'HelloButtonModule.jpg' ('Intel' byte order):
--------------------+----------------------------------------------------------
Tag |Value
--------------------+----------------------------------------------------------
Image Width |4128
Image Length |2322
Manufacturer |SAMSUNG
Model |GT-I9505
Orientation |Right-top
X-Resolution |72
Y-Resolution |72
Resolution Unit |Inch
Software |I9505XXUDMH8
Date and Time |2013:10:16 23:22:57
YCbCr Positioning |Centred
Image Width |512
Image Length |288
Compression |JPEG compression
Orientation |Right-top
X-Resolution |72
Y-Resolution |72
Resolution Unit |Inch
Exposure Time |1/33 sec.
F-Number |f/2.2
Exposure Program |Normal programme
ISO Speed Ratings |100
Exif Version |Exif Version 2.2
Date and Time (Origi|2013:10:16 23:22:57
Date and Time (Digit|2013:10:16 23:22:57
Components Configura|Y Cb Cr -
Shutter Speed |5.06 EV (1/33 sec.)
Aperture |2.28 EV (f/2.2)
Brightness |2.44 EV (18.56 cd/m^2)
Exposure Bias |0.00 EV
Maximum Aperture Val|2.28 EV (f/2.2)
Metering Mode |Centre-weighted average
Light Source |Unknown
Flash |Flash did not fire
Focal Length |4.2 mm
Maker Note |98 bytes undefined data
User Comment |METADATA-START
FlashPixVersion |FlashPix Version 1.0
Colour Space |sRGB
Pixel X Dimension |4128
Pixel Y Dimension |2322
Sensing Method |One-chip colour area sensor
Here the Orientation tag says Right-top, which means the right-top corner of the stored image data belongs in the top-left corner when the image is displayed. GitHub does not honor this information, so your image is displayed incorrectly.
I then ran exiftran -ai HelloButtonModule.jpg and this fixed the problem. There is a fork here https://github.com/texasflood/githubreadmeimagerotation2 which shows the correct rotation for the image.
If you are on Windows, IrfanView might work, courtesy of this question: https://superuser.com/questions/36645/how-to-rotate-images-automatically-based-on-exif-data
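If you would rather script the fix, here is a small sketch using the Pillow library (my own assumption; it needs Pillow 6.0 or later for ImageOps.exif_transpose) that bakes the orientation into the pixels before committing:
from PIL import Image, ImageOps

img = Image.open('HelloButtonModule.jpg')
img = ImageOps.exif_transpose(img)  # apply the EXIF Orientation tag to the pixel data
img.save('HelloButtonModule.jpg')   # the saved file no longer depends on the tag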
I think this is caused by GitHub's lack of support for the EXIF "Orientation" tag.
GitHub shows the image data as they are stored in the JPEG file, which is the orientation in which they were captured by the camera's photo sensor. Additionally, the JPEG file includes an EXIF "Orientation" tag containing the value "right, top", which indicates that the image data are not to be displayed as stored, but that the right side should actually be up. Apparently, GitHub does not honor this tag.
The image in your second repository is not identical to the first one, but seems to have been edited to add the red arrow and text. My guess is that the editor interpreted the "Orientation" tag during loading, and then saved the image data in rotated form and with an "Orientation" tag value of "top, left".
For more information, see e.g. JPEG Rotation and EXIF Orientation.
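If you want to check the tag yourself, a quick sketch (again assuming Pillow 6.0+; 0x0112 is the standard tag ID for Orientation):
from PIL import Image

img = Image.open('HelloButtonModule.jpg')
print(img.getexif().get(0x0112))  # a value of 6 corresponds to "right, top"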
I still don't understand why this happens (@A.Donda's explanation sounds plausible), but I found a solution.
I resized the picture to 50% of the original, and the picture is no longer rotated.
I would still be happy to know if there are alternatives to resizing though.
I have read a lot about the BMP file format structure, but I still cannot work out the real meaning of the fields "biXPelsPerMeter" and "biYPelsPerMeter". I mean, in a practical way, how are they used or how can they be utilized? Any example or experience? Thanks a lot.
biXPelsPerMeter
Specifies the horizontal print resolution, in pixels per meter, of the target device for the bitmap.
biYPelsPerMeter
Specifies the vertical print resolution.
It's not very important. You can leave them at 2835; it's not going to ruin the image.
(72 DPI × 39.3701 inches per meter yields 2834.6472.)
Think of it this way: the image bits within the BMP structure define the image using that much data (that much information describes the image), but that information must then be translated to a target device, using a measuring system that indicates its applied resolution in practical use.
For example, if the BMP is 10,000 pixels wide and 4,000 pixels high, that tells you how much raw detail exists within the image bits. However, that image information must then be applied to some target; the relationship between the pixel dimensions and the DPI of the target determines the applied resolution.
If it were printed at 1000 dpi it would only give you a 10" x 4" image, but one with extremely high detail to the naked eye (more pixels per square inch). By contrast, if it's printed at only 100 dpi, you'll get a 100" x 40" image with low detail (fewer pixels per square inch), yet both contain the same overall number of bits. You can actually scale an image without scaling any of its internal image data merely by changing the dpi to non-standard values.
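A tiny worked sketch of that arithmetic (plain Python; the helper names are mine):
INCHES_PER_METER = 39.3701

def dpi_to_pels_per_meter(dpi):
    return round(dpi * INCHES_PER_METER)  # 72 dpi -> 2835

def printed_size_inches(width_px, height_px, dpi):
    return width_px / dpi, height_px / dpi

print(dpi_to_pels_per_meter(72))               # 2835
print(printed_size_inches(10000, 4000, 1000))  # (10.0, 4.0)   -- small print, high detail
print(printed_size_inches(10000, 4000, 100))   # (100.0, 40.0) -- large print, low detail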
Also, using 72 dpi is a throwback to older printing conventions (https://en.wikipedia.org/wiki/Twip), which are not really relevant going forward (except to maintain compatibility with standards), as modern hardware devices often use other values for their fundamental relationships to image data. For video screens, for example, Macs use 72 dpi as the default and Windows uses 96 dpi; others are similar. In theory you can set it to whatever you want, but be warned that not all software honors the internal setting and will instead assume a particular size. This can affect the way images are scaled within an app, even though the actual image data within hasn't changed.
I have an image taken by my iPod touch 4 that's 720x960. In the simulator, calling CGImageGetBytesPerRow() on the image returns 2880 (720 * 4 bytes), which is what I expected. However, on the device CGImageGetBytesPerRow() returns 3840, the "bytes per row" along the height. Does anyone know why there is different behavior, even though the image I'm calling CGImageGetBytesPerRow() on has a width of 720 and a height of 960 in both cases?
Thanks in advance.
Bytes per row can be anything as long as it is sufficient to hold the image bounds, so it's best not to assume it will be the minimum needed to fit the image.
I would guess that on the device, bytes per row is dictated by some optimisation or hardware consideration: perhaps an image buffer that does not have to be changed if the orientation is rotated, or an image sensor that transfers extra bytes of dead data per row which are then ignored rather than doing a second transfer into a buffer with minimum bytes per row, or some other reason that would only make sense if we knew the inner workings of these devices.
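To make the practical consequence concrete, here is a language-agnostic sketch (plain Python with made-up values, not the CGImage API): always index pixel data by the reported bytes-per-row, never by width * bytes-per-pixel:
width, height = 720, 960
bytes_per_pixel = 4
bytes_per_row = 3840  # what the device reported, not 720 * 4 = 2880

raw = bytearray(bytes_per_row * height)  # stand-in for the raw bitmap buffer

def pixel_offset(x, y):
    # Correct: each row starts at a multiple of the stride (bytes per row),
    # which may include padding beyond width * bytes_per_pixel.
    return y * bytes_per_row + x * bytes_per_pixel

# Wrong on the device: y * width * bytes_per_pixel + x * bytes_per_pixel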
It may be slightly different because of the internal memory allocation: "The number of bytes used in memory for each row of the specified bitmap image (or image mask)."
Consider using NSBitmapImageRep for some special tasks.
Using GTK, how do I query the current screen's dpi settings?
The current accepted answer is for PHPGTK, which feels a bit odd to me. The pure GDK library has this call: gdk_screen_get_resolution(), which sounds like a better match. Haven't worked with it myself, don't know if it's generally reliable.
The resolution height and width returned by the screen include the full multi-monitor size (e.g. the combined width and height of the display buffer used to render a multi-monitor setup). I haven't checked whether the mm (millimetre width/height) calls return the actual physical sizes, but if they report combined physical sizes, then the DPI computed by dividing one by the other would be meaningless, e.g. for drawing a box on screen that can be measured with a physical ruler.
See GdkScreen. You should be able to compute it using get_height and get_height_mm, or get_width and get_width_mm.
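A small sketch pulling these calls together from Python via PyGObject (my assumption; the answers above only name the C-level GDK API, and GdkScreen is deprecated in newer GTK versions):
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gdk, Gtk

Gtk.init()  # make sure GDK is initialised before querying the screen
screen = Gdk.Screen.get_default()

print(screen.get_resolution())  # logical DPI setting, or -1 if not set

# DPI derived from the reported physical size (25.4 mm per inch); this is only
# meaningful if the driver reports correct millimetre sizes for the screen.
dpi_x = screen.get_width() / (screen.get_width_mm() / 25.4)
dpi_y = screen.get_height() / (screen.get_height_mm() / 25.4)
print(dpi_x, dpi_y)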