Using GTK, how do I query the current screen's dpi settings?
The current accepted answer is for PHP-GTK, which feels a bit odd to me. The pure GDK library has this call: gdk_screen_get_resolution(), which sounds like a better match. I haven't worked with it myself, so I don't know how reliable it is.
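A minimal sketch of that call, assuming GDK 2.10 or later and that gtk_init() has already run; note it returns -1 if no resolution has been set:

#include <gdk/gdk.h>

/* Dots per inch configured for the default screen, or -1.0 if unset. */
static gdouble query_screen_dpi(void)
{
    GdkScreen *screen = gdk_screen_get_default();
    return gdk_screen_get_resolution(screen);
}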
Note that the pixel height and width returned for the screen cover the full multi-monitor area (i.e. the combined width and height of the display buffer used to render a multi-monitor setup). I haven't checked whether the mm (millimeter width/height) calls return the actual physical sizes, but if they also report combined physical sizes, then the dpi computed by dividing one by the other would be meaningless for, e.g., drawing a box on screen that can be measured with a physical ruler.
See GdkScreen. You should be able to compute it using get_height and get_height_mm, or get_width and get_width_mm.
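A sketch of that computation (the multi-monitor caveat above applies, since these calls report the combined virtual screen):

#include <gdk/gdk.h>

/* 25.4 mm per inch: pixels * 25.4 / millimeters = pixels per inch */
static gdouble screen_dpi_x(GdkScreen *screen)
{
    return gdk_screen_get_width(screen) * 25.4 / gdk_screen_get_width_mm(screen);
}

static gdouble screen_dpi_y(GdkScreen *screen)
{
    return gdk_screen_get_height(screen) * 25.4 / gdk_screen_get_height_mm(screen);
}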
In Marks, I click Size and a slider pops up where I can adjust the size of a shape. But how do I control the size precisely? Is there some numeric property for controlling it exactly? I have two sheets showing something similar, and I want to display shapes of exactly the same size on both.
If you want to ensure sizes are the same across two worksheets, I'd suggest snapping the size setting to the center on both, as this is the easiest position to select consistently. You can then use a measure to set the size, if this is desirable, and the difference in size will then be relative on both worksheets.
There isn't a numerical value override for the size slider.
Ben is correct; there isn't yet a numerical value override for the slider. You can use parameters with Min/Max/Sum etc. and a variable to somewhat change the sizes, but they have to have multiple entries per line. It is unfortunate that Tableau still doesn't get that people want both a 'relative' sizing system that uses numbers from the dataset and a 'static' sizing system that allows shapes to be set to '11px' or something along those lines. Yes, you can kind of control that in the dashboard with a vertical container and 'fill entire box' etc., but that doesn't address the very real scenario where you want a user to be able to re-size on the fly. Just my two cents.
I ran into this today. Very annoying. Need to keep shapes the same size across all worksheets and therefore same on dashboard.
I have read a lot about the BMP file format structure, but I still cannot grasp the real meaning of the fields "biXPelsPerMeter" and "biYPelsPerMeter". I mean, in a practical way: how are they used, or how can they be utilized? Any example or experience? Thanks a lot.
biXPelsPerMeter
Specifies the horizontal print resolution, in pixels per meter, of the target device for the bitmap.
biYPelsPerMeter
Specifies the vertical print resolution, in pixels per meter.
It's not very important. You can leave them at 2835; it's not going to ruin the image.
(72 DPI × 39.3701 inches per meter yields 2834.6472)
Think of it this way: The image bits within the BMP structure define the shape of the image using that much data (that much information describes the image), but that information must then be translated to a target device using a measuring system to indicate its applied resolution in practical use.
For example, if the BMP is 10,000 pixels wide and 4,000 pixels high, that describes how much raw detail exists within the image bits. However, that image information must then be applied to some target, and the dpi value provides the relationship used to derive the applied physical size on that target.
If it were printed at 1000 dpi, it's only going to give you a 10" x 4" image, but one with extremely high detail to the naked eye (more pixels per square inch). By contrast, if it's printed at only 100 dpi, you'll get a 100" x 40" image with low detail (fewer pixels per square inch), yet both of them contain the same overall number of bits. You can actually scale an image without scaling any of its internal image data merely by changing the dpi to non-standard values.
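A tiny sketch of that arithmetic, using the numbers from the example above (39.3701 inches per meter is the same conversion the 2835 default comes from):

#include <stdio.h>

int main(void)
{
    const double INCHES_PER_METER = 39.3701;
    long   width_px  = 10000, height_px = 4000;
    double dpi       = 1000.0;

    /* BMP headers store resolution as pixels per meter, not DPI */
    long pels_per_meter = (long)(dpi * INCHES_PER_METER + 0.5);   /* 39370 */

    /* Physical size when rendered at that resolution: 10.0 x 4.0 inches */
    printf("%ld pels/m -> %.1f\" x %.1f\"\n",
           pels_per_meter, width_px / dpi, height_px / dpi);
    return 0;
}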
Also, using 72 dpi is a throwback to ancient printing techniques (https://en.wikipedia.org/wiki/Twip) which are not really relevant moving forward (except to maintain compatibility with standards), as modern hardware devices often use other values for their fundamental relationship to image data. For video screens, for example, Macs use 72 dpi as the default, while Windows uses 96 dpi; others are similar. In theory you can set it to whatever you want, but be warned that not all software honors the internal setting; some will instead assume a particular size. This can affect the way images are scaled within the app, even though the actual image data within hasn't changed.
We're creating an iOS photo app. In doing this, we have to create dynamically sized images up to about 2500x1600px. Once this image has been created, we want to draw smaller images on top of the big one reasonably quickly.
The problem, as we see it, is that it's impossible to get a context larger than the screen resolution. The call does not crash, but it returns a nil context. How can such a seemingly simple task be achieved?
Secondly, once this context is created, what is the fastest way to draw a small image at a given position on top of the big one?
Edit:
We found the solution. CGBitmapContextCreate returns nil because the width and height parameters were set as floats, not ints. Sometimes the solution is right there in front of you, and you're too blind to see it. Hopefully this answer can help other people that somehow have the same problem.
Make sure you specify integer widths and heights as the arguments to CGBitmapContextCreate; otherwise it returns nil. Beyond that, the size of the context should not matter, as long as you can malloc enough memory for it.
It should be possible to get a context for almost any bitmap for which you can allocate/malloc enough memory, in your case multiples of 2500x1600x4 bytes of ARGB pixels.
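A minimal sketch of both steps in plain Core Graphics (the 2500x1600 size is from the question; the function names are mine and error handling is elided):

#include <CoreGraphics/CoreGraphics.h>

/* Create a large offscreen ARGB context; note the integer dimensions. */
static CGContextRef make_large_context(void)
{
    size_t width = 2500, height = 1600;           /* ints, not floats */
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             8,           /* bits per component */
                                             width * 4,   /* bytes per row */
                                             space,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(space);
    return ctx;                                    /* NULL on failure */
}

/* Composite a smaller image onto the big context at (x, y). */
static void draw_small_image(CGContextRef ctx, CGImageRef small, CGFloat x, CGFloat y)
{
    CGRect dest = CGRectMake(x, y, CGImageGetWidth(small), CGImageGetHeight(small));
    CGContextDrawImage(ctx, dest, small);
}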
You might also want to look into using a CATiledLayer, where you would only have to draw into the tiles covered by the smaller image. You may have to tile anyway to support older devices, which are limited by the maximum texture size that fits into the GPU's texture cache.
I'm using FlowCoverView, an open source (and AppStore compliant) alternative to Apple's cover flow (you can find it here http://chaosinmotion.com/flowcover.m)
How can I change the tile size (or texture size, as it's called in the library), statically at least?
The code comes with a statically fixed 256x256 tile size. This is set using the TEXTURESIZE define and by hard-coding the number 256 within the code.
I tried to replace all 256 occurrences with the TEXTURESIZE define, and it works... as long as the define is set to 256! As soon as I put in another value, I get white 256x256 images in the flow view (I pass correctly dimensioned UIImages through the delegate, of course) :(
I can't find where else in the code this 256 value is being picked up. I repeat: all 256 occurrences were replaced by the TEXTURESIZE constant.
PS
Of course, the next step to improve this nice library will be to make the TEXTURESIZE a property, to be set at runtime...
Use this instead. It is much easier to implement than yours.
https://github.com/lucascorrea/iCarousel
Change TEXTURESIZE to 512, clean the build, and run it. (512 is still a power of two; OpenGL ES 1.x textures generally need power-of-two dimensions, which is probably why arbitrary values rendered as white.)
I'm using FreeType2 for font rendering, and I need to get a global bounding box for all fonts, so I can align them in a nice grid. I call FT_Set_Char_Size followed by extracting the global bounds using
int pixels_x = ::FT_MulFix((face->bbox.xMax - face->bbox.xMin), face->size->metrics.x_scale );
int pixels_y = ::FT_MulFix((face->bbox.yMax - face->bbox.yMin), face->size->metrics.y_scale );
return Size (pixels_x / 64, pixels_y / 64);
which works, but it's quite a bit too large. I also tried computing it with doubles (as described in the FreeType2 tutorial), but the results are practically the same. Even using just face->bbox.xMax results in bounding boxes that are too wide. Am I doing the right thing, or is there simply some huge glyph in my font (Arial.ttf in this case)? Any way to check which glyph is supposedly that big?
Why not calculate the min/max from the characters you are actually using in the string you want to align? Just loop through the characters and track the maximum and minimum extents. You can store these values after rendering, so you don't need to look them up every time you render the glyphs.
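A rough sketch of that approach (the function name and signature are mine; assumes FT_Set_Char_Size has already been called, error handling elided):

#include <ft2build.h>
#include FT_FREETYPE_H

/* Track the largest glyph extents over the characters actually used. */
static void measure_used_glyphs(FT_Face face, const char *text,
                                FT_Pos *max_w, FT_Pos *max_h)
{
    *max_w = *max_h = 0;
    for (const char *p = text; *p; ++p) {
        if (FT_Load_Char(face, (FT_ULong)(unsigned char)*p, FT_LOAD_DEFAULT))
            continue;                        /* skip glyphs that fail to load */
        FT_Glyph_Metrics *m = &face->glyph->metrics;   /* 26.6 fixed point */
        if (m->width  > *max_w) *max_w = m->width;
        if (m->height > *max_h) *max_h = m->height;
    }
    *max_w >>= 6;                            /* 26.6 -> integer pixels */
    *max_h >>= 6;
}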
I have a similar problem using freetype to render a bunch of text elements that will appear in a grid. Not all of the text elements are the same size, and I need to prerender them before I know where they would be laid out. The different sizes were the biggest problem when the heights changed, such as for letters with descending portions (like "j" or "Q").
I ended up using the height that is on the face (kind of like you did with the bbox). But like you mentioned, that value was much too big. It's supposed to be the baseline-to-baseline distance, but it appeared to be about twice that. So I took the easy way out and divided the reported height by 2, using that as a general height value. Most likely the height is too big because there are some characters in the font that go way high or way low.
I suppose a better way might be to loop through all the characters expected to be used, get their glyph metrics and store the largest height found. But that doesn't seem all that robust either.
Your code is right, and the result isn't actually too large: fonts contain many special symbols that are far bigger than ASCII characters, and the global bbox has to cover them. It's easy to traverse all the Unicode charcodes in the face to find those large glyphs.
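A quick sketch of that scan, using FreeType's charmap iteration (the threshold parameter is mine, in 26.6 fixed-point units):

#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

/* Print every character whose glyph exceeds the given extent. */
static void find_large_glyphs(FT_Face face, FT_Pos threshold)
{
    FT_UInt  gindex;
    FT_ULong charcode = FT_Get_First_Char(face, &gindex);
    while (gindex != 0) {
        if (FT_Load_Glyph(face, gindex, FT_LOAD_DEFAULT) == 0) {
            FT_Glyph_Metrics *m = &face->glyph->metrics;  /* 26.6 fixed point */
            if (m->width > threshold || m->height > threshold)
                printf("U+%04lX looks unusually large\n", charcode);
        }
        charcode = FT_Get_Next_Char(face, charcode, &gindex);
    }
}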
If you only need ASCII, my hack method is:
/* EM square size in pixels: units_per_EM (font units) scaled to 26.6, then to int */
FT_MulFix(face_->units_per_EM, face_->size->metrics.x_scale ) >> 6
FT_MulFix(face_->units_per_EM, face_->size->metrics.y_scale ) >> 6