I'm creating a simple program in Racket that imports two bitmaps and exports them as a single image. I'm having an issue with the pixel density on my MacBook because the images are non-retina. For my image processing, I'm using the 2htdp/image library.
Is there a way to set the pixel density of my Racket program?
The line that exports the image is:
(save-image final-image "final.png" WIDTH HEIGHT)
I'm trying not to include too much information, but if there's anything I can add (more code, for example) to make my question more clear, please let me know.
P.S: Processing approaches this problem in the following way:
https://processing.org/reference/displayDensity_.html
This is not a complete answer, but perhaps it will help you to get started.
First, you say "the images are non-retina". This might be a misconception.
The word "retina" is used to describe the resolution of the screen, you happen
to be using (roughly the screen is "retina" if the screen pixels are so small your
eye can't see individual dots).
However, my guess is that when you draw the loaded image on screen, it
is shown at half the size, you are expecting?
The reason for that is found in section "1.8 Screen Resolution and Text Scaling"
in the docs for gui has the following to say:
On Mac OS, screen sizes are described to users in terms of drawing
units. A Retina display provides two pixels per drawing unit, while
drawing units are used consistently for window sizes, child window
positions, and canvas drawing. A “point” for font sizing is equivalent
to a drawing unit.
One solution is to scale the loaded image to double the size:
(scale 2 the-loaded-image)
before drawing it.
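For instance, here is a minimal sketch of how that could look over the whole pipeline, assuming the two bitmaps live in "left.png" and "right.png" and should end up side by side (the file names, the beside layout, and the factor 2 are assumptions, not taken from your program):
#lang racket
(require 2htdp/image)
;; Load the two bitmaps (file names are placeholders).
(define left-img  (bitmap/file "left.png"))
(define right-img (bitmap/file "right.png"))
;; Scale each loaded bitmap by 2, as suggested above, to compensate
;; for Retina's two pixels per drawing unit.
(define final-image
  (beside (scale 2 left-img)
          (scale 2 right-img)))
;; The exported PNG now has doubled pixel dimensions as well.
(save-image final-image "final.png")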
Finally, how can a program know whether the current display is a retina display?
The function get-display-backing-scale is what you need:
(require racket/gui/base)
(get-display-backing-scale)
It will return 2.0 if the screen is retina, otherwise 1.0.
If you have more than one monitor, look up the function
in the docs to see the details on handling that.
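As a sketch of how the two pieces fit together (assuming a single monitor and that scaling by the backing scale is the behaviour you want):
(require 2htdp/image
         (only-in racket/gui/base get-display-backing-scale))
;; Hypothetical helper (not part of either library): scale an image by
;; the main display's backing scale, so the same code behaves sensibly
;; on Retina (backing scale 2.0) and non-Retina (1.0) screens.
;; Assumes a single monitor; see the docs for selecting another one.
(define (scale-for-display img)
  (scale (get-display-backing-scale) img))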
Related
I'm using Mapbox's static API in a project. I have managed to load maps covering the same width and height in terms of latitude and longitude regardless of the resolution, so that users see the same area regardless of their screen resolution. The problem is that at larger resolutions, features and especially text appear much smaller relative to the image. For example, these two maps look very similar, except for the size of the text (and some other details, like the thickness of lines):
https://api.mapbox.com/styles/v1/mapbox/outdoors-v11/static/0.63189425,46.195750258333334,14.3/540x285#2x?access_token=ACCESS_TOKEN
https://api.mapbox.com/styles/v1/mapbox/outdoors-v11/static/0.63189425,46.195750258333334,13.15/240x126#2x?access_token=ACCESS_TOKEN
Is there a way to compensate for this and have the text on the larger image render larger (in pixel terms), with thicker lines? The result would be that, say, two 6-inch screens display text at the same real-world size (in centimeters) regardless of their pixel count.
I have looked into layers and filters, but it does not seem like there is a straightforward way of achieving this. It looks like maybe designing new maps would be the way to go, but I'm using the default ones and I would not know where to start.
Thank you.
I'm a bit confused by the premise of your question here. The API's #2x parameter is meant to toggle resolution and should serve exactly the purpose you describe. The reason for the different amounts of label information in the images you've shared is that you used different zoom values (13.15 vs. 14.3), and the labels in Mapbox core styles are zoom-dependent, meaning they change based on the zoom value used to generate the map.
With a fixed image width and no #2x parameter:
/styles/v1/mapbox/outdoors-v11/static/0.63189425,46.195750258333334,14.3/540x285?access_token=ACCESS_TOKEN
yields a standard-resolution image (not reproduced here).
With a fixed image width and a #2x parameter:
/styles/v1/mapbox/outdoors-v11/static/0.63189425,46.195750258333334,14.3/540x285#2x?access_token=ACCESS_TOKEN
yields the same map rendered at doubled pixel density (not reproduced here).
⚠️ Disclaimer: I currently work for Mapbox ⚠️
In Unity, when writing shaders,
is it possible for the shader itself to "know" what the screen resolution is, and indeed for the shader to control single physical pixels?
I'm thinking only of the case of writing shaders for 2D objects (such as for UI use, or at any event with an ortho camera).
(Of course, normally to show a physical-pixel-perfect PNG on screen, you simply have a, say, 400-pixel PNG and arrange the scaling so that the shader happens to be drawing to precisely 400 physical pixels. What I'm wondering about is a shader that just draws, for example, a physical-pixel-perfect black line: it would have to "know" exactly where the physical pixels are.)
There is a ShaderLab built-in value called _ScreenParams.
_ScreenParams.x is the screen width in pixels.
_ScreenParams.y is the screen height in pixels.
Here's the documentation page: http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
I don't think this is going to happen. Your rendering is tied to the currently selected video mode, and it doesn't even have to match your physical screen size (if that is what you mean by pixel-perfect).
The closest you are going to get is to render at the recommended resolution for your display device and use a pixel shader to shade the entire screen. That way, one 'physical pixel' is going to be roughly equal to one actual rendered pixel. Other than that, it is impossible to associate physical pixels (that is, your display's) with rendered ones.
This is unless, of course, I somehow misunderstood your intentions.
is it possible for the shader itself to "know" what the screen resolution is
I don't think so.
and indeed for the shader to control single physical pixels?
Yes. Pixel shaders know what pixel they are drawing and can also sample other pixels.
First of all, please define 'Pixel perfect' and 'Physical pixel'.
If by physical pixel you mean your display's pixel (monitor, laptop display, any other hardware you might use) then you are out of luck. Shaders don't operate on those, they operate on their own 'abstract pixels'.
You can think about it in this way:
Your graphics are rendered into a picture with some configurable resolution (say 800x600 pixels). You can still display this picture on a 1920x1080 display in full screen, no problem, though it will look crappy. This is what happens with the actual display and video-card rendering. What determines the actual number of rendered pixels is your video mode (the picture's resolution in the above example), and the physical pixels are your display's pixels. When rendering, you can only operate on the first kind.
This leads us to the conclusion that when you render the graphics at exactly your display's native resolution, you can safely say that you did indeed render 'physical pixels'.
In Unity, you can pass the renderer some external data; this might include your current screen resolution (for example as a Vector2, see this).
However, you most likely don't need any of this, since pixel shaders already operate on pixels (rendered pixels, determined by your current video mode). That also means that if you use a resolution lower than your native one, you most likely will not be able to control a single physical pixel.
Hope it helped.
The problem I am trying to solve is:
I have 6 stripes which I need to move at different speeds. A 2048*2048 texture sheet is not enough, so to deal with this I split the image in two (top and bottom halves); each stripe is exactly 960*640 pixels. The general algorithm is to allocate a top and a bottom half for each stripe and move them at each frame, making sure to reposition them at the top of the screen when they exit the user's view. My class implementation, a direct modification of ParallaxBackground from the ShootEmUp example in this book, gives too many memory warnings when run and analyzed using Instruments. See the analysis below:
OpenGL analysis: (screenshot not reproduced here)
Activity Monitor: (screenshot not reproduced here)
What concerns me is the high number of memory warnings in both analyses (24 and 5 respectively).
EDIT: Below you can find a comment which explains the solution
2048x2048 is the maximum possible texture size for new devices. You can read about it in the Apple OpenGL ES Programming Guide.
Remember that Cocos2D stores textures with width and height rounded up to a power of 2. So if your image is 960x640 pixels, it'll use memory as if it were 1024x1024 pixels.
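To put rough numbers on that (a quick back-of-the-envelope calculation, written in Racket just for the arithmetic; it assumes the usual 4 bytes per pixel of RGBA8888):
(* 960 640 4)    ; = 2,457,600 bytes, about 2.3 MB of actual pixel data
(* 1024 1024 4)  ; = 4,194,304 bytes, the 4 MB a power-of-two texture occupies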
Also remove textures you no longer need (and do so when a memory warning comes in).
[[CCTextureCache sharedTextureCache] removeUnusedTextures];
You can also use images in a lower quality to save memory.
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
Whenever you need to load higher quality images or gradients you can put it back.
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];
Following the comment-suggestion on my question by LearnCocos2d, the correct solution was to simply reboot the device (see his comment above).
Thanks!
I have an image with dimensions of 5534 × 3803 and a file size of 2.4 MB. The UIView reference notes that:
"In iOS 3.0 and later, views are no longer restricted to this maximum
size but are still limited by the amount of memory they consume."
When the image loads, it lags for half a second, then slides in. The image sits in the UIImageView at 1024x704, but can be scaled up to 4x that size for the purpose of my app.
Are you able to preload the image in the AppDelegate? Or is there another way of working around having such a large image?
Thanks
EDIT: The scaling is done via UIPinchGestureRecognizer, and scales up and down (from x1 to x4) based on the image's center point. There is no panning of the image when zoomed in.
Personally, I would try to write a tile-based system (think Google Maps) that slices your big image into a grid of small images to avoid loading in that gigantic image all at once into RAM. I don't really know what your user interactions are for this image, or whether the images are changing or baked into your project, but I'd assume you can let users scroll around since that image is bigger than any iOS screen. With a tile-based system, you only load the images that are on-screen. CATiledLayer is an Apple class for doing just such a thing. That's probably what you want to look into.
See this StackOverflow question for some different approaches. The accepted answer uses code from Apple's sample PhotoScroller project, which may work for your needs and uses CATiledLayer.
This ScrollViewSuite Apple sample code might also help you on your way (check out the Tiling code).
I'm writing a small program to convert a standard-definition 4:3 video to a hi-def 16:9 video, and I'm experiencing a serious stretching effect, as expected I suppose (though I didn't think about it until my code started working). Anyhow, the only way I can think of getting around this stretching effect and still filling the whole 16:9 screen is to cut off the top and bottom of the image.
1) So my question is, when converting from SD to HD, do I have to lose parts of the image in order to fill the whole screen without any stretching effects?
2) Same question for converting from HD to SD.
I'm new to image processing; are there any popular approaches to reducing the stretching in these kinds of operations? Is there a smarter approach to this problem than just cutting off parts of the image or introducing black bars?
Thanks in advance for all your help!
Other than the obvious methods of cropping, letterboxing, and pillarboxing, which either lose image data or necessitate potentially-undesirable black bars, there is also adaptive image resizing. Basically, the intent of these techniques is to be able to create a version of an image with an arbitrary aspect ratio, without losing the essential characteristics of the image or distorting it. One technique is called seam carving, and can be seen here.
If you'd like to test the technique out on some images of your own, the functionality is included in recent versions of ImageMagick, as explained here.
Reduction of quality or loss of content is always a problem in resizing images or video. Generally you scale the image in one direction, and either trim or pad the other direction.
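As a sketch of that arithmetic for the 4:3 to 16:9 case (written in Racket, purely illustrative; the 720x540 square-pixel source frame is an assumption, so plug in your own dimensions):
;; Keep the full width and trim rows until the frame is 16:9.
(define src-w 720)
(define src-h 540)
(define kept-h  (* src-w 9/16))    ; 405 rows survive
(define trimmed (- src-h kept-h))  ; 135 rows are cut, split between top and bottom
Going the other way (16:9 to 4:3), the same arithmetic gives either the number of columns to trim or the height of the bars if you pad instead.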
On TV it is common to cut off the left and right sides of a 16:9 frame to put it on a 4:3 screen, and to add black side bars to go from 4:3 to 16:9. TV editors don't cut off the top and bottom of a 4:3 frame to fit it on 16:9 because there are almost always important parts of the scene there. The far left and far right of a 16:9 frame don't usually have important elements, although in some cinematic scenes losing the sides makes a huge difference.