Free satellite image for image segmentation testing - image-segmentation

I need some free satellite images (panchromatic) to test some segmentation algorithms.
I have searched the web, but I didn't find any resources or websites that provide free satellite images.
Thanks.

You can fill in this form to request free samples of IKONOS imagery from GeoEye:
http://www.geoeye.com/corpsite/solutions/learn-more/sample-imagery.aspx
Alternatively, GLCF has four IKONOS sample images that can be used. You can download them here:
http://glcf.umd.edu/data/ikonos/
Note: IKONOS-2 has four multispectral bands at 4 m resolution and one panchromatic band at 1 m resolution. You can use just the panchromatic band if that is your requirement.

General search engines for satellite granules
You can search for and download Landsat images through the EarthExplorer or GloVis services. You'll find several collections there.
Landsat panchromatic
Landsat ETM+ provides a panchromatic image in band 8 at 15 m resolution. One example here, from WRS-2 path 29, row 30.
LDCM mission
Be aware that scenes from the Landsat LDCM mission will be available soon on the same websites, by the end of May.

Related

Feed multiple images to CoreML image classification model (swift)

I know how to use the CoreML library to train a model and use it. However, I was wondering whether it's possible to feed the model more than one image so that it can identify the subject with better accuracy.
The reason is that I'm trying to build an app that classifies histological slides; however, many of them look quite similar, so I thought maybe I could feed the model images at different magnifications in order to make the identification. Is it possible?
Thank you,
Mehdi
Yes, this is a common technique. You can give Core ML the images at different scales or use different crops from the same larger image.
A typical approach is to take 4 corner crops and 1 center crop, and also horizontally flip these, so you have 10 images total. Then feed these to Core ML as a batch. (Maybe in your case it makes sense to also vertically flip the crops.)
To get the final prediction, take the average of the predicted probabilities for all images.
Note that in order to use images at different sizes, the model must be configured to support "size flexibility". And it must also be trained on images of different sizes to get good results.
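If it helps, here is a minimal Swift sketch of that idea using Vision to drive the classifier (the model class name HistologyClassifier is hypothetical, and looping over the crops is just one way to do the averaging, not the only one):

```swift
import UIKit
import Vision
import CoreML

/// Run one CGImage through a Vision-wrapped Core ML classifier and
/// return its label -> confidence map.
func classify(_ image: CGImage, with model: VNCoreMLModel) throws -> [String: Double] {
    let request = VNCoreMLRequest(model: model)
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    var scores: [String: Double] = [:]
    for case let observation as VNClassificationObservation in request.results ?? [] {
        scores[observation.identifier] = Double(observation.confidence)
    }
    return scores
}

/// Average the predictions over several crops / magnifications of the same slide.
/// `HistologyClassifier` stands in for whatever model class Xcode generates for you.
func averagedPrediction(for crops: [CGImage]) throws -> (label: String, confidence: Double)? {
    let coreMLModel = try HistologyClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    var summed: [String: Double] = [:]
    for crop in crops {
        for (label, confidence) in try classify(crop, with: visionModel) {
            summed[label, default: 0] += confidence
        }
    }
    // Mean probability per class, then pick the highest one.
    let averaged = summed.mapValues { $0 / Double(crops.count) }
    return averaged.max(by: { $0.value < $1.value })
        .map { (label: $0.key, confidence: $0.value) }
}
```

You can also push the crops through MLModel as a single batch instead of looping; the averaging step stays the same.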

Compress UIImage by reducing bits per pixel in Swift [duplicate]

This question already has an answer here: Convert an image to a 16bit color image (1 answer). Closed 4 years ago.
I'm trying to obtain a PNG image with a resolution of 512 px × 512 px that is smaller than 100 kB.
At the moment the files are around 350 kB. I'm trying to reduce the file size, and one way I was thinking of is to reduce the color depth.
This is the information of the PNG images:
bits per component -> 8
bits per pixel -> 32
I wrote some code to create the CGContext with a different bits-per-pixel value, but I don't think that's the right way.
I don't want to use UIImage.jpegData(compressionQuality:) since I need to maintain the alpha channel.
I already found some code in Objective-C, but that didn't help. I'm looking for a solution in Swift.
You would need to decimate the original image somehow, e.g. by zeroing some number of the least significant bits or by reducing the resolution. However, that completely defeats the purpose of using PNG in the first place, which is intended for lossless image compression.
If you want lossy image compression, where decimation followed by PNG is one approach, then you should instead use JPEG, which makes much more efficient use of the bits to reproduce a psycho-visually highly similar image. More efficient than anything you or I might come up with as a lossy pre-processing step to PNG.
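If you do want to try the decimation route before PNG, a rough Swift sketch (one possible approach, not necessarily the best one) is to zero the low bits of the R, G and B components and re-encode; the more uniform data tends to deflate better and the alpha channel is left intact:

```swift
import UIKit

/// Zero the `bitsToDrop` least significant bits of each R, G and B component,
/// then re-encode as PNG. Alpha is preserved, but the operation is lossy.
func decimatedPNGData(from image: UIImage, bitsToDrop: Int = 4) -> Data? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4

    guard let context = CGContext(
        data: nil,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: bytesPerRow,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    ) else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let buffer = context.data else { return nil }
    let mask = UInt8(truncatingIfNeeded: ~((1 << bitsToDrop) - 1))
    let pixels = buffer.bindMemory(to: UInt8.self, capacity: bytesPerRow * height)
    for i in 0..<(bytesPerRow * height) where i % 4 != 3 {
        pixels[i] &= mask   // quantize R, G, B; leave the alpha byte untouched
    }

    guard let quantized = context.makeImage() else { return nil }
    return UIImage(cgImage: quantized).pngData()
}
```

Whether that gets a 512 × 512 image under 100 kB depends entirely on the content; an indexed-color PNG or, as said above, a lossy format is often the better trade-off.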

guidedFilter using OpenCV

I tried to use the guided filter (in OpenCV it's named guidedFilter) to make an edge-preserving filter, and the guide image I used is the same as the input image. I am not familiar with the choice of guide image, so could anyone give me some advice on it? Thanks!
You can use practically anything as a guide. In ray tracing it usually makes sense to use normal information and depth information as a guide, so the guide would be an RGBD image. OpenCV 3.0 seems to have a limit of 3 channels, which means you can only use an RGB-shaped image as a guide, where the three channels can hold any kind of information, not just red, green and blue color.
Here is one of the original papers on Guided Image Filtering if you do have some technical knowledge: He: Guided Image Filtering

Find the size of 1 pixel in my CMOS camera

I have a small problem with finding the pixel size of an image. I need to find the size of nano- and micro-particles in my black-and-white image. I used regionprops to get the area, and from that the diameter, so now I know the value in pixels. How do I convert it to the micrometer or nanometer scale? Do I take into account the sensor's pixel size (6.5 µm × 6.5 µm) of my camera?
I use MATLAB for image processing.
Thank you
There is a function called imfinfo which returns a struct. In this struct you may find three fields (it depends on the encoder used for the image format) called XResolution, YResolution and ResolutionUnit. Using these three fields you can easily get the pixel size; for example, if XResolution=10, YResolution=10 and ResolutionUnit='meter', then each pixel is 10 cm on a side, i.e. 100 cm² (a bit unrealistic, I know :)).
I hope this helps and that your image file contains the XResolution and YResolution information in its header.
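If it helps, here is the same arithmetic as a small Swift sketch (a hypothetical helper rather than MATLAB, assuming square pixels and a resolution expressed in pixels per meter):

```swift
/// Convert a diameter measured in pixels to micrometers, given the image's
/// XResolution in pixels per meter (the value imfinfo would report).
/// Assumes square pixels; if your optics magnify the sample, divide by the
/// magnification as well.
func diameterInMicrometers(diameterInPixels: Double, pixelsPerMeter: Double) -> Double {
    let pixelWidthMeters = 1.0 / pixelsPerMeter        // width of one pixel in meters
    return diameterInPixels * pixelWidthMeters * 1e6   // meters -> micrometers
}

// The example from above: XResolution = 10 pixels per meter means each pixel
// is 10 cm on a side, i.e. 100 cm^2.
let pixelSideCm = 100.0 / 10.0                 // 10.0 cm
let pixelAreaCm2 = pixelSideCm * pixelSideCm   // 100.0 cm^2

// A 6.5 um sensor pixel at 1:1 magnification corresponds to roughly
// 153,846 pixels per meter (the 6.5 um pitch is taken from the question).
let diameterUm = diameterInMicrometers(diameterInPixels: 42, pixelsPerMeter: 1.0 / 6.5e-6)
```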

BMP image header - biXPelsPerMeter

I have read a lot about the BMP file format structure, but I still cannot grasp the real meaning of the fields "biXPelsPerMeter" and "biYPelsPerMeter". I mean, in a practical way, how are they used or how can they be utilized? Any example or experience? Thanks a lot.
biXPelsPerMeter
Specifies the horizontal print resolution, in pixels per meter, of the target device for the bitmap.
biYPelsPerMeter
Specifies the vertical print resolution.
It's not very important. You can leave them at 2835; it's not going to ruin the image.
(72 DPI × 39.3701 inches per meter yields 2834.6472)
Think of it this way: The image bits within the BMP structure define the shape of the image using that much data (that much information describes the image), but that information must then be translated to a target device using a measuring system to indicate its applied resolution in practical use.
For example, if the BMP is 10,000 pixels wide, and 4,000 pixels high, that explains how much raw detail exists within the image bits. However, that image information must then be applied to some target. It uses the relationship to the dpi and its target to derive the applied resolution.
If it were printed at 1000 dpi, it's only going to give you an image that is 10" x 4", but one with extremely high detail to the naked eye (more pixels per square inch). By contrast, if it's printed at only 100 dpi, you'll get an image that's 100" x 40" with low detail (fewer pixels per square inch), yet both of them contain the same overall number of bits. You can actually scale an image without scaling any of its internal image data by merely changing the dpi to non-standard values.
Also, using 72 dpi is a throwback to old printing conventions (https://en.wikipedia.org/wiki/Twip) which are not really relevant moving forward (except to maintain compatibility with standards), as modern hardware devices often use other values for their fundamental relationship to image data. For video screens, for example, Macs use 72 dpi as the default, Windows uses 96 dpi, and others are similar. In theory you can set it to whatever you want, but be warned that not all software honors the internal settings and will instead assume a particular size. This can affect the way images are scaled within an app, even though the actual image data within hasn't changed.
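As a quick sketch of the arithmetic above (helper names are my own), converting between DPI and the stored pels-per-meter value, and working out the printed size of the 10,000 × 4,000 pixel example, looks like this:

```swift
/// Convert a print resolution in dots per inch to the pels-per-meter value
/// stored in biXPelsPerMeter / biYPelsPerMeter (1 meter = 39.3701 inches).
func pelsPerMeter(dpi: Double) -> Int32 {
    Int32((dpi * 39.3701).rounded())
}

/// Printed size in inches for a given pixel count at a given print resolution.
func printedInches(pixels: Int, dpi: Double) -> Double {
    Double(pixels) / dpi
}

let header72 = pelsPerMeter(dpi: 72)                       // 2835, the common default
let highDetail = printedInches(pixels: 10_000, dpi: 1_000) // 10 inches, very dense
let lowDetail = printedInches(pixels: 10_000, dpi: 100)    // 100 inches, coarse
```

Changing only the DPI (and therefore biXPelsPerMeter) rescales the printed output without touching a single pixel of the image data, which is exactly the point made above.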