A friend of mine inherited (don't ask about the specifics here) documentation built in Google Earth which incorporates a lot of images. Those were hosted on a server and accessed from there.
Now the server has been shut down, so the web links are dead. Nevertheless, the images are still available, as the server data has been backed up. The links in GE are now marked as invalid, and I can see that there are web links in the form of https://domain.tld/directory/image.jpg in the app.
So I am looking for a way to extract the data, replace the https://domain.tld/directory/ part with an appropriate local directory (C:\directory\), and then load it back into GE.
Or is there any internal function/tool available in GE?
(Some IT knowledge for performing conversions is available.)
If by "documentation on Google Earth" you mean a KML file, then yes, you should be able to update the URLs relatively easily. a KML file is just an XML text file, so you can open it up with any text editor. If you use a full featured text editor then you can do a find/replace on the "https://domain.tld/directory/" part, and replace it with something that looks like: "file:///C:/directory/".
Where you find the URLs will depend on whether the images are used as ground overlays, icons, content in balloons associated with placemarks, etc.
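For anyone who wants to script that same find/replace, here is a minimal sketch in Python; the filename doc.kml and both prefixes are placeholders to adjust for your data:

    # Rewrite the remote image URLs in a KML file to local file:// paths.
    # "doc.kml" and both prefixes are placeholders; adjust them to your data.
    old_prefix = "https://domain.tld/directory/"
    new_prefix = "file:///C:/directory/"

    with open("doc.kml", encoding="utf-8") as f:
        kml = f.read()

    with open("doc_local.kml", "w", encoding="utf-8") as f:
        f.write(kml.replace(old_prefix, new_prefix))

Note that if the file is actually a KMZ, it is just a zip archive with a doc.kml inside, so unzip it first, edit, and re-zip.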
First, thanks for any and all help regarding this topic.
Sites like Facebook and Twitter strip EXIF information from images as they are uploaded. My goal is to allow users to upload images to our platform (working with Nextcloud and others) with full EXIF information; however, we need to display images that do not contain EXIF information or any other metadata. Without stripping the metadata and creating a second, EXIF-free image for each, is it possible to simply hide the EXIF info so that, if a user downloads the image, the EXIF is not embedded?
We were told that the only way to do this is to keep a second, EXIF-free copy (whether it is created before, during, or after upload is irrelevant). I'm hoping there's a way we can simply display such a copy without doubling our physical space requirements.
Thanks again for your help.
EXIF is just one kind of metadata, alongside IPTC, XMP, AFCP, ICC, FPXR, MPF, JPS and the comment segment, for the JFIF/JPEG file format alone. Other picture file formats support even more/other metadata.
You wrote it yourself: a download - so it's a file in any case. Pictures are files, just like executables, movies, texts, music and archives are files, too. Metadata is part of a file's content, so whoever accesses the raw bytes of the file can grab everything in it, and nothing in a file is "please don't look" proof. Whether you create the clean copy on the fly by stripping metadata every time a download is requested, or create it once to preserve performance at the cost of storage space, remains your decision.
If there were something as simple as a "don't show" flag, the metadata would still be in the file and could easily be extracted by software written to ignore that instruction. Seriously, there's no shortcut here - do it properly and don't try to save work at the wrong end.
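As a minimal sketch of the on-the-fly variant, using Pillow (the JPEG assumption and all names here are illustrative, not part of the original question):

    # Strip all metadata from a JPEG by rebuilding the image from its pixel
    # data alone; requires Pillow (pip install Pillow). Re-encoding is lossy,
    # so the quality setting is a trade-off.
    from io import BytesIO
    from PIL import Image

    def strip_metadata(jpeg_bytes: bytes) -> bytes:
        img = Image.open(BytesIO(jpeg_bytes))
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))      # pixels only, no EXIF/IPTC/XMP
        out = BytesIO()
        clean.save(out, format="JPEG", quality=90)
        return out.getvalue()

Run this per request for the on-the-fly option, or once at upload time for the stored-copy option.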
I've been playing around with making a draftjs plugin that lets the user paste mixed text and image content from websites and have the images auto-uploaded to the server. I've quickly come to the realization that it's not easy, simply because of how many different counter-measures sites use against copying/pasting images. Standard image tags in page content are no problem - I can easily grab the src and handle the file upload from the URL (a sketch of that easy case follows below). However, many sites use all kinds of trickery to make this a pain. For example, some will only serve small thumbnails, requiring a GET request on the image with a hash key in order to retrieve a larger version. Others somehow seem to corrupt the image so that it's unreadable by the time it's been retrieved. Others still play with weird embed tags to mess with draftjs' image blocks.
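For reference, the "easy" case I mean looks something like this on the server side (a hedged sketch; Python stands in for whatever the upload backend actually is, and all names here are hypothetical):

    # Fetch the image behind a pasted <img> src and store it locally so the
    # editor can reference our own copy. Illustrative only; a real backend
    # would also validate content type and size.
    import os
    import requests

    def mirror_image(src_url: str, upload_dir: str = "uploads") -> str:
        resp = requests.get(src_url, timeout=10)
        resp.raise_for_status()
        os.makedirs(upload_dir, exist_ok=True)
        name = os.path.basename(src_url.split("?")[0]) or "pasted.img"
        path = os.path.join(upload_dir, name)
        with open(path, "wb") as f:
            f.write(resp.content)
        return path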
But then I open up a Google Docs file and find that when I copy any images into it from a website, there's never any trouble whatsoever. All the problematic websites that I'm finding myself having to write site-specific retrieval methods for seem to be handled by Google Docs with ease.
Am I using completely the wrong approach by trying to retrieve images from a url? Does Google use a far superior approach (yes, I presume) - in which case, does anyone have any idea what that approach might be?
I have a presentation that I would like to publish. I am pretty sure most of the images are CC BY-SA, but I would like to verify that.
Is there a tool that:
exports all images in an .odp-file
searches for these on Google Images
finds the license and attribution for these images, or at least finds the URLs where the images are hosted
You can get all the images by unzipping the .odp file and looking in the resulting Pictures subdirectory (a sketch follows below). However, the filenames are not the original names, so I do not know how you could use them for searching.
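A minimal sketch of that extraction step in Python ("slides.odp" is a placeholder name):

    # An .odp file is a zip archive; the embedded images live under
    # "Pictures/". Extract just those into a local folder.
    import zipfile

    with zipfile.ZipFile("slides.odp") as odp:
        pictures = [n for n in odp.namelist() if n.startswith("Pictures/")]
        odp.extractall("extracted", members=pictures)
        print(pictures)

The extracted files could then be fed to a reverse image search (e.g. Google Images' search-by-image), since searching by filename will not work.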
If you are interested in writing macros, have a look at http://forum.openoffice.org/en/forum/viewtopic.php?f=45&t=64969. Again, I do not think you can determine the original filename this way. If your presentation contains captions that give the name of each image, then it may be possible to search based on those names.
I hope someone can help or at least point me in the right direction. I'm currently building an iPhone application that requires the input of a zip code to find a location. For example, when the user opens the app, the first thing they see is a text box that requires them to input a 5-digit number (zip code) to find various businesses near that location. There is also a slider bar with a 5 mi to 100 mi radius. So, once the zip code is entered and the user has selected the mileage and pressed the submit button, it should show the list of businesses that are local to that area. Does that make sense?
Thanks a bunch everyone.
First of all, a zip code does not map to exactly one location. It may refer to disjoint sets of locations as well (this happens in India). One code like 400093 refers to a place in India and might refer to a different location in Korea. There is no central server that could provide you with this mapping; check the Google APIs for alternatives.
For your use case I would suggest you look at the Foursquare API: get the user's coordinates, use geolocation to detect their place, or directly call the Foursquare API to get the useful information.
Foursquare API
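A hedged sketch of such a call against the legacy Foursquare v2 venues/search endpoint (the credentials and coordinates are placeholders; newer Foursquare Places API versions use a different auth scheme):

    # Search venues around a point; "ll" would come from geocoding the
    # user's zip code. CLIENT_ID/CLIENT_SECRET are placeholders.
    import requests

    params = {
        "ll": "40.7128,-74.0060",   # lat,lng for the entered zip code
        "radius": 8000,             # metres, roughly a 5-mile radius
        "client_id": "CLIENT_ID",
        "client_secret": "CLIENT_SECRET",
        "v": "20180323",            # required API version date
    }
    resp = requests.get("https://api.foursquare.com/v2/venues/search",
                        params=params)
    for venue in resp.json()["response"]["venues"]:
        print(venue["name"], venue["location"].get("address"))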
You can do this in your app, or make a web service that takes the input zip code and returns the region.
It's a lot of work. In a couple of words:
Make a plist file that has all the zip codes you want to support as keys (take a look here; there are quite a few) and the region names as values.
Once you have the zip code, you can find the region. Then, depending on the user's selected range, you can present the closest regions (see the sketch after this list).
Also look here for a detailed explanation of zip code formatting. You will have to learn a bit in order to be able to detect the regions closest to a selected zip code.
As a fast but not secure way, you can use a free web service that does the job (takes a zip, gives a region) without building your own server side (or method in the app), but as I said, it's not secure.
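A minimal sketch of the lookup-plus-radius idea (the coordinate table entries are illustrative only; a real app would load the full zip database):

    # Map zip codes to coordinates, then filter by great-circle distance.
    import math

    ZIP_COORDS = {                      # illustrative entries only
        "10001": (40.7506, -73.9972),
        "10451": (40.8201, -73.9229),
    }

    def haversine_miles(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 3959 * 2 * math.asin(math.sqrt(h))   # Earth radius ~3959 mi

    def zips_within(center_zip, radius_miles):
        center = ZIP_COORDS[center_zip]
        return [z for z, c in ZIP_COORDS.items()
                if haversine_miles(center, c) <= radius_miles]

    print(zips_within("10001", 25))     # all stored zips within 25 miles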
I am doing some updates to a site I have developed over the last few years. It has grown rather erratically (I tried to plan ahead, but with this site it has taken some odd turns).
Anyway, the site has a community blog (blog.domain.com - it used to be domainblog.com) and users with personal areas (user1.domain.com, user2.domain.com, etc.).
The personal areas have standard page content that the user can use, or add snippets of text to partially customize. Now the owner wants the users to be able to create their own content.
Everything is done up to the point of integrating a file browser.
I need a browser that will allow me to do the following:
the browser needs to be able to browse the common files at blog.domain.com/files and the user files at user_x.domain.com/files
the browser will also need to be able to differentiate between the two and generate the appropriate image URL.
of course, the browser's access to the user files will need to be dynamic and only show the files particular to that user (along with the common files) - see the sketch after the directory structure below
I also need to be able to set a file size limit for images
the admin area is in a different directory than either the blog or the user subdomains.
general directory structure
--webdir--
|--client --
|--clientsite--
|--blog (blog.domain.com)
|--sites--
|--main site (domain.com)
|--admin (admin.domain.com)
|--users--
|--user1 (user1.domain.com)
|--user2 (user2.domain.com)
...etc.
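A hedged sketch of the root mapping such a browser's connector would need, based on the layout above (WEBDIR and the host parsing are assumptions; a real connector would also enforce authentication):

    # Resolve which directories a request from a given subdomain may browse:
    # the shared blog files plus that user's own files.
    import os

    WEBDIR = "/webdir/client/clientsite"
    COMMON_FILES = os.path.join(WEBDIR, "blog", "files")

    def roots_for_host(host: str):
        sub = host.split(".")[0]            # "user1" from "user1.domain.com"
        user_files = os.path.join(WEBDIR, "sites", "users", sub, "files")
        return [COMMON_FILES, user_files]

    print(roots_for_host("user1.domain.com"))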
I have tried several different file browsers, and tried using symlinks, but the browsers don't seem to be able to follow them. I am also having trouble even getting them to use a directory other than the default.
What file browser would you recommend, and what would I need to customize to make it work?
TIA
OK, since I have not had any responses to this question, I guess I will have to do a workaround and then look into writing a custom file browser down the road.