Change in Google Earth Collada Support with Version 7.1 - google-earth

I have an application that creates Collada models in Google Earth that worked fine in version 7.0, but with 7.1 the colors stored in a 24-bit PNG texture are no longer displayed. The model displays, but without colors. I tried switching to a JPG (which is supposed to be allowed in Collada), but that didn't work either. Does anyone know what the problem might be?
Additional: I just discovered that if I unzip the KMZ file, my model displays with the colors from my texture. However, if I re-zip it with a different zip program (7-Zip), the problem returns.

Google has acknowledged that it's a bug in 7.1:
"We can confirm the loss of textures in GE 7.1.1.1580 (beta) for images embedded in the kmz. If 7.1 is required, workarounds include reference to a network, web or local path outside the kmz (as you have found). We think this is a glitch in the beta, relating to extraction of the textures from a zip archive, and expect to see a fix in a later update.
For example, change the image reference in your model so that it accesses an alternate image outside of the kmz:
c:/test/sample.png
instead of the original
sample.png
When you Save Place As (*.kmz), GE 7.1 still copies the image into the zip archive and writes sample.png to doc.kml, which is a good sign.
-wxazygy"

Related

Including non-standard resources in Unity HoloLens app

I'm building an app that must visualize a large point cloud on HoloLens 1st gen. As performance is an issue with large clouds, I'm using Potree, an octree-based format that ensures only a preset number of points from the cloud are rendered.
The solution works in the editor, but, you guessed it, not when deployed on HL.
The point cloud in the Potree format is a set of a couple of .json files and hundreds of .bin files stored in hundreds of subfolders following the octree structure, all of that within a single folder, and the path to this folder is accessed by the renderer at runtime. However, I don't know how to include this folder in the HL app. Using Resources doesn't work, as it's not really a standard resource. I've seen Asset Bundle suggested elsewhere, but according to this post asset bundling doesn't work on HL.
Is there a way to simply put this complex file structure in an accessible directory on HoloLens?
I feel completely stuck here and any help would be much appreciated.
Some of the things I've tried:
Keijiro Pcx doesn't work here. If rendered as single pixels, points cannot be seen in AR, and if rendered as meshes, the performance is abysmal (which led me to the conclusion that an octree structure should be used)
the solution here shows how to load one .xml file, but I have hundreds of files so I don't think it would work for me
similarly, this post deals with one .obj file
Unity 2019.4
HoloLens 1st gen
For anyone stumbling upon this - I ended up using Unity StreamingAssets and accessing the folder with Application.streamingAssetsPath - works beautifully!
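For anyone wanting a concrete starting point, here is a minimal sketch of that approach (the pointcloud folder name and cloud.js file are hypothetical placeholders for your Potree output, and the direct System.IO read assumes the UWP/HoloLens behaviour of StreamingAssets):
using System.IO;
using UnityEngine;

public class PotreeStreamingAssetsLoader : MonoBehaviour
{
    void Start()
    {
        // Everything under Assets/StreamingAssets/ is copied into the build as-is,
        // subfolders included, so the Potree octree layout survives deployment.
        string cloudRoot = Path.Combine(Application.streamingAssetsPath, "pointcloud");
        string metadataPath = Path.Combine(cloudRoot, "cloud.js");

        // On HoloLens (UWP) StreamingAssets lives on the local file system, so plain
        // System.IO calls work; on platforms like Android you would need UnityWebRequest.
        string metadata = File.ReadAllText(metadataPath);
        Debug.Log("Loaded Potree metadata from " + cloudRoot + " (" + metadata.Length + " chars)");
    }
}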
To use Pcx, the project needs to be adjusted for binocular rendering in the publishing settings: uncheck "Enable Depth Buffer Sharing" in XR Settings and change "Single Pass" to "Multi Pass".

Import .ai (illustrator file) in Unity and display them

I want to load an Illustrator file in my game. Unity should recognize the different layers, colors, and forms, as well as layers with text, and display them on a 2D canvas.
The goal is that players can click on different forms and that Unity recognizes them as individual forms. Do you know of any Unity asset or another way to make this possible?
For example when you import an image like this as an illustrator file -> https://www.mandala-bilder.de/mandala/erwachsenemandalas/mandala-ideen-erwachsene.pdf
I thought about an SVG file, but then I can't use the different layers.
Illustrator uses a proprietary file format with no publicly available documentation for newer versions. While you can dig out old specifications (which is why some programs only support AI files saved in ancient versions), e.g. http://www.idea2ic.com/File_Formats/Adobe%20Illustrator%20File%20Format.pdf, I do not think you can just go in and start supporting a 2021 variant without requesting (and justifying) the spec from Adobe. They might also want to charge you for it.
SVG, on the other hand, is free and its spec is public, so support is much more widespread. SVG also supports groups, which can get you around the need for layers.
Vector Express is a free conversion API you should be able to use (it requires a network connection, though).
https://github.com/smidyo/vectorexpress-api
You should be able to POST a request (https://docs.unity3d.com/ScriptReference/Networking.UnityWebRequest.Post.html) to this endpoint, with the raw AI file as the body:
POST https://vector.express/api/v2/public/convert/ai/gs/pdf/psd2svg/svg/
This will return a JSON object with a link to an SVG file that you can then download and display.
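A rough, untested sketch of that request in Unity C# (the Content-Type header and the idea of reading the SVG link out of the returned JSON are assumptions based on the description above):
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

public class AiToSvgConverter : MonoBehaviour
{
    const string ConvertUrl = "https://vector.express/api/v2/public/convert/ai/gs/pdf/psd2svg/svg/";

    public IEnumerator Convert(string aiFilePath)
    {
        // Send the raw .ai file bytes as the POST body.
        byte[] aiBytes = File.ReadAllBytes(aiFilePath);
        var request = new UnityWebRequest(ConvertUrl, UnityWebRequest.kHttpVerbPOST);
        request.uploadHandler = new UploadHandlerRaw(aiBytes);
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/postscript"); // assumed MIME type for .ai

        yield return request.SendWebRequest();

        if (request.isNetworkError || request.isHttpError)
        {
            Debug.LogError(request.error);
            yield break;
        }

        // The response should be JSON describing the produced files; pull the SVG URL
        // out of it (exact field names depend on the vector.express API) and download it next.
        Debug.Log(request.downloadHandler.text);
    }
}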

Alfresco PDF thumbnail previews unreadable

Not sure this is the right Stack Exchange site, but it seems to be the place with the most questions about Alfresco I can find, so here goes.
I have Alfresco Community Edition 4.2.d installed on a RHEL5 64-bit box (mainly a default install, bar using MySQL as the database locally). Uploading PDFs to the documentLibrary is fine, and thumbnail previews and Flash previews are generated. If the PDF has been processed by ABBYY OCR (which we have running on a separate server and use to OCR scanned PDFs), the Flash preview generates fine but the thumbnail is incredibly dark and looks as if it has been attacked by a can of spray paint.
I initially thought it could be a Ghostscript issue, but I have updated that to 9.14 and am still getting the problem. I have also tried playing around with ImageMagick, but I can't get a nice clear thumbnail to generate. I am guessing it is a switch in the convert command that Alfresco is using, but I am struggling to work out a combination of switches that will work, and then where Alfresco would store these parameters. Or indeed what switches are currently being used.
I was wondering if anyone has seen this behaviour before with ImageMagick previews in Alfresco 4.2.d? It seems to be something unique to PDFs that have been through the OCR process, so I am guessing I will need to create a separate transformation for them at a later stage.
EDIT: It was suggested that a later version of ImageMagick and GS should resolve it. I have therefore installed GS 9.14 and IM 6.8.9-0 (both compiled from source). Running the following from the command line:
convert /root/test1.pdf[0] /root/test1.png
results in a crystal clear image thumbnail preview. Thinking I was on to a winner I have amended the following lines in alfresco-global.properties to point to the system location of GS and IM:
img.root=/usr
img.dyn=${img.root}/lib
img.exe=${img.root}/bin/convert
img.gslib = /usr/local/share/ghostscript/9.14/lib/
and Alfresco loads. However, the thumbnail preview generated by Alfresco using the new versions of IM and GS still does not result in nice clean previews.
I am guessing that Alfresco is passing some command line switch during the conversion that is undoing the good work of the later versions of these programs. Does anyone know where the switches for thumbnail creation might be stored in Alfresco?
I guess it's related to transparency and the default black background. I didn't find an easy way to add the required parameters to the script except to register a new transformer supporting more parameters, like:
-fill white -opaque none
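If it helps, those flags can be tried directly on the command line first, mirroring the earlier test (same test file as above), before wiring them into a custom transformer:
convert /root/test1.pdf[0] -fill white -opaque none /root/test1.png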

Interactive PDF on iOS

I have been looking for a way to present an interactive PDF file (created in InDesign) on the iPhone. I have read a bunch of questions here, but none says how to do it. The PDF file contains text and, in the middle, a 3D model, but when I present it on the iPhone it shows only the text and an empty white box where the model should appear.
Is it even possible to do it?
I'll be glad for any assistance on this subject, or even a pointer on where to look.
Thanks in advance,
Shahar.
Apple's PDF parser does not support 3D content. You're better off implementing the 3D part yourself and adding it as a UIView on top of the PDF. There are several PDF frameworks that help with that (see https://stackoverflow.com/questions/3801358/pdf-parsing-library-for-ios).
Another alternative might be licensing Adobe's iOS rendering engine. But I doubt that they have already added 3D support (or that they will). Also, from what my sources tell me, pricing is rather high and apparently the framework is not very developer friendly. (But I haven't used it myself.)

High level process of extracting images from a container

Right, this is the problem: I have a container (rar, zip) which contains images (PNGs, TIFFs, BMPs or JPEGs) in a particular order.
The file extension isn't zip or rar, though, but it uses the same compression.
I want to pull out a list of the images contained within the file in numerical order, then, depending on the user's decision, go to the selected image.
I'm not after any code, just the high-level thought process/logic of how this can be achieved, and how it could be achieved on iPhone OS.
From what I know of iPhone OS, it uses a kind of sandbox environment, so how would this affect the process as well?
Thanks
You can include the libz framework in your project and write some C to manage zipped data. Or you can use Objective-C wrapper classes others have written.
Your application resides in its own sandbox. You can include zip files in the "bundle", i.e. add them to your project, and copy them to the application's Documents folder to work with them. Or you can copy archived data over the network to the application's Documents folder if you don't want to include files in your project.
I don't think the extension matters so much as the data being in the format you expect it to be.
Everything I wrote above is for zipped files. If you're working with rar-formatted archives, you'll need to look at making a static library for the iPhone, perhaps from the UnRAR source code.