Import CorelDraw's CDR into Fireworks?

I have the layout of a website in CorelDraw X4 and I need to move it to Fireworks CS5 (for many reasons). The thing is that, apparently, the only method I was able to find on the Internet doesn't work very well. What I do is export the file from Draw to AI (Adobe Illustrator) format. Then I import the file into Fireworks, but there, strange things happen. The first thing is that borders are thicker after this process (e.g. from 1 to 4), but the real problem comes with some objects that are converted to bitmaps (or so I think). When I delete all the bitmaps, only a few objects remain, and that's obviously undesired. In my original file I use transparencies and gradients applied to many different objects.
Do you know why this happens and/or a possible solution? Thanks!
Edit: I think I'm getting closer! Apparently the AI format doesn't support transparencies, so... either I remove all transparencies before exporting (not very nice, but what can I do, right?) or I ungroup all objects once imported into Fireworks and then carefully delete the bitmaps (which seem to be AI's approximation of transparencies). All this is just from testing; if someone knows what is happening or of another solution, please throw some light on it. Thanks...

Ok, as nobody else answered my question, I suppose I can consider myself capable of providing more information than anybody else, ha!
I've been studying the case and reached a semi-solution. Apparently, AI is the only vector format that can be exported and imported by both editors. The problem with this is that AI doesn't support transparencies or shadows. So... if you really want to do this, be prepared to work a bit.
What I did was to copy all the shapes without effects using this export/import method (surprisingly, line thickness was preserved correctly this time), then I examined shape by shape in Corel and applied the same effect (or its best approximation) in Fireworks. This wasn't easy because the way both programs apply shadows and transparencies is a bit different. Yeah, it's not easy, but it's all we've got...
Little tip: in my case I had some shapes with transparencies AND shadows. In Corel these shadows were strong, as if the object were solid (not transparent). In Fireworks, the shadow disappears with the object when the transparency is applied (as logically expected). What I did to solve this was to copy the object and apply a Gaussian blur to the copy in the back, acting as a full shadow even when the object in front was fading to transparent.

Some questions about the render order of the UIs in Unity3D

I found something beyond my understanding: two images with a text overlapping one of them. The screenshots show how they're ordered, and the result is 3 batches!
This is confusing me. According to an official article I've read, "Unity UIs are constructed back-to-front, with objects’ order in the hierarchy determining their sort order. Objects earlier in the hierarchy are considered behind objects later in the hierarchy. Batches are built by walking the hierarchy top-to-bottom and collecting all objects which use the same material, the same texture and do not have intermediate layers."
In my understanding, the text should be rendered before the two images, and the batches should be 2 rather than 3. So what's really happening here?
Because the text renders first, when the square renders it goes on top of the already-drawn pixels.
"As for batches, batching means that it takes all the same elements, that have the same texture and material, and tries to put them in 1 draw call if it can. Text, on the other hand, is always in a different atlas. So if you have an Image followed by Text , you'll have 2 draw calls as they cannot be batched together." - referenced here
You may need to enable static or dynamic batching in the Player Settings as well; if the objects are static, try marking them as Batching Static in the Inspector. I would refer to the link above for more of the reasons why batching may not work, as it seems there can be plenty of them!
EDIT: Are you using the default Unity text or TextMesh Pro? It will most likely work the same in terms of render order and batches, but if you are not using TMP, you should be: it is much more efficient, renders better, and has more capabilities in general. There is a good reason Unity bought the product to adapt into their base engine.

How to increase the dataset?

I am doing a project on face recognition. I have a dataset containing images of 21 actors (150 each). Now I want to increase the number of images of each actor to 300+ for training purposes. How can I do this using MATLAB? One solution is to vary the contrast/brightness level of each image, but what are some other ways to increase the number of images?
One option is for you to flip the images: if a person is looking to the right, after the flip he will be looking to the left.
Furthermore, depending on your toolkit and set of skills, you could try some more advanced techniques. If you can find some interesting characteristics in the pictures, like eyes, nose, mouth, or background, you could make some intelligent transformations with those: swap people's eyes, change the background, switch noses.
There are some particular parts of the faces which you could also distort, like the eyes and nose: stretch them. Maybe for bald guys you could build some synthetic hair, and so on...
You could do the contrast/brightness level change, but usually it doesn't do so well, as your features probably don't have (almost) anything to do with it, so it will just be a duplication of your data.
Anyway, as it's not a very large dataset, if you don't have the skills to pull off the more advanced options I proposed, or the time to deal with them, you can do some of this manually. It won't take as long as you think. And usually, with that amount of data, this will give a good boost to your results.
What you are looking for is called "data augmentation". Common transformations are mirroring (flipping left / right side of the image) and rotation of the image. You might also be able to zoom (crop) a part of the image.
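The question asks for MATLAB, but the transformations themselves are tool-agnostic. Purely as an illustrative sketch (here in Swift with UIKit; the function name and the ±5° angles are my own arbitrary choices), mirroring and small rotations could look like this:

```swift
import UIKit

// Illustrative sketch only: produce a mirrored copy and two slightly rotated
// copies of an image, the most common augmentation transforms.
func augmentedVariants(of image: UIImage) -> [UIImage] {
    var variants: [UIImage] = []
    let renderer = UIGraphicsImageRenderer(size: image.size)

    // Horizontal mirror: a face looking right now looks left.
    variants.append(renderer.image { ctx in
        ctx.cgContext.translateBy(x: image.size.width, y: 0)
        ctx.cgContext.scaleBy(x: -1, y: 1)
        image.draw(at: .zero)
    })

    // Small rotations around the image centre (±5°, an arbitrary choice).
    for degrees in [-5.0, 5.0] {
        variants.append(renderer.image { ctx in
            let c = ctx.cgContext
            c.translateBy(x: image.size.width / 2, y: image.size.height / 2)
            c.rotate(by: CGFloat(degrees * Double.pi / 180))
            image.draw(at: CGPoint(x: -image.size.width / 2, y: -image.size.height / 2))
        })
    }
    return variants
}
```

The same mirror/rotate/crop ideas map one-to-one onto MATLAB's image functions; the point is only that each transform yields a plausible new training sample from an existing one.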
Scaled versions along with the rotated ones may help. If your features are not robust to changes such as lighting, contrast, etc., you can modify the images accordingly.

Is using MapServer to merge several map layers at runtime for use with Leaflet a good idea?

MY PROBLEM
We're doing a project right now where we have to display a huge image (containing chemical compounds and elements, so not georeferenced) as a map within a web application (with Leaflet). The image itself is an Adobe Illustrator file, so it's actually a bunch of vector graphics. To make things easy, we just converted it into a large .png (27,000 x 19,000 px) and then used MapTiler to create the map resources needed for Leaflet, easily included within a TileLayer.
The Problem is:
The user needs to be able to dynamically add and remove different layers (= filters) of the map to show more or less information from the picture. So we first created those layers within the Illustrator file, then exported every layer as its own transparent .png file, ran it through MapTiler, and included it as its own Leaflet layer.
Right now, we have 6 filter layers and two more base layers for the background and an overlay. This means that when all filters are activated (which is the default), we have 8 Leaflet layers stacked on top of each other at once. As you can imagine, this causes some performance issues in the browser, since Leaflet has to load and render 8 layers with all their tiles (depending on screen size, up to 25 at once) for every zoom or drag action. It's still bearable at this point, but we are expecting several more filters to come and therefore want to stay scalable in the future.
This means we will somehow have to change our approach of generating the Layers.
MY APPROACH SO FAR
Since we actually have a vector-graphics-based map, I thought there had to be better alternatives. But it seems that we have a rare case of requirements, since my research mostly ended in dead ends, especially since most of the cases only cover REAL geographical maps, but what we have is a raster map. I also thought about somehow putting the map into GeoJSON or redrawing it directly with SVG, but since we have LOTS of single elements on the map (> 20k), I don't think this would perform much better.
So I kind of need to stay with the bitmaps, and therefore my main goal is simple: I want to reduce the number of layers by merging the tiles of the currently activated filters into one single .png, which then gets delivered to Leaflet within ONE layer. I spent some hours researching this, but I always ran into dead ends, since most people deal with georeferenced data, not with custom raster maps.
So right now, I can think of 2 different options:
Create ONE layer for every filter combination. This means we would have to create 2^n layers, so this would only work up to a certain number of filters (which will probably increase) - therefore, I would prefer another solution (this is only a last resort).
Use MapServer and somehow import my layers. Then we could merge the layers at runtime with a query (I read about the Union Layer here) and therefore only deliver ONE layer to Leaflet.
MY QUESTION
I have absolutely no experience with MapServer and I'm therefore not even sure if this is a valid use case or if it's capable of doing this, and more importantly: whether it would really give us a performance boost, since it probably requires a lot of logic server-side.
Before I spend more hours trying this out:
Can someone who has already worked with MapServer give me some feedback on whether this is even a good idea, or whether I am misunderstanding MapServer completely?
Also, if someone has another alternative or idea for me, you're more than welcome to share it; I'm grateful for every input. :)
Thanks in advance!
You might want to look at OpenLayers, where you can display a mix of raster and vector layers. Another option might be MapCache, a tile caching engine that is part of the MapServer project. It has the ability to do vertical assembly of tiles, so in your case, where you have 8 layers, you can ask MapCache to stack all eight tiles into a single tile. You give it a list of layers to stack and it takes care of it for you. You can also do this with MapServer; the difference is that MapCache is a lightweight Apache module that just works with tiles and is probably a little faster, while MapServer is a CGI process that is efficient at rendering and combining raster layers but is probably not as fast as MapCache for simple assembly of tiles.

Tips on creating a custom view layout for a diagram

I need to create an algorithm to layout some hierarchical data but have never done this kind of thing before and need some broad tips.
Basically I need to recreate this diagram (with dynamic data):
diagram http://dl.dropbox.com/u/15126868/diagram.png
I don't have a problem with most of it but need help with two things:
How do I approach writing a layout algorithm?
Should I use UIView subclasses for all discs, or use Quartz (I do need interaction)?
Any suggestions most welcome. I don't need too much detail.
A bit more detail:
I'm currently thinking I should use UIView subclasses and layoutSubviews. Trouble is I need to know the size (at least roughly) of all nodes before I can start to position them. Then, as the positioning involves rotation, I may need to adjust child positioning again - and I can't add labels until after any rotation.
Other considerations seem to be: that the presentation area is rectangular, not square; that I can't spill off the page; and that I will need to animate changes to the sizes of the discs.
Any pointers would be great, thanks.
This sort of thing is very difficult.
Interestingly, perhaps the main initial constraint here is the size of the typography.
In the example given: observe they could have chosen a different SCPT** somewhat larger (perhaps 10%-15% larger) or somewhat smaller and it would have still worked. They made an aesthetic decision on the SCPT.
White space is critical to design. Their particular graphic designer happened to like the particular feel of white space which you see. But it would have by no means been "wrong" with a smaller SCPT. Further, observe they could have used an even larger SCPT ... IF ... they used a smaller point size on the typography.
Note that in any event you simply won't be able to display that much type that small on an iPad (or iPhone 4).
So straight away you have to make decisions about how the type will appear: popup, audio, or whatever. Even the white type (the "on the discs" type) will give you trouble.
You will have to do lots of tests with Photoshop mockups on your iPad before even proceeding with an algorithm. So, purely for what it's worth...
Here's how I personally would do this sort of thing. In general plan: I would try to do a squishy algorithm that retries itself until it finds a result it is happy with.
IMHO, based on previously doing this type of thing: this problem is too hard to get it done in one go with some particularly smart-ass heuristic. Since there is no one smart-ass heuristic that will save the day, I'd do this:
1) calculate the total trillions to display. (it looks like about 2.5 is the total in the example image)
2) guess a SCPT value to begin with. what about for example "18" based on the actual image at the screen size we see above as posted inside your question.
3) put the big one (sun) in the dead center, and for the middle ones (planets) -- just choose a very easy heuristic, say from biggest to smallest going anticlockwise starting at the top left (don't try to get cleverer than that with that part of the problem - which indeed could be a huge research project purely on its own) .. and do the same with the small ones (moons).
4) for the sticks between planets and moons - adopt a trivial solution (like "always 0.5 cm"!!) and that's that. with AI you gotta cut your losses .. everywhere! :) Fix the moons to the planets and forget about them.
5) Now a hard part .. run some sort of heuristic over them that evenly balances what you have so far. treat color as mass and no color as no mass and move the "sun" until it is balancedish. (to be clear, as an example that would be likely downwards if you followed the "planet" layout mentioned in 3.) maybe also move all the planet/moon systems in-out to try to balance it.
6) next the iteration. look at that result and decide if you like it! go back to (2) and pick a new value. (maybe "16!" for example)
7) there are two possible outcomes here. it might be that during development, there is one magic value for SCPT that always works. perhaps "14.3" or "18.2" or whatever. if you find such a value, never tell anyone. keep it as your own secret information!!!! milk it for everything it is worth with clients. conversely and more difficult, you might find you need a different value each time. in that case: your AI will have to iterate through values on its own until it finds one it likes. (for example, by determining whether all your labels fit or not .. and obvious things like "are they touching", "all on screen", etc.)
Anyway FWIW (perhaps nothing) that is what I would do - an iterative approach based on a first guess for the SCPT.
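To make the shape of that loop concrete, here is a very rough sketch in Swift. Everything in it (the Disc type, the points-per-centimetre conversion, the naive anticlockwise ring placement, the 0.9 shrink factor) is my own illustrative stand-in for steps 1-7, not a real implementation of the balancing heuristics:

```swift
import UIKit

// Sketch of the "guess an SCPT, lay out, check, retry" idea.
struct Disc {
    let trillions: Double
    var center: CGPoint = .zero
    var radius: CGFloat = 0
}

// Convert a value to a disc radius for a given SCPT (square cm per trillion).
func radius(forTrillions t: Double, scpt: Double) -> CGFloat {
    let areaCm2 = t * scpt
    let radiusCm = (areaCm2 / Double.pi).squareRoot()
    return CGFloat(radiusCm * 28.35)   // ~28.35 points per centimetre
}

func layoutDiscs(values: [Double], in bounds: CGRect, startingSCPT: Double = 18) -> [Disc]? {
    guard !values.isEmpty else { return [] }
    var scpt = startingSCPT
    while scpt > 1 {                                    // give up below some floor
        var discs = values
            .sorted(by: >)
            .map { Disc(trillions: $0, radius: radius(forTrillions: $0, scpt: scpt)) }

        // Crude placement: biggest disc ("sun") in the centre, the rest on a ring.
        discs[0].center = CGPoint(x: bounds.midX, y: bounds.midY)
        for i in 1..<discs.count {
            let angle = Double(i) * 2 * Double.pi / Double(discs.count - 1)
            let orbit = discs[0].radius + discs[i].radius + 12   // 12 pt gap, arbitrary
            discs[i].center = CGPoint(x: bounds.midX + CGFloat(cos(angle)) * orbit,
                                      y: bounds.midY - CGFloat(sin(angle)) * orbit)
        }

        // Crude "do I like it" test from step 7: every disc fully on screen.
        let fits = discs.allSatisfy {
            bounds.insetBy(dx: $0.radius, dy: $0.radius).contains($0.center)
        }
        if fits { return discs }

        scpt *= 0.9   // step 6: not happy, retry with a smaller scale
    }
    return nil        // no scale worked above the floor
}
```

The real version would swap the ring placement and the on-screen test for the balancing and label-fitting checks described above; the retry loop itself is the part that matters.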
Incidentally: you may well want to buy and study the classic and brilliant book on this sort of display of information!!! Everyone should have a copy.
Tufte's The Visual Display of Quantitative Information
by Edward R. Tufte
ISBN 0961392142
Regarding the mechanics of laying out the image: you should use Quartz or any other low-level drawing - forget about UIViews and the like. You should surely completely separate the logic from the drawing layer, so that (even if you do want to change to UIViews, OpenGL ES, or whatever) it's only a few lines of code to change.
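As a small sketch of what that separation might look like (invented names; the view just renders whatever centres and radii the layout code hands it):

```swift
import UIKit

// Sketch of the separation: the view knows nothing about how the positions
// were computed, so the layout logic can be developed and tested on its own.
final class DiagramView: UIView {
    var discs: [(center: CGPoint, radius: CGFloat)] = [] {
        didSet { setNeedsDisplay() }   // re-render whenever the layout changes
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        for disc in discs {
            let box = CGRect(x: disc.center.x - disc.radius,
                             y: disc.center.y - disc.radius,
                             width: disc.radius * 2,
                             height: disc.radius * 2)
            ctx.setFillColor(UIColor.systemTeal.cgColor)
            ctx.fillEllipse(in: box)
        }
    }
}
```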
Hope it helps somehow.
Notes...
** SCPT .. square centimeters per trillion
Followup...
"To keep the logic separate would you use a manager-type pattern?"
To be honest: if I was doing it, I would just start a whole new app purely for the "research" of getting this part, this challenge, working right. In that app (to be honest!) I would make bugger all effort to do anything in any tidy manner whatsoever! :-/ Globals everywhere! :) Unfortunately for me I can only think of the one thing at a time, so at that stage I would only be thinking about the algorithm, per se.
I believe, once you cracked the problem per se, once you came to implement it in a bigger project ... really, FWIW, if it was me, I'd simply make it a class (let's say AmazingClass) nothing more complicated than that. Personally I would set the data somewhere separately (whether in a DB or just an array or whatever) and I would just let the AmazingClass take care of getting the data, even. (My thinking - you never know how the hell you're going to need the data and when, at what point in the process of AmazingClass. So, just give up and let AmazingClass take it as and when it wants it.)
If you are familiar with these awesome-sounding manager-patterns of which you speak - yeah, why not! In short I would heavily separate it out as much as possible. I'm not good enough to speak on the best way to do that - but just completely separate it out somewhere. Sorry I can't help on that one.

Line smoothing in Cocoa Touch

How would I smooth a line (UIBezierPath) or a set of points? Right now it draws jagged. I read about spline interpolation; could anyone point me to an implementation of this in Cocoa or C, or give me an alternative line-smoothing algorithm?
I don't think you need to do Bezier paths with curves. You can keep drawing straight line segments but add more data points with interpolation. This is especially important because I'm assuming you want to smooth only on one axis so you don't end up with odd things like loops in your graph.
So you want to add more points to your source data, between the existing points, and use an interpolation algorithm that's more sophisticated than a linear interpolation. There are many to choose from. Quadratic? Sine-based? Many, and it depends on what kind of data you're using.
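For instance, a cosine-based resampling (just one of the many possible interpolants; the function name and the 8-steps default are my own illustrative choices) might look like this in Swift:

```swift
import UIKit

// Sketch: insert `steps` extra points between each pair of samples, easing the
// y-value with a cosine curve instead of a straight line. The x-values stay
// monotonic, so the smoothed graph cannot loop back on itself.
func smoothed(_ points: [CGPoint], steps: Int = 8) -> [CGPoint] {
    guard points.count > 1, steps > 0 else { return points }
    var result: [CGPoint] = []
    for (a, b) in zip(points, points.dropFirst()) {
        for i in 0..<steps {
            let t = Double(i) / Double(steps)
            let eased = (1 - cos(t * Double.pi)) / 2      // cosine interpolation factor
            result.append(CGPoint(x: a.x + (b.x - a.x) * CGFloat(t),
                                  y: a.y + (b.y - a.y) * CGFloat(eased)))
        }
    }
    result.append(points[points.count - 1])
    return result
}

// The denser point list can then be drawn as ordinary straight segments:
//   let path = UIBezierPath()
//   path.move(to: pts[0])
//   pts.dropFirst().forEach { path.addLine(to: $0) }
```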
Quartz (which UIKit uses for drawing, and in many places makes you use directly for drawing) has anti-aliasing support built-in. Most contexts have it turned on already, so you should not have aliased (jagged) drawing unless you're turning anti-aliasing off. So, stop doing that. :-)
The contexts that don't have it turned on by default are mostly those where it isn't appropriate, such as PDF contexts and CGLayer contexts. The documentation implies that those contexts don't even support anti-aliasing, which makes some amount of sense.
CGContext provides a couple of functions for turning anti-aliasing on and off, but you should never need to call them except when you want aliasing, which you don't. You could try turning it on using those functions; if that works, then you should investigate why it was ever off in the first place.
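If you do want to check, the Swift spellings of those CGContext calls look roughly like this (a minimal sketch inside a hypothetical custom view's draw method; normally you would not need them at all):

```swift
import UIKit

// Minimal sketch: explicitly re-enable anti-aliasing before stroking a path.
final class GraphView: UIView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setShouldAntialias(true)       // per-drawing-operation switch
        ctx.setAllowsAntialiasing(true)    // per-context master switch

        let path = UIBezierPath()
        path.move(to: CGPoint(x: 10, y: 40))
        path.addLine(to: CGPoint(x: 120, y: 12))
        path.lineWidth = 2
        UIColor.systemBlue.setStroke()
        path.stroke()
    }
}
```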
Are you drawing the path from within a CALayer? That may be why it's off; there's an Info.plist key you have to turn on to get anti-aliasing turned on by default in such contexts.
I've found that if you draw a line or an image right on the edge of your frame, it will appear jagged. Move the line in a few pixels (or grow your frame) and it should appear nice and crisp. Again, not sure if that's your question or not, but it has bitten me a few times.
For instance if you are displaying an image inside a CALayer, make sure there is space between the image and the frame if you are doing anything but 90 degree angles.