I'm building modules of different sizes and I want to reflect that in the presentation during the simulation run. I have the parameters (height, width, length) set on the agent, but the presentation doesn't change when I run the simulation. The first image shows how it looks now, but I want it to look like the second image. I read the AnyLogic help, but it's not very helpful. Where is a good source for learning how to do this?
Suppose you have a file that includes objects, for example EE components like transistors and resistors, and you group them into one shape and then drag a corner to scale it into a bigger figure.
How can I make sure that the components themselves are not scaled, and only the wiring changes?
The problem is that I have about 30 drawings of different sizes and I'm placing them side by side in a table. If I keep the same scale, some drawings look small compared to the others. So I tried scaling them to the same size, but then the component sizes are also scaled up, each by a different factor.
Here is an example of a circuit using the built-in shapes in Visio. As you can see, the component sizes got bigger when I scaled up the grouped object. That is usually desired, but in my specific case I want to keep the component sizes the same.
Here is the Visio file, or I think you can also reproduce this with any available components in Visio.
https://file.io/VRUCR8yVgYxs
I know how to use the Core ML library to train a model and use it. However, I was wondering if it's possible to feed the model more than one image so that it can make the identification with better accuracy.
The reason for this is that I'm trying to build an app that classifies histological slides; however, many of them look quite similar, so I thought maybe I could feed the model images at different magnifications in order to make the identification. Is it possible?
Thank you,
Mehdi
Yes, this is a common technique. You can give Core ML the images at different scales or use different crops from the same larger image.
A typical approach is to take 4 corner crops and 1 center crop, and also horizontally flip these, so you have 10 images total. Then feed these to Core ML as a batch. (Maybe in your case it makes sense to also vertically flip the crops.)
To get the final prediction, take the average of the predicted probabilities for all images.
Note that in order to use images at different sizes, the model must be configured to support "size flexibility". And it must also be trained on images of different sizes to get good results.
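To make the averaging step concrete, here is a minimal Python sketch of the 10-crop idea. This is not Core ML API code; predict_probabilities is a stand-in for whatever runs your model on a single image (for example via Core ML/Vision on the device), and the crop size is only an example value.

import numpy as np
from PIL import Image

def ten_crops(img, crop_size):
    # 4 corner crops + 1 center crop, plus horizontal flips of all five.
    w, h = img.size
    c = crop_size
    boxes = [
        (0, 0, c, c), (w - c, 0, w, c),
        (0, h - c, c, h), (w - c, h - c, w, h),
        ((w - c) // 2, (h - c) // 2, (w - c) // 2 + c, (h - c) // 2 + c),
    ]
    crops = [img.crop(b) for b in boxes]
    return crops + [crop.transpose(Image.FLIP_LEFT_RIGHT) for crop in crops]

def predict_probabilities(crop):
    # Stand-in for the actual model call; returns a probability vector.
    raise NotImplementedError

def classify(img, crop_size=299):  # crop_size is just an example
    probs = np.stack([predict_probabilities(c) for c in ten_crops(img, crop_size)])
    return probs.mean(axis=0)      # average of the predicted probabilities

The same averaging works if you feed the model images at different magnifications instead of crops, as long as each one gives you a probability vector over the same set of classes.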
I am using Dymola. Assuming that I have two components in my model, I want to use the same visual size for the components that share the same type.
So how could I set the visual size of one component according to another one?
I am not planning to edit the annotation code by hand, because that could be too much trouble when there are many components.
I think using annotations will be the only way to go. This is where the position and size of a component are determined. The only way that comes to my mind is using parameters to set these positions as (partially) shown below.
model pos_params
  parameter Real pos_x1 = -10; // x-coordinate of the icon extent, set via a parameter
  Modelica.Blocks.Sources.Constant const
    annotation (Placement(transformation(extent={{pos_x1,-10},{10,10}})));
end pos_params;
To get to your result you would need some additional parameters in multiple components.
Still, doing this in Dymola will make the graphical manipulation of the size and position of a component cumbersome, as the icon will be set to have zero size.
There is no way around annotations, because they define the graphical representation of components. But you can copy-paste the relevant extent values easily from one component to another using the Annotation window.
I have a fully convolutional neural network, U-Net, described in the paper below.
https://arxiv.org/pdf/1505.04597.pdf
I want to use it to do pixelwise classification of images. I have my training images available in two sizes: 512x512 and 768x768. In the initial step I use reflection padding of size (256,256,256,256) for the former and (384,384,384,384) for the latter. I do successive padding before convolutions, so the output has the same size as the input.
But since my padding depends on the input image's size, I can't build a generalised model (I am using Torch).
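To make this concrete, here is roughly what the padding step looks like, sketched in PyTorch purely for illustration (my actual code is in Torch; only the two image sizes mentioned above are used):

import torch
import torch.nn as nn

# Reflection padding whose amount depends on the input size,
# which is what ties the network to a single resolution.
pad_512 = nn.ReflectionPad2d(256)   # for 512x512 inputs
pad_768 = nn.ReflectionPad2d(384)   # for 768x768 inputs

x512 = torch.randn(1, 1, 512, 512)
x768 = torch.randn(1, 1, 768, 768)

print(pad_512(x512).shape)  # torch.Size([1, 1, 1024, 1024])
print(pad_768(x768).shape)  # torch.Size([1, 1, 1536, 1536])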
How is the padding done in such cases?
I am new to deep learning, any help would be great. Thanks.
Your model will only accept images of the size of the first layer. You have to pre-process all of them before forwarding them to the network. In order to do so, you can use:
image.scale(img, width, height, 'bilinear')
img is the image to scale, width and height are the size of the first layer of your model (if I'm not mistaken it is 572x572), and 'bilinear' is the algorithm used to scale the image.
Keep in mind that it might be necessary to subtract the mean of the image or to change it to BGR (depending on how the model was trained).
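For comparison, a rough Python/PIL version of that preprocessing might look like the sketch below. The 572x572 size, the mean value, and the BGR swap are placeholders; whether you need them depends on how the model was trained.

import numpy as np
from PIL import Image

def preprocess(path, size=(572, 572), mean=None, to_bgr=False):
    # Resize to the network's input size with bilinear interpolation.
    img = Image.open(path).convert('RGB').resize(size, Image.BILINEAR)
    arr = np.array(img, dtype=np.float32)
    if mean is not None:
        arr -= mean            # e.g. per-channel mean used during training
    if to_bgr:
        arr = arr[:, :, ::-1]  # RGB -> BGR
    return arr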
The first thing to do is to process all of your images to be the same size. The CONV layer input requires all images to be of the specified dimensions.
Caffe lets you do a reshape within the prototxt file; in Torch, I think there's a comparable command you can drop at the front of createModel, but I don't recall its name. If not, then you'll need to do it outside the model flow.
I'm using FlowCoverView, an open source (and App Store compliant) alternative to Apple's Cover Flow (you can find it here: http://chaosinmotion.com/flowcover.m).
How can I change the tile (or texture as it's called in the library) size (statically at least)?
The code comes with a statically fixed 256x256 tile size. This is set using the TEXTURESIZE define and by hard-coding the number 256 within the code.
I tried to replace all occurrences of 256 with the TEXTURESIZE define and it works... as long as the define is set to 256! As soon as I put in another value, I get white 256x256 images in the flow view (I pass correctly dimensioned UIImages through the delegate, of course) :(
I can't find where else in the code this 256 value could still be used. I repeat: all occurrences of 256 were replaced by the TEXTURESIZE constant.
PS
Of course, the next step to improve this nice library will be to make the TEXTURESIZE a property, to be set at runtime...
Use this instead. It is much easier to implement than the one you're using.
https://github.com/lucascorrea/iCarousel
Change the TEXTURESIZE to 512, clean the build, and run it.