MATLAB / SIMULINK: Simulate filling and emptying of a river

Background:
A river has a non-constant cross-section. Under standard conditions, the water level amounts to h_Std (see figure below).
As it begins to rain, the water level rises until it is equal to h_Rain (see figure below).
After the rain stops, the water level decreases back to the standard water level.
As one can see from the diagrams, the relation between water level and volume is nonlinear. However, the function can be described mathematically and is known for my particular cross-section of the river.
Problem description:
I want to simulate the water level of the river over time in case of rain. The rain is represented by a signal which can either be 0 (not raining) or 1 (raining) (see red curve in figure below):
The dark-blue parts of the lower diagram are nonlinear and represent the section between h_Std and h_Rain from the diagrams above. The time for completely filling the river is known (t_Fill).
Generally speaking, I want to activate a user-defined function (in my case the relation between the amount of water / rain and the water level of the river) triggered by an external signal (in my case represented by the "rain" signal).
How can I obtain such a function (either with a piece of MATLAB code or with Simulink blocks)?

There are several ways this could be done, one of which is to use enabled subsystems to process the raining and not raining phases. You'd need to change the contents of the 2 subsystems below to reflect your exact height profile in the 2 regions.
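If you prefer a few lines of plain MATLAB over enabled subsystems, a minimal sketch of the same behaviour could look like the following. Note that h_Std, h_Rain, the example rain signal, and the quadratic height relation are only placeholders for your known values, and emptying is assumed to take the same time t_Fill as filling:

% Integrate a fill state while it rains, reverse it while it does not,
% then map that state to a water level through your known height relation.
h_Std  = 1.0;  h_Rain = 2.0;            % standard and rain water levels (placeholders)
t_Fill = 30;                            % time to fill the river completely [s]
dt = 0.1;  t = 0:dt:200;
rain = double(t >= 20 & t <= 100);      % example rain signal: 1 while raining
fillState = 0;                          % 0 = at h_Std, 1 = at h_Rain
h = zeros(size(t));
for k = 1:numel(t)
    fillState = fillState + (2*rain(k) - 1) * dt / t_Fill;   % fill or empty
    fillState = min(max(fillState, 0), 1);                   % clamp to [0, 1]
    h(k) = h_Std + (h_Rain - h_Std) * fillState^2;           % placeholder nonlinear relation
end
plot(t, h); xlabel('t [s]'); ylabel('water level h');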


What are "Activations", "Activation Gradients", "Weights" and "Weight Gradients" in Convolutional Neural Networks?

I've just finished reading the notes for Stanford's CS231n on CNNs and there is a link to a live demo; however, I am unsure what "Activations", "Activation Gradients", "Weights" and "Weight Gradients" are referring to in the demo. The screenshots below have been copied from the demo.
Confusion point 1
I'm first confused by what "activations" refers to for the input layer. Based on the notes, I thought that the activation layer refers to the RELU layer in a CNN, which essentially tells the CNN which neurons should be lit up (using the RELU function). I'm not sure how that relates to the input layer as shown below. Furthermore, why are there two images displayed? The first image seems to display the image that is provided to the CNN but I'm unable to distinguish what the second image is displaying.
Confusion point 2
I'm unsure what "activations" and "activation gradients" are displaying here for the same reason as above. I think the "weights" display what the 16 filters in the convolution layer look like, but I'm not sure what "Weight Gradients" is supposed to be showing.
Confusion point 3
I think I understand what the "activations" are referring to in the ReLU layers. They display the output images of all 16 filters after the ReLU function has been applied to every value (pixel) of the output image, which is why each of the 16 images contains pixels that are black (un-activated) or some shade of white (activated). However, I don't understand what "activation gradients" is referring to.
Confusion point 4
Also don't understand what "activation gradients" is referring to here.
I'm hoping that by understanding this demo, I'll understand CNNs a little more.
This question is similar to this question, but not quite. Also, here's a link to the ConvNetJS example code with comments (here's a link to the full documentation). You can take a look at the code at the top of the demo page for the code itself.
An activation function is a function that takes in some input and outputs some value based on whether it reaches some "threshold" (this is specific to each different activation function). This comes from how neurons work, where they take some electrical input and will only activate if they reach some threshold.
Confusion Point 1: The first set of images shows the raw input image (the left coloured image); the right of the two images is the output after going through the activation functions. You shouldn't really be able to interpret the second image, because it has gone through a series of non-linear, seemingly random transformations through the network.
Confusion Point 2: Similar to the previous point, the "activations" are the functions the image pixel information is passed into. A gradient is essentially the slope of the activation function. It appears more sparse (i.e., colors show up in only certain places) because it shows possible areas in the image that each node is focusing on. For example, the 6th image on the first row has some color in the bottom left corner; this may indicate a large change in the activation function to indicate something interesting in this area. This article may clear up some confusion on weights and activation functions. And this article has some really great visuals on what each step is doing.
Confusion Point 3: This confused me at first, because if you think about a ReLU function, you will see that it has a slope of one for positive x and 0 everywhere else. So taking the gradient (or slope) of the activation function (ReLU in this case) doesn't seem to make sense. The "max activation" and "min activation" values make sense for a ReLU: the minimum value will be zero and the max is whatever the maximum value is. This is straight from the definition of a ReLU. To explain the gradient values, I suspect that some Gaussian noise and a bias term of 0.1 have been added to those values. Edit: the gradient refers to the slope of the cost-weight curve shown below. The y-axis is the loss value or the calculated error using the weight values w on the x-axis.
Image source https://i.ytimg.com/vi/b4Vyma9wPHo/maxresdefault.jpg
Confusion Point 4: See above.
Confusion point 1: Looking at the code, it seems that in the case of the input layer the "Activations" visualisation is the coloured image in the first figure. The second figure does not really make any sense, because the code is trying to display some gradient values but it's not clear where they come from.
// HACK to draw in color in input layer
if(i===0) {
  draw_activations_COLOR(activations_div, L.out_act, scale);
  draw_activations_COLOR(activations_div, L.out_act, scale, true); // the extra flag presumably switches to drawing the gradients
}
Confusion points 2, 3 & 4:
Activations: It is the output of the layer.
Activation Gradients: This name is confusing, but it is basically the gradient of the loss with respect to the input of the current layer l. This is useful in case you want to debug the autodiff algorithm.
Weights: This is only printed if the layer is a convolution. It's basically the different filters of the convolution.
Weight Gradients: It is the gradient of the loss with respect to the weights of the current layer l.
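As a minimal numeric sketch of what those four terms correspond to (this is plain MATLAB with arbitrary numbers, not the demo's ConvNetJS code), consider a single fully connected layer followed by a ReLU:

x = [1; -2; 3];            % input to layer l
W = [0.1 0.2 -0.3;         % Weights of layer l
     0.4 -0.5 0.6];
z = W * x;                 % pre-activation
a = max(z, 0);             % Activations: the output of layer l (ReLU)
dLda = [1; -1];            % assumed gradient of the loss w.r.t. the layer output
dLdz = dLda .* (z > 0);    % ReLU gradient: 1 where z > 0, 0 elsewhere
dLdW = dLdz * x';          % Weight Gradients: dL/dW for layer l
dLdx = W' * dLdz;          % Activation Gradients (per the definition above): dL/d(input of layer l)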
Confusion point 1
For convolutional layers, each layer is responsible for detecting features. Imagine that you want to detect a human face: the first layer will detect edges, the next layer might detect noses, and so on. Towards the last layers, more and more complex features are detected. In the first layer, what you see is what that layer detected in the image.
Confusion point 2
For the fully connected layers, I think what is shown are probably the gradients obtained during back-propagation, which is why you only see gray, black, and similar colours there.
Confusion point 3
There is nothing special about the ReLU layers. After the convolution you apply the activation function, you get another matrix, and you pass it on to the next layer. After the ReLU, you get the colours.
Confusion point 4
It is the same as above.
Please let me know if any point is unclear.

How to make pedestrians appear at AreaNode with attractors from a pedSource

I am working on an evacuation project for a floor and would like to create a distribution of pedestrians from the pedSource block. These pedestrians will already appear in an area when I run the simulation. I want to obtain a fixed number of pedestrians in one area while the rest is distributed to other areas.
I have made a collection of areas in which pedestrians will appear, using allLocations (area, area1, area2 and OfficeArea). The appearance is triggered by an event together with a delay block. The maximum number of pedestrians on the given floor is 100.
Image of block flowchart
Image of floor layout plan
This is the code I tried where pedestrians would appear in the areas:
allLocations.get(uniform_discr(0, allLocations.size()-1))
I expect a fixed number of 10 pedestrians in the office area, positioned where I set the attractors, but the actual result shows more than 10 pedestrians, and they do not appear at the set attractors.
Image of actual result
Setting an attractor as a target for pedestrians, according to the documentation, only works for the blocks pedWait and pedGoTo (I could actually only get it to work with pedWait, not with pedGoTo). Therefore you cannot initialize agents directly onto attractors using the initial location or the jumpTo() function.
You have several options as a workaround:
Extract the x,y coordinates of the attractor, and use the type point(x,y) to define the initial location or the location for the jumpTo()
Instead of using (graphical) attractors consider just defining points by code directly
Use very small individual areas instead of one large area with attractors
Use a pedWait block in your process flow and let your pedestrians 'walk' to their initial positions. Give the model a short time until everybody is on the desired location before starting your evacuation. You can also run the model in super fast mode during this initial phase, so that it will barely be visible.

How to transfer water from cylinder to tank in Dymola?

I've created a Dymola model. It has an empty tank, which is connected to the output of the sweptVolume component via a static pipe. The input to the sweptVolume is a constant force, with the help of which I would like to transport water from the hydraulic cylinder to the tank.
I've assumed the cross-sectional area of the piston. I've calculated the force that is needed to displace the water in the cylinder, assuming the pressure to be atmospheric (101.325 kPa). But somehow the water is not getting displaced, and the volume remains constant without filling the tank.
Please suggest what type of input should be given for the sweptVolume element (position, move, etc.), in case the given constant-force input is wrong.
I would like to thank you for your time and interest.
Setting up the initial conditions is only a matter of the GUI: just add "flange(s(start=1, fixed=true))" in the "Add modifiers" tab of the sweptVolume parameter dialog in Dymola. To get your model to work, just invert the sign of the force. The sign convention for the force block is indicated by the arrow, so to compress the piston and fill the tank you have to set the const value to minus something. Also check the fluid volumes, since the model will stop when the tank overflows or when the piston stroke reaches its end (a negative value of s). So you have to set up the forces and the tank and piston volumes correctly, or place some kind of stop on the mechanical part of the piston. The model can work fine even without masses added to the piston.
Hope this helps,
Marco

Biological Cell shape detection with Matlab

I have a problem with shape detection in MATLAB. I have two types of circular cell shapes: one is an erythrocyte, which differs only slightly from the other cell, a leukocyte, which is also circular. How could I distinguish them from each other with image processing?
Would a parent-child relationship be useful to detect the circle in an erythrocyte? Or are there other techniques?
There are 4 types of cell detection/segmentation: pixel-based, region-based, edge-based and contour-based segmentation. You may use one of them or a combination for your task, but relying only on the shape may be insufficient.
The main difference between an erythrocyte and a leukocyte is the existence of a nucleus. To my knowledge, nucleus staining is often applied in microscopy. If that is the case,
(i) the ratio between the green and blue channel intensities of each pixel can be used as a discriminating feature to separate the nucleus pixels from other foreground pixels;
(ii) after that, it is possible to extract the leukocyte plasma based on the hue-value similarity between the pixels from that region and the nucleus region;
(iii) contour-based methods such as active-contour methods (snakes) and level-set approaches can be used to refine the boundaries of white blood cells;
(iv) what is left after (i)-(iii) is probably the erythrocytes. If your task also includes the segmentation of erythrocytes, you may threshold them easily (or search the literature for more accurate segmentation algorithms).
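As a rough MATLAB illustration of step (i) only (the file name, the direction of the ratio test and the threshold are assumptions that depend on your staining):

RGB = im2double(imread('blood_smear.png'));   % placeholder file name
ratio = RGB(:,:,2) ./ (RGB(:,:,3) + eps);     % green/blue intensity ratio per pixel
nucleusMask = ratio < 0.8;                    % stained nuclei tend to be bluish; threshold is a guess
nucleusMask = bwareaopen(nucleusMask, 50);    % drop tiny noise blobs
imshow(nucleusMask);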
I would recommend T. Bergen et al., Segmentation of leukocytes and erythrocytes in blood smear images. My description above is included and detailed in this paper, and they applied more sophisticated strategies to improve the boundary accuracy. You may try to follow their steps and reproduce a similar result if your ultimate goal is also segmentation. Detection alone, without extraction, might be much easier.

Automatic Vehicle Plate Recognition system

I am currently working on a project to recognize the vehicle license plate on the rear side of a car. I have done the OCR as a preliminary step, but I have no idea how to detect the rectangular region of the license plate (the area of the car I am concerned with). I have read a lot of papers, but nowhere did I find useful information about recognizing the rectangular area of the license plate. I am doing my project using MATLAB. Please, can anyone help me with this?
Many Thanks
As you alluded to, there are at least two distinct phases:
Locating the number plate in the image
Recognising the license number from the image
Since number plates do not embed any location marks (as found in QR codes, for example), the complexity of recognising the number plate within the image is reduced by limiting the range of transformations on the incoming image.
The success of many ANPR systems relies on the accuracy of the position and timing of the capturing equipment to obtain an image which places the number plate within a predictable range of distortion.
Once the image is captured the location phase can be handled by using a statistical analysis to locate a "number plate" shaped region within the image, i.e. one which is of the correct proportions for the perspective. This article describes one such approach.
This paper and another one describe using the Sobel edge detector to locate vertical edges in the number plate. The reasoning is that the letters form more vertical lines compared to the background.
Another paper compares the effectiveness of some techniques (including Sobel detection and Haar wavelets) and may be a good starting point.
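As a rough MATLAB sketch of the vertical-edge idea (a crude row-band heuristic only, not the exact method of the cited papers; the file name and the 0.5 factor are assumptions):

I  = rgb2gray(imread('car_rear.jpg'));    % placeholder file name
Ev = edge(I, 'sobel', [], 'vertical');    % keep mostly vertical edges
rowDensity = sum(Ev, 2);                  % vertical-edge count per image row
candidateRows = find(rowDensity > 0.5 * max(rowDensity));   % rows dense in vertical edges
plateBand = I(min(candidateRows):max(candidateRows), :);    % horizontal band likely containing the plate
imshow(plateBand);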
I did my project on 'OCR-based Vehicle Identification'.
In general, LPR consists of three main phases: license plate extraction from the captured image, image segmentation to extract individual characters, and character recognition. All of these phases are challenging, as they are highly sensitive to weather conditions, lighting conditions, license plate placement, and other artefacts such as frames, symbols or logos placed on the licence plate. In India the license number is written either in one row or in two rows.
For an LPR system, both speed and accuracy are very important factors. In some of the literature the accuracy level is good but the speed of the system is low; for example, fuzzy logic and neural network approaches achieve good accuracy but are very time-consuming and complex. In our work we maintained a balance between time complexity and accuracy. We used an edge detection method and vertical and horizontal projections for number plate localization. The edge detection is done with the Roberts operator. Connected component analysis (CCA) with proper thresholding is used for segmentation. For character recognition we used template matching with a correlation function, and to improve the matching we used an enhanced database.
My Approach for Project
Input image from webcam/camera.
Convert image into binary.
Detect number plate area.
Segmentation.
Number identification.
Display on GUI.
My Approach for Number Plate Extraction
Take input from webcam/camera.
Convert it into gray-scale image.
Calculate threshold value.
Edge detection using the Roberts operator.
Calculate horizontal projection.
Crop the image horizontally by comparing against 1.3 times the threshold value.
Calculate vertical projection.
Crop image vertically.
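A rough MATLAB sketch of this projection-based extraction (the "threshold value" from step 3 is not fully specified above, so the mean projection value is used as a stand-in; the file name is a placeholder):

G  = rgb2gray(imread('car_rear.jpg'));        % placeholder file name
E  = edge(G, 'roberts');                      % edge detection with the Roberts operator
hp = sum(E, 2);                               % horizontal projection (edge count per row)
rows = find(hp > 1.3 * mean(hp));             % rows above 1.3x the threshold, as described
vp = sum(E(min(rows):max(rows), :), 1);       % vertical projection within that band
cols = find(vp > 1.3 * mean(vp));             % columns above 1.3x the threshold
plate = G(min(rows):max(rows), min(cols):max(cols));   % cropped plate candidate
imshow(plate);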
My Approach for Segmentation
Convert extracted image into binary image.
Find the complement of the extracted binary image.
Remove connected components whose area is less than 2% of the image area.
Calculate the number of connected components.
For each connected component, find the row and column values.
Calculate the dynamic threshold (DM).
Remove unwanted characters from the segmented characters by applying certain conditions.
Store the segmented characters' coordinates.
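A rough MATLAB sketch of these segmentation steps, assuming 'plate' is the cropped plate image from the extraction stage (the 2% rule follows the description above, and the height condition stands in for the "certain conditions"; both will need tuning):

BW = imbinarize(plate);                          % convert the extracted image to binary
BW = imcomplement(BW);                           % complement so the characters become foreground
BW = bwareaopen(BW, round(0.02 * numel(BW)));    % remove components smaller than 2% of the area
CC = bwconncomp(BW);                             % connected component analysis
stats = regionprops(CC, 'BoundingBox');          % row/column extents of each component
boxes = vertcat(stats.BoundingBox);              % one [x y w h] row per component
isChar = boxes(:,4) > 0.4 * size(BW,1);          % keep components tall enough to be characters (assumed condition)
charBoxes = boxes(isChar, :);                    % stored segmented character coordinates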
My Approach for Recognition
Initialize templates.
For each segmented character, repeat steps 3 to 7.
Resize the segmented character to the database image size, i.e. 24x42.
Find the correlation coefficient value of the segmented character with each database image and store those values in an array.
Find the index position of the maximum value in the array.
Find the letter linked to that index value.
Store that letter in an array.
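A rough MATLAB sketch of this template matching by correlation; 'charImage', 'templates' (a cell array of database images) and 'templateChars' (the corresponding characters) are placeholders, and the [height width] order of 24x42 is an assumption:

seg = imresize(charImage, [42 24]);               % resize to the database image size
scores = zeros(1, numel(templates));
for k = 1:numel(templates)
    scores(k) = corr2(double(seg), double(templates{k}));   % correlation coefficient per template
end
[~, idx] = max(scores);                           % index of the maximum value in the array
recognized = templateChars(idx);                  % letter linked to that index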
Check out OpenALPR (http://www.openalpr.com). It recognizes plate regions using OpenCV and the LBP/Haar algorithm. This allows it to recognize both light on dark and dark on light plate regions. After it recognizes the general region, it uses OpenCV to localize based on strong lines/edges in the image.
It's written in C++, so hopefully you can use it. If not, at least it's a reference.