
What does "step size" actually do in DeepDream?
It seems like the number of octaves and the octave scales basically control how the image is resized in certain patterns, while the iteration count builds up the hallucinated details. But what is the "step" or "step size" value doing during the DeepDream process?

Step size in DeepDream is essentially the learning rate of the gradient ascent: at each iteration the image itself is updated in the direction of the gradient of the chosen layer activations, and the step size scales how far the pixels move per update. Larger step sizes change the image more aggressively per iteration; smaller ones make finer, more gradual changes.
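For intuition, here is a minimal sketch of a single gradient-ascent step, loosely following the TensorFlow DeepDream tutorial; the `model` function (image in, layer activations out) is an assumption, not shown here:

```python
import tensorflow as tf

def dream_step(img, model, step_size):
    # Hedged sketch: `model` is assumed to map an image tensor to the
    # activations we want to maximize.
    with tf.GradientTape() as tape:
        tape.watch(img)
        loss = tf.reduce_mean(model(img))  # DeepDream *ascends* this value
    grad = tape.gradient(loss, img)
    # Normalizing the gradient makes step_size the effective pixel-space
    # learning rate, i.e. how far the image moves each iteration.
    grad /= tf.math.reduce_std(grad) + 1e-8
    return img + step_size * grad
```

The octave loop resizes the image and the iteration count repeats this step; step size only controls the magnitude of each individual update.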

Related

The impact of time step size on the results in OpenModelica

Hello Stack Overflow community,
I am curious to know how the time step size value impacts OpenModelica simulation results, and how to optimize the simulation period so that the simulation runs faster and produces results sooner. I would also like to know what affects the simulation time, e.g. computer performance and the complexity of the code.
If you use an explicit (fixed-step) solver such as Euler, the step size will have a major impact on the stability of the results.
If you use an implicit (usually multi-step) solver such as DASSL, the step size will not really affect performance or results, except that the values printed to the result file are interpolated to those points by the solver. If you want the simulation to run faster at the cost of accuracy, increase the solver tolerance.
https://www.openmodelica.org/doc/OpenModelicaUsersGuide/1.16/solving.html#integration-methods
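To see concretely why the step size matters for an explicit solver's stability, here is a toy forward-Euler experiment in plain Python (not OpenModelica itself). For dx/dt = -k*x, explicit Euler is only stable when the step h < 2/k:

```python
# Forward Euler on dx/dt = -100*x; the exact solution decays to zero.
# Stability requires h < 2/100 = 0.02 for this equation.
def euler(h, steps, x0=1.0):
    x = x0
    for _ in range(steps):
        x += h * (-100.0 * x)  # one explicit Euler step
    return x

print(euler(h=0.001, steps=1000))  # ~0: stable and accurate
print(euler(h=0.019, steps=1000))  # decays, but oscillates: barely stable
print(euler(h=0.030, steps=1000))  # grows enormously: unstable
```

An implicit solver like DASSL solves an equation at each step instead of extrapolating forward, which is why its step size mainly affects output interpolation rather than stability.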
Just to clarify the nomenclature: when you say 'step size' in this post, are you referring to the Interval parameter?
Moreover, I have a couple of questions:
What is the scope of 'Initial time step' and 'Maximum time step'? How do they relate to Interval and Tolerance?
What is the scope of 'Equidistant time grid' and 'Store variables at events' in the Output tab?
Thanks

Improving short text clustering performance

I am currently doing short text clustering and have implemented gsdmm from this GitHub link with a dataset of size 2675 and a vocabulary size of 1231.
However, the clustering result is not very accurate, and I think this might be because the dataset contains similar words/phrases that have different meanings in the domain I am working on.
Examples of similar phrases that have different meanings:
"business process management", "business process modelling" and "business model canvas"
"process workflow model", "process orientation" and "process innovation"
I have tried using bigrams and trigrams, but it didn't solve the problem.
Are there any other ways which I can improve the results? Are there any other algorithms that are good for clustering short text?
Did you play with the parameters of the GSDMM algorithm? My results changed dramatically when I ran it with different alpha and beta values. So if you have an idea of how many clusters to expect, you can look for an alpha/beta combination that lands in the right area.
If you don't have that information, I would suggest trying different values and comparing the results, e.g. with a small grid search like the sketch below.
Other than GSDMM, I don't know of any other short-text clustering algorithms.
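A hedged sketch of such a grid search, assuming the `MovementGroupProcess` class from the linked gsdmm repository (the class name and `fit` signature are from memory of that repo, so adjust if yours differs):

```python
from gsdmm import MovementGroupProcess  # assumed import from the linked repo

# Replace with your 2675 tokenized short texts.
docs = [["business", "process", "management"],
        ["business", "model", "canvas"]]
vocab_size = len({w for d in docs for w in d})

for alpha in (0.01, 0.1, 0.3):
    for beta in (0.01, 0.1, 0.3):
        mgp = MovementGroupProcess(K=40, alpha=alpha, beta=beta, n_iters=30)
        labels = mgp.fit(docs, vocab_size)
        n_used = len(set(labels))  # clusters that actually received documents
        print(f"alpha={alpha}, beta={beta}: {n_used} non-empty clusters")
```

Beta in particular controls how strongly documents are pulled toward clusters that share their words, which matters when near-identical phrases carry different meanings.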

Is H2O.ai's mini_batch_size really used?

The H2O documentation states:
mini_batch_size: Specify a value for the mini-batch size. (Smaller values lead to a better fit; larger values can speed up and generalize better.)
but when I run a model using the Flow UI (with mini_batch_size > 1), the log file contains:
WARN: _mini_batch_size Only mini-batch size = 1 is supported right now.
So the question: is mini_batch_size really used?
It appears to be a leftover from preparation for a DeepWater integration that never happened. E.g. https://github.com/h2oai/h2o-3/search?l=Java&p=2&q=mini_batch_size
That makes sense, because the Hogwild! algorithm that H2O's deep learning uses does away with the need for batching training data.
To sum up, I don't think it is used.
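If you want to verify this yourself, here is a minimal sketch using the h2o Python API (the dataset URL is one of H2O's public test files; swap in your own frame as needed):

```python
import h2o
from h2o.estimators import H2ODeepLearningEstimator

h2o.init()
# Any small labeled frame works; this is a public H2O test dataset.
iris = h2o.import_file(
    "https://h2o-public-test-data.s3.amazonaws.com/smalldata/iris/iris_wheader.csv")

# Ask for a mini-batch size greater than 1 and watch the job warnings/logs.
model = H2ODeepLearningEstimator(mini_batch_size=32, epochs=1)
model.train(x=iris.columns[:-1], y="class", training_frame=iris)
# Expected in the logs, per the question:
# WARN: _mini_batch_size Only mini-batch size = 1 is supported right now.
```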

Simulink "Counter Limited" block with dynamic upper limit

The Simulink Library Block "Counter Limited" (Simulink/Sources/Counter Limited) counts up from zero to a specified upper limit. It then wraps round to zero and counts back up. This happens at a defined rate (sample time). The mask parameters are "Upper Limit" and "Sample Time".
My simulation contains a discrete-time cyclic process with a variable cycle duration, i.e. the number of samples per cycle varies (sample time is constant).
Question: Does anyone know how to make the mask parameter "Upper Limit" dynamic? I would like to pass the number of samples for the current cycle to the "Counter Limited" block at the beginning of each cycle. The number of samples for the current cycle is calculated in Simulink, but I don't know how to pass it to the "Counter Limited" block correctly.
Thanks a lot for any suggestions offered!
You'll need to roll your own counter implementation, something like the block diagram originally shown with this answer (image not reproduced here): it enables the reset value (in this case 6) to be specified as a signal rather than a parameter. Note that the Unit Delay in the feedback path is needed to prevent an algebraic loop.
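As a hedged stand-in for the missing diagram, here is the same logic sketched in Python: the counter state passes through a unit delay and wraps to zero once it reaches the limit, which may change from cycle to cycle:

```python
def make_counter():
    prev = 0  # plays the role of the Unit Delay block's state
    def step(upper_limit):
        nonlocal prev
        out = prev  # the visible count is the delayed state (breaks the loop)
        prev = 0 if prev >= upper_limit else prev + 1  # wrap past the limit signal
        return out
    return step

counter = make_counter()
print([counter(6) for _ in range(9)])  # -> [0, 1, 2, 3, 4, 5, 6, 0, 1]
```

In Simulink terms: a Sum block increments the delayed count, a comparison against the limit signal drives a Switch back to zero, and a Unit Delay closes the feedback loop.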

How to change the sampling time for all of the model blocks in Simulink?

I have a model and I need to change the sample time of each block that I currently have in my Simulink model.
The problem is that I have so many blocks that changing this parameter for each one individually is cumbersome. Is there a way to change it for a group of blocks?
One more thing: what is the default sample time indicated by "-1"?
This can be done quite easily. In general it is good practice to be aware of the simulation time, simulation steps and solver you are using in Simulink simulations, as a simulation can go wrong simply because of the solver choice or the simulation step size.
To change all these parameters (including the step size, which I assume is your "sampling time"), you need to go to the Solver pane of the Model Configuration Parameters dialog.
There you will see "Max step size" and "Min step size", both set to auto by default. These exist because some solvers (such as ode45 here) use a variable step size; if you want a fixed step size, change the solver to ode1 or ode3, for example.
About that -1: you should not change each block's sample time unless you really mean to. When would you want to? Generally when you want that specific block to run at a different rate than the rest. So if your simulation runs the whole system at a sample rate of 1e-2 and one specific block only needs to run every second, then you change that block's sample rate. Otherwise leave the default of -1, which means the sample time is inherited, ultimately from the settings in the Solver pane.
So:
ALWAYS be aware of what is going on in the Solver pane.
Don't change those "-1" values unless you really mean to.