I am aiming to build a global/general forecasting model (I don't know the proper terminology) using deep learning. The idea is to train a single model on several time series, so that I can obtain forecasts both for the series used during training and for new series not seen during training.
I think this is a mixed classification-regression problem. If I am correct and this is a mixed problem, how should I approach it?
Should I work only with the variable(s) I aim to forecast, using lagged observations? I mean passing only the target variable(s) through the model in question (I think it could be a CNN-LSTM or an LSTM-GC model). If this is an option, I would like an explanation, or a pointer to one, of how such a model works and how the training data should be structured. To illustrate what I mean by lagged observations, see the sketch below.
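A sketch of the data structuring I have in mind (the window length is arbitrary, and list_of_series is a placeholder for a collection of 1-D arrays):

import numpy as np

# turn one series into (lagged window -> next value) training pairs
def make_windows(series, window=12):
    X, y = [], []
    for t in range(len(series) - window):
        X.append(series[t:t + window])  # lagged observations
        y.append(series[t + window])    # value to forecast
    return np.array(X), np.array(y)

# a global model pools windows from many series
pairs = [make_windows(s) for s in list_of_series]  # list_of_series: placeholder
X_train = np.concatenate([p[0] for p in pairs])
y_train = np.concatenate([p[1] for p in pairs])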
Or should I also pass the categorical variables to the model? In a "product sales" setting, variables like region and product type would be the categorical ones, and sales or the number of items sold would be the variables to forecast. I have the same doubt here as above. (In addition, I think this model could be easier to interpret.)
It would be a great help if anyone could point me to, or explain, a methodology for solving this kind of problem using deep learning: the most typical ANN architectures for it, how the data should be prepared, and how the model should be trained.
Thanks in advance for all the help.
Original question posted on Artificial Intelligence Stack Exchange.
Related
I am new to neural networks and I don't know exactly what to search on Google for a solution; if you could kindly let me know what I am looking for:
I am working on a project that will have many contributors over time. Each contributor writes a new line to an Excel file and then runs the code to train on the dataset.
What I want to ask is: is there a way to save a checkpoint so that each time, the code doesn't have to train on the whole dataset and can just continue training on the new entries instead of starting from zero?
Please let me know what exactly I should google.
Kind regards
This is, as you guessed, extremely common and usually referred to as "fine-tuning". In your case, since the dataset barely changes between training runs, you can expect the model to be very similar, so you could initialize your weights to the weights of the previous best model and retrain for only a few epochs, likely with a small learning rate.
People usually do fine-tuning starting from a network trained on an entirely different dataset, so it's likely that you will find that use-case rather than yours, but it will work even better if you keep a very similar dataset.
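A minimal sketch of that retraining loop in Keras (the file name, the loss, and the saving convention are assumptions, not details from the question):

import keras

model = keras.models.load_model('best_model.keras')  # weights from the previous run

# a small learning rate so the new rows only nudge the existing weights
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss='mse')

# X_new, y_new: placeholders for the newly added rows (or the full, slightly larger dataset)
model.fit(X_new, y_new, epochs=5)

model.save('best_model.keras')  # checkpoint for the next contributor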
"Continual learning without forgetting"
Is it possible to use TensorFlow or some similar library to make a model that you can efficiently train and use at the same time?
An example/use case for this would be a chat bot that you give feedback to, somewhat like how pets learn (i.e. replicating what they just did for a reward), or being able to add new entries or new responses it can use.
I think what you are asking is whether a model can be trained continuously without having to retrain it from scratch each time new labelled data comes in.
The answer to that is: online models.
There are models that can be trained continuously on new data without worrying about retraining them from scratch. As per the Wikipedia definition:
Online machine learning is a method of machine learning in which data becomes available in sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once.
Some examples of such algorithms (most of them available in scikit-learn; see the sketch after this list) are:
BernoulliNB
GaussianNB
MiniBatchKMeans
MultinomialNB
PassiveAggressiveClassifier
PassiveAggressiveRegressor
Perceptron
SGDClassifier
SGDRegressor
DNNs (any neural network trained with mini-batch gradient descent can be updated incrementally in the same spirit)
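The scikit-learn estimators above all support incremental updates through their partial_fit method. A minimal sketch with SGDClassifier (stream_of_batches is a made-up placeholder for the incoming data):

import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()
classes = np.array([0, 1])  # every class must be declared on the first call

for X_batch, y_batch in stream_of_batches():  # placeholder data source
    clf.partial_fit(X_batch, y_batch, classes=classes)
    # the model stays usable for prediction between updates:
    # clf.predict(X_batch)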
I am a total beginner to ML and neural networks. I am currently working on a project where I have a lot of pictures stored in a MongoDB database. Each of those pictures has a number from 0 to 1 associated with it, for example "picture 1" → 0.71.
I want to train my model on this database. The main goal of the project is that, once the model is finished and trained, given an image it will be able to return (predict) a number from 0 to 1. After doing some research and asking a few people, I figured out some libraries that would be useful for the project: TensorFlow and Keras. Some people told me that it is impossible, but I'm not sure, so I came here to ask.
So my questions are: Is it possible? If so, how can I implement it? Are there any specific tools you recommend? If you suggest a particular approach for my project, do I need to export my MongoDB database in a certain format? And since I am a beginner, are there any tutorials you think could help?
I'm sorry if this question is a bit too general, if there are any misunderstandings please comment and I will try to answer.
Thanks in advance!
What you want to do is totally feasible. This kind of project is called regression, and since you are working with image data, the best-suited models are convolutional neural networks (CNNs); you'll need some understanding of them if you want to build your own model. I've done a project where I had to predict the number of bacterial colonies from an image, much like your problem, except that I had no bounds on the predicted values.
What is a CNN? Here is a link.
Basically a CNN will understand the features in the images and will use those features to predict a value.
You won't need to create your own model; most people just use a well-designed one from the scientific literature.
Go for Keras: it's the easiest framework out there and works like a charm. Here is how to implement VGG16 (an architecture that is probably the best fit for your problem): link
You should follow this tutorial to get going with Keras.
Last hint: don't use the same last layer as the one in the VGG16 implementation; use a Dense layer with one neuron and a sigmoid/linear/leaky ReLU activation.
i.e.:
# replace the original 1000-way softmax classification head:
# model.add(Dense(1000, activation='softmax'))
# with a single-neuron regression head:
model.add(Dense(1, activation='sigmoid'))
This means: predict one number (sigmoid will bound it between 0 and 1, but maybe leaky ReLU or linear is better).
Also, I guess you could use MongoDB to read the images as arrays, but I would just put the images on a folder.
Edit: when compiling the model, use mean squared error, as in
adam = keras.optimizers.Adam(learning_rate=1e-4)  # 'lr' is the deprecated spelling of this argument
model.compile(optimizer=adam, loss='mse')
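Putting the pieces together, a minimal sketch of the whole setup (the frozen backbone, the 224x224 input size, and the small dense head are illustrative choices, not requirements):

from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Sequential
from keras.optimizers import Adam

# pretrained VGG16 backbone without its 1000-class softmax head
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse the pretrained features, train only the head

model = Sequential([
    base,
    Flatten(),
    Dense(256, activation='relu'),
    Dense(1, activation='sigmoid'),  # one output, bounded to [0, 1]
])

model.compile(optimizer=Adam(learning_rate=1e-4), loss='mse')
# model.fit(X_train, y_train, epochs=10, validation_split=0.1)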
Here you have the "hello world" program of neural networks: digit classification. You can start studying it because I think you will end up with a similar architecture for your NN. What you should focus on is the output of your model, because in that example they perform classification over 10 classes (digits from 0 to 9), whereas you are trying to predict a real number. You could try a single neuron with a sigmoid or linear activation at the end of your model.
I have a problem that I have tried to solve using Support Vector Machines (SVMs): discriminating 1-D series of data between two classes. One of the classes has very specific characteristics and is easily distinguishable from a human perspective; the only drawback is that the other class varies a lot from sample to sample, and it looks like it is not feasible to use it as a class at all. I'm only interested in discriminating between data from the class of interest (see image below) and all other "uninteresting" data.
I then tried implementing a one-class SVM (OC-SVM), and it works okay, but not as well as I had hoped. So I started looking at alternatives and came across one-class neural networks and Generative Adversarial Networks (GANs) as a possible solution. The idea is that since the data points I want to detect have certain characteristics (see image below), an adversarial network could perform well. I am very new to the field of neural networks and deep learning, so I wanted to ask the community whether I am on to something before diving in. Feel free to suggest alternative methods as well.
PS: Unsupervised methods and clustering have not worked well on this problem because of the huge variation in the data.
Image of data of interest
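For reference, a one-class SVM of the kind described, in scikit-learn (data and parameters are placeholders):

import numpy as np
from sklearn.svm import OneClassSVM

X_interest = np.random.randn(200, 50)  # placeholder: samples of the class of interest

oc_svm = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale')
oc_svm.fit(X_interest)  # trained on the class of interest only

# +1 = resembles the class of interest, -1 = "uninteresting"/novel
labels = oc_svm.predict(np.random.randn(10, 50))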
I am currently running a multiple linear regression using MATLAB's LinearModel.fit function, and I am a bit confused about how to properly add interaction terms to the model by hand. As far as I am aware, LinearModel.fit does not standardize variables on its own, so I have been doing so manually.
So far, the way I have done it has been to
Standardize the observations for each variable
Multiply the corresponding standardized values from specific variables to create the interaction terms, and then add these new variables to the set of regression data
Run the regression
Is this the correct way to go about it? Should I also standardize the interaction term variables after calculating the 'raw' terms? Any help would be greatly appreciated!
Whether or not to standardize interaction terms probably depends on what you intend to do with the model. Standardization typically does not affect model performance so much as it allows for more straightforward interpretation, since the learned coefficients will be on similar scales. I suspect whether to do this is largely a matter of opinion. Here is a relevant stats.stackexchange post that may help.
My intuition matches the process you have described so far; for concreteness, a sketch of those steps follows.
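Those three steps in Python/NumPy (illustrative only; the question itself uses MATLAB's LinearModel.fit, and the data here is made up):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))  # two raw predictors
y = rng.standard_normal(100)       # response

# 1. standardize each predictor
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. build the interaction term from the standardized columns
interaction = (Xz[:, 0] * Xz[:, 1]).reshape(-1, 1)

# 3. regress y on [intercept, standardized predictors, interaction]
design = np.hstack([np.ones((100, 1)), Xz, interaction])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)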