I am researching a dataset consisting of two data files:
The first contains a user ID, an artist ID, and the rating that the user gave the artist.
The second contains artist IDs and artist names.
The research question I have chosen is:
Is a given artist popular or not?
In other words, given a new artist who does not appear in the data file, we will use algorithms to classify them and determine whether they are popular or not.
For the prediction step I chose to use logistic regression.
But my problem comes earlier: I do not know how, technically, to determine which artists in the existing data should be labeled as successful and which as unsuccessful.
I thought of some methods, for example k-means with k=2 (but with this method I have a problem choosing a distance function), k-NN with k=2, etc.
I need guidance on how to cluster the existing data, and general tips for the project.
Thank you.
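For concreteness, here is a minimal sketch of one simple way to derive binary popular/unpopular labels from the ratings file before training logistic regression. The file and column names (user_artists.csv, userID, artistID, rating) are hypothetical, and the median threshold is just one possible modeling choice, not the answer's prescribed method.

import pandas as pd

# Hypothetical file and column names; adjust to the real dataset.
ratings = pd.read_csv("user_artists.csv")   # columns: userID, artistID, rating

# Aggregate per artist: how many users rated it, and the mean rating.
stats = ratings.groupby("artistID").agg(
    n_ratings=("rating", "size"),
    mean_rating=("rating", "mean"),
)

# One simple labeling rule: an artist is "popular" if more users than the
# median artist rated it. The threshold is a modeling choice, not a given.
threshold = stats["n_ratings"].median()
stats["popular"] = (stats["n_ratings"] > threshold).astype(int)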
I am learning Core Data, and for practice I decided to create a small application for recording a user's income and expenses. The Core Data tutorials all contain to-do-list examples, and I haven't found any good examples that would help me.
If something is not clear, I will definitely try to explain it again.
When I began to think about how I would implement the application, I assumed that the most convenient way would be to save all user operations and perform the calculations in the application where needed. I am keeping this abstract, since it seems to me that it has little to do with the question; if you need more detail, I can provide the complete idea.
So, I'm going to save the user model, which will have the following data:
User operations (Operation type) - all operations will be saved; each operation includes the category it was performed for, as well as the amount in currency.
User-selected categories (Category type) - the categories that will be used for expenses or income when adding an operation.
Wallets (Wallet type) - the user's wallets. Everything is simple: a name and the balance on it.
Budget units (BudgetUnit type) - the user's budgets; each contains a category and a budget for it. For example: Products - 10,000 $.
When I started building the relationships in Core Data, I ran into slightly strange behavior: the user has a relationship to the same Category model as both BudgetUnit and Operation. Something tells me it won't work that way.
I want the user's categories to be independent: the user selected them, I am going to display them on the main screen, and each operation will have its own category model.
In the picture above, the same category model is used three times.
This is roughly how I picture the data structure I would like to see: each model has its own category model, independent of the others.
I think it could be implemented using three different models with the same values, but that approach seems wrong to me.
So how do you properly implement the data model so that everything works as expected? I would be grateful for any help!
--- EDIT ---
As a solution to the problem, I could create multiple entities like Category (example below), but I don't know whether this is good practice.
I looked into several other open-source projects and found a solution to the problem.
I hope this helps someone in the future.
There is no need to save the categories per user: you can simply store the categories at the application level, adding an isSelected and an id parameter to them, so that you can change these parameters when a category is selected and immediately know which ones to display.
For budgets and operations (transactions), we only need to save the category ID in order to immediately display the correct one.
For example:
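(A minimal sketch of the idea, written as Python dataclasses purely for illustration; the real app would use Core Data entities in Swift, and the names is_selected and category_id mirror the parameters described above.)

from dataclasses import dataclass

@dataclass
class Category:
    id: int
    name: str
    is_selected: bool  # flipped when the user picks or unpicks the category

@dataclass
class Operation:
    amount: float
    category_id: int   # reference the category by ID, not by a shared object

@dataclass
class BudgetUnit:
    limit: float
    category_id: int

categories = [Category(1, "Products", True), Category(2, "Transport", False)]
visible = [c for c in categories if c.is_selected]  # what the main screen shows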
Thanks @JoakimDanielson and @Moose for helping; you gave me a different view of the subject.
I want to train a different model for each user in my dataset. Is there built-in support for that in Spark MLlib/Pipelines?
If not, what is the easiest/cleanest way to train multiple, separate models, one per user?
Unfortunately, Spark ML does not provide built-in support for the "single model - single user" concept, but you can implement custom logic for it. I see two possible ways of solving this task.
The first approach follows the algorithm below (the concrete steps are only an example; yours will differ, but the algorithm will be logically similar):
Obtain the training data for the specific user (e.g., read a CSV file from HDFS, S3, etc.).
Train a model on the Dataset built from that user-related data. For instance, suppose your dataset has two columns, a specific criterion X and the user's productivity Y, and the latter varies across user groups; you would then train a model, for instance with LinearRegression, to predict whether the user can finish the work on time.
Then save the trained model to disk, keyed by the user's ID, group, etc., and load it on demand. A sketch of this per-user loop follows below.
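Here is a minimal PySpark sketch of the first approach, assuming a hypothetical input file with columns user_id, x, and y; the path and column names are illustrative only.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.getOrCreate()

# Hypothetical input: one row per observation, columns user_id, x, y.
df = spark.read.csv("hdfs:///training/data.csv", header=True, inferSchema=True)

assembler = VectorAssembler(inputCols=["x"], outputCol="features")

# Train and persist one model per user. The loop runs on the driver and
# launches one small job per user, which is fine for a modest user count.
for row in df.select("user_id").distinct().collect():
    uid = row["user_id"]
    user_df = assembler.transform(df.where(df.user_id == uid)).select("features", "y")
    model = LinearRegression(featuresCol="features", labelCol="y").fit(user_df)
    model.write().overwrite().save(f"models/user_{uid}")  # keyed by user id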
The second approach is to train a single model that is applicable to every user: choose the algorithm and its inputs so that they do not depend on the user's group; in other words, generalize the training so that one model covers all user groups. In that case the "single model --> single user" separation no longer applies. If the second variant is too complicated to implement on your dataset, follow the first approach.
I have gone through numerous articles trying to understand what my first step should be to incorporate associative analysis (perhaps market basket analysis) into my system. They all go deep into the implementation of the algorithms, but none of them talk about how to store the data in the first place.
I would really appreciate it if someone could give me some starting pointers or article links to begin with.
The first thing I want to implement is to track user clicks and provide suggestions based on tracked data.
E.g., a user clicked on link A and subsequently on link B and link C. I can track this activity together with some associated metadata (user, user organization, user role, etc.).
I do not want it to be limited to links. In the future, I want to add a number of similar use cases to the system and make it smart. E.g., if a user sets specific values for fields A and B, most likely he/she will set value <bla> for field C.
My system may generate several thousand such data points a day (e.g., user clicks, field selections, etc.).
Below are my questions:
How should I store my data? SQL or NoSQL? (I briefly looked into MongoDB and it looked promising.)
What tool should I use to perform the associative analysis? Are there any open source tools I can use?
It depends. Is your data suitable for a NoSQL database? To answer that question, it's better to read about the CAP theorem and its case studies: https://en.wikipedia.org/wiki/CAP_theorem or http://robertgreiner.com/2014/06/cap-theorem-explained/. Sometimes you want consistency (depending on your data) and availability, in which case it's better to use a relational database like MySQL. (Try to read case studies and analyze your data to pick the best tool.)
There is a large number of open-source libraries, but in my opinion it's better to first read up on the concepts and algorithms. Try searching for the Apriori, ECLAT, and FP-Growth algorithms and understand how they work; then you can pick a tool or write the code yourself (a small counting sketch follows the list below). Some useful tools, depending on your programming language:
Python: https://github.com/asaini/Apriori, https://github.com/enaeseth/python-fp-growth, https://github.com/enaeseth/python-fp-growth/blob/master/fp_growth.py
PHP: https://github.com/sigidhanafi/fp-growth-php
JAVA: https://github.com/goodinges/FP-Growth-Java, http://www.philippe-fournier-viger.com/spmf/
Also you can use Spark: https://spark.apache.org/docs/1.1.1/mllib-guide.html
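To make the counting idea behind these algorithms concrete, here is a minimal pure-Python sketch of pair-wise co-occurrence over click sessions; the sessions and link names are made up, and real workloads would use one of the Apriori/FP-Growth libraries listed above.

from itertools import combinations
from collections import Counter

# Hypothetical click sessions: each session is the set of links one user
# clicked. This is the "basket" in market-basket terms.
sessions = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "C"},
    {"B", "C"},
]

# Count item pairs across sessions (the core counting step behind Apriori).
pair_counts = Counter()
for s in sessions:
    for pair in combinations(sorted(s), 2):
        pair_counts[pair] += 1

# support(A, B) = fraction of sessions containing both items;
# confidence(A => B) = support(A, B) / support(A).
n = len(sessions)
item_counts = Counter(item for s in sessions for item in s)
for (a, b), c in pair_counts.items():
    print(f"{a} => {b}: support={c / n:.2f}, confidence={c / item_counts[a]:.2f}")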
I am working on an app where I use Core Data.
I have already tried to do it with one entity, but that didn't work.
Now I have about twenty entities, and my question is: is there a limit to the number of entities, or a recommended number?
Is there a better way to store that amount of data?
UPDATE:
What I am storing are grades from school, not A, B, C, D, E, F but a number from 1 to 10. And each grade has its own weight (the number of times the grade counts); for example, some grades count twice because they are more important.
So I first thought of having an array with a string for the name of the subject and then two arrays: one storing the grades, the other the corresponding weights.
Like this:
var subjects: [String,[Int],[Int]]
but this isn't possible, and I don't even know how I would put it into Core Data and get it back properly.
Because I couldn't figure it out, I thought of just making an entity for each subject, but there are a lot of them; hence this question.
There's no limit to the number of entities, but it's possible to go overboard and create more than you actually need. The recommended number is "as many as you need and no more", which obviously will vary a great deal depending on the nature of the data and how the app uses it. Whether there's a better way than your current approach is totally dependent on the fine details of exactly what you're doing, and so is impossible to answer without a far more detailed question.
You could set up a Subject entity that has one-to-many relationships to ordered sets of Grade and Weight, like so:
However, each grade apparently has a corresponding weight, so it would be more accurate to store each grade's weight in the Grade entity:
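(A minimal sketch of that second layout, written as Python dataclasses purely for illustration; the actual model would be Core Data entities, and the names are illustrative.)

from dataclasses import dataclass, field
from typing import List

@dataclass
class Grade:
    value: int    # 1..10
    weight: int   # how many times this grade counts

@dataclass
class Subject:
    name: str
    grades: List[Grade] = field(default_factory=list)  # one-to-many

math = Subject("Math", [Grade(8, 2), Grade(6, 1)])
# Weighted average: each grade contributes in proportion to its weight.
avg = sum(g.value * g.weight for g in math.grades) / sum(g.weight for g in math.grades)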
This still may not represent your real-world model.
If your subject is something general, like math or English, you could have more than one subject per grade (e.g., algebra, geometry, trigonometry), or more than one level per subject (e.g., algebra 1, algebra 2), which may or may not have a different grade.
If your subject is very specific, your data may end up spread across unique one-to-one relationships, instead of one-to-many relationships.
You would also need to consider whether you can use ordered or unordered relationships, or whether an attribute exists that you can use to sort an entity.
You should consider these different facets of what you're trying to model (as well as the specific fetches you'd want to perform), before you try to design or implement the model, to allow you to efficiently represent this particular object graph.
For example, I have a master file like:
userid itemid rating
1 2 5
another user file containing user-related metadata (there could be many metadata fields):
userid age
1 5
2 8
and an item file containing item-related metadata (again, there could be many fields):
itemid item_category item_geo
1 5 india
When recommending an item to a user, I want to include this meta information.
I want to know whether matrix factorization will be helpful here, and which open-source Python module has this kind of implementation available.
To include additional information in a recommender system, e.g. about an item, a user, or a recommendation event, you can apply one of the existing methods for Context-Aware Recommender Systems (CARS).
Try starting with Factorization Machines. You can convert your data to svmlight format and go with the reference implementation, libfm. It has the best documentation, and there is a similar example there showing how to prepare your data. If it has to be Python, there are also many existing implementations, e.g. fastFM, or, simpler, pyFM. If you want to try Julia, maybe you can use my code in FactorizationMachines.jl as a base :-)
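A minimal sketch of that data-preparation step, assuming scikit-learn and pandas are available: one-hot encode the categorical fields (the IDs and geo), keep the numeric metadata as-is, and write the result in svmlight format for libfm. The second item row is made up to complete the example; the column names follow the files above.

import pandas as pd
from sklearn.feature_extraction import DictVectorizer
from sklearn.datasets import dump_svmlight_file

ratings = pd.DataFrame({"userid": [1], "itemid": [2], "rating": [5]})
users = pd.DataFrame({"userid": [1, 2], "age": [5, 8]})
items = pd.DataFrame({"itemid": [1, 2], "item_category": [5, 3],
                      "item_geo": ["india", "usa"]})  # second row is made up

df = ratings.merge(users, on="userid").merge(items, on="itemid")
y = df.pop("rating")

# Cast the IDs to strings so DictVectorizer one-hot encodes them; numeric
# fields (age, item_category) pass through unchanged.
rows = df.astype({"userid": str, "itemid": str}).to_dict("records")
X = DictVectorizer().fit_transform(rows)
dump_svmlight_file(X, y, "train.libfm")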
Once you are comfortable with standard FM, check out LightFM and Field-Aware FM.
I think FMs are among the most successful and easiest-to-use models for CARS. Please give some feedback on how it works for you. If you need a different approach, I will be happy to help.