I was wondering whether, in a federated learning approach, I need to split the local dataset into a number of batches equal to the number of communication rounds, or whether I should instead update locally on the whole local dataset in each round.
Building a federated learning model
It depends on what you want to do. Federated learning is not a fixed method but a flexible approach that changes from one solution and architecture to another. I will try to make this clear with a couple of examples.
In the Google keyboard, for example, data is collected in real time, so in each round there is new data; in that case they are probably using the whole local dataset for the update.
In another use case you may have a huge local dataset that takes ages to retrain on locally; in that case you can train on a subset in each round to reduce the computation power and time needed to retrain the model, as sketched below.
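For illustration, here is a minimal, framework-agnostic sketch of one client's update per round; the plain linear-regression model and the helper names are assumptions just to keep it self-contained:

import numpy as np

def local_update(global_w, x, y, lr=0.01, subset_size=None, epochs=1):
    """One client's contribution in a single communication round.

    If subset_size is given, a fresh random subset is sampled this round
    (cheaper local training); otherwise the whole local dataset is used.
    The model is plain linear regression, only to keep the sketch runnable.
    """
    w = global_w.copy()
    if subset_size is not None and subset_size < len(x):
        idx = np.random.choice(len(x), subset_size, replace=False)
        x, y = x[idx], y[idx]
    for _ in range(epochs):
        grad = 2.0 * x.T @ (x @ w - y) / len(x)  # MSE gradient
        w -= lr * grad
    return w

def aggregate(client_weights):
    # Server side of one round (FedAvg-style): average the clients' weights.
    return np.mean(client_weights, axis=0)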
Finally, federated learning still has a lot of challenges. Use it only when it is really necessary; otherwise just adopt the normal centralized approach to train your model :)
I have a generic question about best practices for using Neptune DB as a network database and its ability to scale up for complex computation. I want to develop a user recommendation system where incoming users on the platform are shown other users they are likely to follow, in order to grow the network.
For implementing a simple technique like triadic closure, should I run Gremlin queries on the network DB (AWS Neptune in my case) to generate the recommendations? I believe in that case I would have to write Python scripts that parallelise queries across multiple nodes and generate recommendations for each node at scale.
Or is it more common practice to store the network data (nodes, edges and their properties) in a relational database, then run SQL queries to load the network data into Python and use packages like NetworkX on top of that? In this case I wouldn't have to worry about batch computation, since a relational database like Redshift would take care of it; however, I would be writing Python logic to implement techniques such as triadic closure.
Additionally, in the future I may want to use more complex graph computation techniques such as graph clustering, partitioning, and calculating different kinds of centralities. Are all/any of these possible within the Neptune + Gremlin framework?
With the above context, these are the questions I am seeking answers to:
What is the tech stack commonly used by a data science team working with graph data to build solutions such as user recommendations? By data science tech stack I mean the technologies that help query, analyse, visualise, compute and serve.
Can Neptune + Gremlin replace Python packages such as NetworkX for network analysis and centrality measurement?
Is Neptune DB ideal only as a data store, or can it also support complex network analysis and recommendation serving?
Any insight/resources on this would be really helpful!
It is definitely possible to do triadic closure in Gremlin (a rough sketch follows below). I have also seen data scientists use NetworkX and Gremlin together by running the gremlin-python client in a Jupyter notebook. As this question is quite specific to Amazon Neptune, you may want to post to the Neptune support forum at [1]. There are also some useful Gremlin recipes at [2].
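For example, a rough gremlin-python sketch of a triadic-closure ("people your follows also follow") query; the endpoint, the 'follows' edge label and the vertex id are hypothetical, so adapt them to your own schema:

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.traversal import P, T

# Hypothetical Neptune endpoint; replace with your cluster's address.
conn = DriverRemoteConnection('wss://my-neptune-endpoint:8182/gremlin', 'g')
g = Graph().traversal().withRemote(conn)

me = 'user-123'  # hypothetical vertex id
counts = (g.V(me).as_('me')
           .out('follows').aggregate('direct')   # users I already follow
           .out('follows')                        # who they follow (friends of friends)
           .where(P.neq('me'))                    # exclude myself
           .where(P.without('direct'))            # exclude users I already follow
           .groupCount().by(T.id)                 # candidate id -> shared-connection count
           .next())

# Rank candidates by how many of my follows also follow them.
top10 = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:10]
conn.close()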
If you post to the support forum I am sure someone will respond.
[1] https://forums.aws.amazon.com/forum.jspa?forumID=253&start=0
[2] http://tinkerpop.apache.org/docs/current/recipes/
I wanted to check the source code of TensorFlow's distributed training feature and its overall structure (worker/PS relations, etc.). However, I am lost in TensorFlow's repository. Can someone guide me through the repository and point me to the source code I am looking for?
Unfortunately, not all TensorFlow code (especially the part related to distributed computation) is open source. To quote Aurélien Géron from Hands-On Machine Learning with Scikit-Learn and TensorFlow:
The TensorFlow whitepaper presents a friendly dynamic placer algorithm that auto-magically distributes operations across all available devices, taking into account things like the measured computation time in previous runs of the graph, estimations of the size of the input and output tensors to each operation, the amount of RAM available in each device, communication delay when transferring data in and out of devices, hints and constraints from the user, and more. Unfortunately, this sophisticated algorithm is internal to Google; it was not released in the open source version of TensorFlow.
But here are the main entry points of TF distributed in the public repo:
Cluster in tensorflow/python/grappler/cluster.py
Server and ClusterSpec in tensorflow/python/training/server_lib.py
worker_service.proto in tensorflow/core/protobuf/worker_service.proto
To dive deeper you'll need to read the native C++ code in the tensorflow/core/distributed_runtime package, e.g., the gRPC server implementation.
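To see how those public pieces fit together, here is a minimal sketch using the TF 1.x API (the hostnames and the toy model are hypothetical):

import tensorflow as tf

# Describe the cluster: one parameter server and two workers (hypothetical hosts).
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process starts a Server for its own task; the underlying gRPC service
# is the C++ code in tensorflow/core/distributed_runtime.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# replica_device_setter places variables on the ps job and ops on the worker.
with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    w = tf.Variable(tf.zeros([10]), name="weights")
    loss = tf.reduce_sum(tf.square(w))
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)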
I'm using open source Tensorflow implementations of research papers, for example DCGAN-tensorflow. Most of the libraries I'm using are configured to train the model locally, but I want to use Google Cloud ML to train the model since I don't have a GPU on my laptop. I'm finding it difficult to change the code to support GCS buckets. At the moment, I'm saving my logs and models to /tmp and then running a 'gsutil' command to copy the directory to gs://my-bucket at the end of training (example here). If I try saving the model directly to gs://my-bucket it never shows up.
As for training data, one of the TensorFlow samples copies data from GCS to /tmp for training (example here), but this only works when the dataset is small. I want to use CelebA, and it is too large to copy to /tmp on every run. Is there any documentation or guide on how to update code that trains locally so that it runs on Google Cloud ML?
The implementations are running various versions of TensorFlow, mainly 0.11 and 0.12.
There is currently no definitive guide. The basic idea would be to replace all occurrences of native Python file operations with equivalents in the file_io module, most notably:
open() -> file_io.FileIO()
os.path.exists() -> file_io.file_exists()
glob.glob() -> file_io.get_matching_files()
These functions will work locally and on GCS (as well as on any registered file system). Note, however, that there are some slight differences between file_io and the standard file operations (e.g., a different set of 'modes' is supported).
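As a rough sketch of those substitutions, assuming file_io from tensorflow.python.lib.io (the bucket and paths are hypothetical):

from tensorflow.python.lib.io import file_io

checkpoint_dir = "gs://my-bucket/checkpoints"  # works the same with a local path

if not file_io.file_exists(checkpoint_dir):
    file_io.recursive_create_dir(checkpoint_dir)

# file_io.FileIO replaces open(); note the slightly different mode strings.
with file_io.FileIO(checkpoint_dir + "/run_config.txt", mode="w") as f:
    f.write("learning_rate=0.0002\n")

# file_io.get_matching_files replaces glob.glob.
event_files = file_io.get_matching_files("gs://my-bucket/logs/events*")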
Fortunately, checkpoint and summary writing work out of the box; just be sure to pass a GCS path to tf.train.Saver.save and tf.summary.FileWriter.
In the sample you sent, that looks potentially painful. Consider monkey-patching the Python functions to map to the TensorFlow equivalents when the program starts, so you only have to do it once (demonstrated here).
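A rough illustration of that monkey-patching idea (not the linked sample, just a sketch; adapt it to the functions your code actually calls):

import glob
import os
import builtins  # __builtin__ on Python 2
from tensorflow.python.lib.io import file_io

def _gcs_open(name, mode="r", *args, **kwargs):
    # file_io.FileIO understands both local paths and gs:// URIs.
    return file_io.FileIO(name, mode)

# Remap the standard calls once at startup so third-party code picks them up.
builtins.open = _gcs_open
os.path.exists = file_io.file_exists
glob.glob = file_io.get_matching_files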
As a side note, all of the samples on this page show reading files from GCS.
I am architecting a social network that incorporates various features, many of them powered by data-intensive workloads such as machine learning: e.g. recommender systems, search engines and time-series sequence matchers.
Given that I currently have fewer than five users but foresee significant growth, what metrics should I use to decide between:
Spark (with/without HBase over Hadoop)
MongoDB or Postgres
I am looking at Postgres as a means of reducing porting pressure between it and Spark (using a SQL abstraction layer that works on both). Spark seems quite interesting; I can imagine it answering various ML, SQL and graph questions speedily. MongoDB is what I usually use, but I've found its scaling and map-reduce features quite limiting.
I think you are heading in the right direction by looking for a software stack/architecture that can:
handle different types of load: batch, real-time computing, etc.
scale in size and speed along with business growth
be a live software stack that is well maintained and supported
have common library support for domain-specific computing such as machine learning
On those merits, Hadoop + Spark can give you the edge you need. Hadoop is by now relatively mature at handling large-scale data in batch. It provides reliable, scalable storage (HDFS) and computation (MapReduce/YARN). Adding Spark lets you combine that storage (HDFS) with the near-real-time computing performance that Spark brings.
In terms of development, both systems natively support Java/Scala. Advice on libraries and performance tuning is abundant here on Stack Overflow and elsewhere. There are at least a few machine learning libraries (Mahout, MLlib) that work with Hadoop and Spark.
For deployment, AWS and other cloud providers offer hosted solutions for Hadoop/Spark, so that is not an issue either.
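As a taste of the MLlib support mentioned above, a hedged PySpark sketch of a collaborative-filtering recommender (assumes Spark 2.2+; the input path and column names are hypothetical):

from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("recommender").getOrCreate()

# Hypothetical interactions table with columns userId, itemId, rating.
ratings = spark.read.parquet("hdfs:///data/interactions")

als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          coldStartStrategy="drop")
model = als.fit(ratings)

# Top-10 recommendations per user.
model.recommendForAllUsers(10).show()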
I think you should separate data storage from data processing. In particular, "Spark or MongoDB?" is not the right question to ask; rather ask "Spark or Hadoop or Storm?" and, separately, "MongoDB or Postgres or HDFS?"
In any case, I would refrain from having the database do processing.
I have to admit that I'm a little biased, but if you want to learn something new, have serious spare time, are willing to read a lot, and have the resources (in terms of infrastructure), go for HBase*; you won't regret it. A whole new universe of possibilities and interesting features opens up when you can have billions of atomic counters in real time.
*Alongside Hadoop, Hive, Spark...
In my opinion, it depends more on your requirements and the data volume you will have than on the number of users (which is also a requirement). Hadoop (i.e. the ecosystem: Hive/Impala, HBase, MapReduce, Spark, etc.) works fine with large amounts of data (GB/TB per day) and scales horizontally very well.
In the big data environments I have worked with, I have always used Hadoop HDFS to store the raw data and leveraged the distributed file system to analyse the data with Apache Spark. The results were stored in a database system like MongoDB to get low-latency queries or fast aggregates with many concurrent users. Then we used Impala for on-demand analytics. The main challenge when using so many technologies is scaling the infrastructure and the resources given to each one appropriately. For example, Spark and Impala consume a lot of memory (they are in-memory engines), so it's a bad idea to put a MongoDB instance on the same machine.
I would also suggest a graph database, since you are building a social network architecture, but I don't have any experience with those...
Are you looking to stay purely open source? If you are going to go enterprise at some point, a lot of the enterprise distributions of Hadoop include Spark analytics bundled in.
I have a bias, but there is also the DataStax Enterprise product, which bundles Cassandra, Hadoop, Spark, Apache Solr and other components together. It is in use at many of the major internet companies, specifically for the applications you mention. http://www.datastax.com/what-we-offer/products-services/datastax-enterprise
You want to think about how you will be hosting this as well.
If you are staying in the cloud, you will not have to choose: depending on your cloud environment (with AWS, for example) you can use Spark for continuous batch processing, Hadoop MapReduce for long-timeline analytics (analysing data accumulated over a long period), and so on, because storage is decoupled from collection and processing. Put data in S3, then process it later with whatever engine you need.
If you will be hosting the hardware, building a Hadoop cluster gives you the ability to mix hardware (heterogeneous hardware is supported by the framework), a robust and flexible storage platform, and a mix of tools for analysis, including HBase and Hive, plus ports of most of the other things you've mentioned, such as Spark on Hadoop (not a port, actually Spark's original design). It is probably the most versatile platform, and it can be deployed and expanded cheaply, since the hardware does not need to be the same for every node.
If you are self-hosting, going with other cluster options will force hardware requirements on you that may be difficult to scale with later.
We use Spark + HBase + Apache Phoenix + Kafka + Elasticsearch, and scaling has been easy so far.
*Phoenix is a JDBC driver for HBase; it lets you use java.sql with HBase, Spark (via JdbcRDD) and Elasticsearch (via the JDBC river), which really simplifies integration.
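For a flavour of that integration, a hedged PySpark sketch reading an HBase table through Phoenix's JDBC layer (assumes the Phoenix client jar is on the Spark classpath; the ZooKeeper host and table name are hypothetical):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("phoenix-read").getOrCreate()

# Read an HBase table exposed through Phoenix as a DataFrame via plain JDBC.
events = (spark.read.format("jdbc")
          .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
          .option("url", "jdbc:phoenix:zookeeper-host:2181")
          .option("dbtable", "USER_EVENTS")
          .load())

events.groupBy("USER_ID").count().show()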