How to load multiple OSM files into Nominatim

I need to figure out the process to load multiple OSM files into a Nominatim database. I have everything set up and can load a single file with no issues.
Basically, what I'm trying to do is load some of the Geofabrik OSM extracts for only part of the world. So I'm grabbing, say, the North America and South America OSM files, or any two extracts from their site.
For the first load I use the setup.php:
./utils/setup.php --osm-file file.osm --all --osm2pgsql-cache 4000
I'm not sure, if I have another file (file2.osm), how to load it into the database while keeping the original data.
Basically, I just want pieces of the world and I only need to load data every six months or so. I don't need daily updates etc.
I need to split the files up because a full load just takes too long, and I want to manage it better.
Can I use update.php? I'm not sure what parameters to use.
I thought about loading all the data with update and the no-index option, then maybe building the index afterwards?
I did try re-running setup.php for the second file, but it just hung for a long time.
For the second file:
./utils/setup.php --import-data --osm-file file2.osm --osm2pgsql-cache 4000
But this just hangs on "Setting up table: planet_osm_ways". (I tested with very small OSM files that should finish within minutes, but it still hangs.)
The files I'm using are all non-intersecting, so they aren't really updates. So I have a North America file and a South America file - how do I load both into Nominatim separately?
Thanks

The answer can be found at help.openstreetmap.org.
First you need to import it via the update script: ./utils/update.php --import-file <yourfile>. Then you need to trigger a re-indexing of the data: ./utils/update.php --index
But according to lonvia (one of the Nominatim developers) this will be very slow, and it is better if you merge all your files first and then import them as one large file.
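If you do go the file-by-file route, here is a minimal sketch of the two commands above applied to a second extract (the path and filename are placeholders, and it assumes the second file does not overlap the data already imported):
# hypothetical second, non-overlapping extract
./utils/update.php --import-file /srv/nominatim/src/file2.osm
# re-index once the import has finished
./utils/update.php --index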

Sample merging code, combining Andorra, Malta and Liechtenstein:
curl -L 'http://download.geofabrik.de/europe/andorra-latest.osm.pbf' --create-dirs -o /srv/nominatim/src/andorra.osm.pbf
curl -L 'http://download.geofabrik.de/europe/malta-latest.osm.pbf' --create-dirs -o /srv/nominatim/src/malta.osm.pbf
curl -L 'http://download.geofabrik.de/europe/liechtenstein-latest.osm.pbf' --create-dirs -o /srv/nominatim/src/liechtenstein.osm.pbf
osmconvert /srv/nominatim/src/andorra.osm.pbf -o=/srv/nominatim/src/andorra.o5m
osmconvert /srv/nominatim/src/malta.osm.pbf -o=/srv/nominatim/src/malta.o5m
osmconvert /srv/nominatim/src/liechtenstein.osm.pbf -o=/srv/nominatim/src/liechtenstein.o5m
osmconvert /srv/nominatim/src/andorra.o5m /srv/nominatim/src/malta.o5m /srv/nominatim/src/liechtenstein.o5m -o=/srv/nominatim/src/data.o5m
osmconvert /srv/nominatim/src/data.o5m -o=/srv/nominatim/src/data.osm.pbf;
More about osmconvert: https://wiki.openstreetmap.org/wiki/Osmconvert
Once merged, you can run:
sudo -u nominatim /srv/Nominatim/build/utils/setup.php \
  --osm-file /srv/nominatim/src/data.osm.pbf \
  --all \
  --threads ${BUILD_THREADS} \
  --osm2pgsql-cache ${OSM2PGSQL_CACHE}
# e.g. BUILD_THREADS=16 and OSM2PGSQL_CACHE=24000

Related

Getting HASH of individual files within folder uploaded to IPFS

When I upload a folder of .jpg files to IPFS, I get the HASH of that folder - which is cool.
But is each individual file in that folder also getting hashed?
And if so, how do I get the hash of each file?
I basically want to be able to upload a whole bunch of files - like 500 images - and do it all at once, or programmatically, and have the hash of each file be returned to me.
Any way to do this?
Yes! From the command line you get back the CIDs (the Content IDentifier, aka, IPFS hash) for each file added when you run ipfs add -r <path to directory>
$ ipfs add -r gifs
added QmfBAEYhJp9ZjGvv8utB3Yv8uuuxsDKjv9rurkHRsYU3ih gifs/martian-iron-man.gif
added QmRBHTH3p4W2xAzgLxvdh8VJvAmWBgchwCr9G98EprwetE gifs/needs-more-dogs.gif
added QmZbffnCcV598QxsUy7WphXCAMZJULZAzy94tuFZzbFcdK gifs/satisfied-with-your-care.gif
added QmTxnmk85ESr97j2xLNFeVZW2Kk9FquhdswofchF8iDGFg gifs/stone-of-triumph.gif
added QmcN71Qh56oSg2YXsEXuf8o6u5CrBXbyYYzgMyAkdkcxxK gifs/thanks-dog.gif
added QmTnuLaivKc1Aj8LBf2iWBHDXsmedip3zSPbQcGi6BFwTC gifs
The root CID for the directory is always the last item in the list.
You can limit the output of that command to just include the CIDs using the --quiet flag
$ ipfs add -r gifs --quiet
QmfBAEYhJp9ZjGvv8utB3Yv8uuuxsDKjv9rurkHRsYU3ih
QmRBHTH3p4W2xAzgLxvdh8VJvAmWBgchwCr9G98EprwetE
QmZbffnCcV598QxsUy7WphXCAMZJULZAzy94tuFZzbFcdK
QmTxnmk85ESr97j2xLNFeVZW2Kk9FquhdswofchF8iDGFg
QmcN71Qh56oSg2YXsEXuf8o6u5CrBXbyYYzgMyAkdkcxxK
QmTnuLaivKc1Aj8LBf2iWBHDXsmedip3zSPbQcGi6BFwTC
Or, if you know the CID for a directory, you can list out the files it contains and their individual CIDs with ipfs ls. Here I list out the contents of the gifs dir from the previous example
$ ipfs ls QmTnuLaivKc1Aj8LBf2iWBHDXsmedip3zSPbQcGi6BFwTC
QmfBAEYhJp9ZjGvv8utB3Yv8uuuxsDKjv9rurkHRsYU3ih 2252675 martian-iron-man.gif
QmRBHTH3p4W2xAzgLxvdh8VJvAmWBgchwCr9G98EprwetE 1233669 needs-more-dogs.gif
QmZbffnCcV598QxsUy7WphXCAMZJULZAzy94tuFZzbFcdK 1395067 satisfied-with-your-care.gif
QmTxnmk85ESr97j2xLNFeVZW2Kk9FquhdswofchF8iDGFg 1154617 stone-of-triumph.gif
QmcN71Qh56oSg2YXsEXuf8o6u5CrBXbyYYzgMyAkdkcxxK 2322454 thanks-dog.gif
You can do it programmatically with the core API in js-ipfs or go-ipfs. Here is an example of adding files from the local file system in Node.js using js-ipfs, from the docs for ipfs.addAll(files) - https://github.com/ipfs/js-ipfs/blob/master/docs/core-api/FILES.md#importing-files-from-the-file-system
There is a super helpful video on how adding files to IPFS works over at https://www.youtube.com/watch?v=Z5zNPwMDYGg
And a walk through of js-ipfs here https://github.com/ipfs/js-ipfs/tree/master/examples/ipfs-101

Impossible to merge osm.pbf file properly

I've recently started to work on a project with SRTM data and I've extracted pbf files using phyghtmap.
To start, I'm getting hgt files and converting them to tif using the following command: gdal_fillnodata.py data.hgt data.tif
Then I'm warping them with gdalwarp -co BIGTIFF=YES -co TILED=YES -co COMPRESS=LZW -co PREDICTOR=2 -t_srs "+proj=merc +ellps=sphere +R=6378137 +a=6378137 +units=m" -r bilinear -tr 90 90 data.tif warp-90.tif
And finally creating the pbf file with phyghtmap --max-nodes-per-tile=0 -s 10 -0 --pbf warp-90.tif
The result is a list of pbf files. They are perfectly fine when I load them into PostGIS with osm2pgsql. But I want to merge them to speed up the import.
I've tried all the major solutions:
osmium merge *.pbf -o merged.pbf
convert pbf to o5m then osmconvert64 *.o5m -o=merge.o5m then convert back to pbf
merging two by two with osmosis --read-pbf lon4.00_5.00lat44.00_45.00_local-source.pbf --read-pbf lon5.00_6.00lat44.00_45.00_local-source.osm.pbf --merge --write-pbf osmo_merge.osm.pbf
None of them worked: only a very small fraction of the data ends up in the merged result file.
Am I doing something wrong?
Note: if I load all the pbf files with --append it works, but it takes ages for a very small portion of the world.
I found the issue. I wasn't setting --start-node-id and --start-way-id in my script, so all my pbf files were using the same ID range. Now I'm assigning unique ID ranges and it works like a charm :)
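For illustration, a hedged sketch of what that looks like with the phyghtmap invocation from the question (the ID offsets and tile filenames below are placeholders; the point is simply that each run gets a disjoint range):
# give each generated tile its own node/way ID range so the resulting
# .pbf files can later be merged without ID collisions
phyghtmap --max-nodes-per-tile=0 -s 10 -0 --pbf \
  --start-node-id=20000000000 --start-way-id=10000000000 warp-90-tile1.tif
phyghtmap --max-nodes-per-tile=0 -s 10 -0 --pbf \
  --start-node-id=21000000000 --start-way-id=11000000000 warp-90-tile2.tif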

Tokenizer in moses-SMT system stuck even with 10 sentences

I was trying to make a baseline MT system. Just to check how it works, I made source (S) and target (T) language corpora of just 2000 sentences. The very first step is to prepare the data for the Machine Translation (MT) system. In this step we first have to perform tokenization, as mentioned in the Baseline SMT guide. I've used this code:
~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l en \
< ~/corpus/training/news-commentary-v8.fr-en.en \
> ~/corpus/news-commentary-v8.fr-en.tok.en
~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l fr \
< ~/corpus/training/news-commentary-v8.fr-en.fr \
> ~/corpus/news-commentary-v8.fr-en.tok.fr
(say S = French and T = English)
I checked after 2 hours and it was still running. I got curious since this was not expected. Then I tried with just ten sentences. To my surprise, it's been 30 minutes and it is still running.
Did I do anything wrong?
PS: OS = Ubuntu 14.04.5 LTS
Sony ultrabook
No dual boot.
Please follow the steps below:
git clone https://github.com/moses-smt/mosesdecoder.git
cd mosesdecoder
git clone https://github.com/moses-smt/giza-pp.git
cd giza-pp
make
cd ..
mkdir tools
cp giza-pp/GIZA++-v2/GIZA++ giza-pp/GIZA++-v2/snt2cooc.out giza-pp/mkcls-v2/mkcls tools
scripts/tokenizer/tokenizer.perl -l fr < ~/corpus/training/news-commentary-v8.fr-en.fr > ~/corpus/news-commentary-v8.fr-en.tok.fr
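As a quick sanity check on the tokenizer hang itself (a hedged sketch, not part of the official baseline guide), you can pipe just a handful of lines straight into tokenizer.perl; if even this does not finish within seconds, the problem is the environment rather than the corpus size:
# tokenize only the first 10 lines of the corpus and count the output lines
head -n 10 ~/corpus/training/news-commentary-v8.fr-en.en | \
  ~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l en > /tmp/sample.tok.en
wc -l /tmp/sample.tok.en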

Can you get the number of lines of code from a GitHub repository?

In a GitHub repository you can see “language statistics”, which displays the percentage of the project that’s written in a language. It doesn’t, however, display how many lines of code the project consists of. Often, I want to quickly get an impression of the scale and complexity of a project, and the count of lines of code can give a good first impression. 500 lines of code implies a relatively simple project, 100,000 lines of code implies a very large/complicated project.
So, is it possible to get the lines of code written in the various languages from a GitHub repository, preferably without cloning it?
The question “Count number of lines in a git repository” asks how to count the lines of code in a local Git repository, but:
You have to clone the project, which could be massive. Cloning a project like Wine, for example, takes ages.
You would count lines in files that wouldn’t necessarily be code, like i18n files.
If you count just (for example) Ruby files, you’d potentially miss massive amounts of code in other languages, like JavaScript. You’d have to know beforehand which languages the project uses. You’d also have to repeat the count for every language the project uses.
All in all, this is potentially far too time-intensive for “quickly checking the scale of a project”.
You can run something like
git ls-files | xargs wc -l
which will give you the total count.
You can also add more filters, for example to look only at the JavaScript files:
git ls-files | grep '\.js' | xargs wc -l
Or use this handy little tool: https://line-count.herokuapp.com/
A shell script, cloc-git
You can use this shell script to count the number of lines in a remote Git repository with one command:
#!/usr/bin/env bash
git clone --depth 1 "$1" temp-linecount-repo &&
printf "('temp-linecount-repo' will be deleted automatically)\n\n\n" &&
cloc temp-linecount-repo &&
rm -rf temp-linecount-repo
Installation
This script requires CLOC (“Count Lines of Code”) to be installed. cloc can probably be installed with your package manager – for example, brew install cloc with Homebrew. There is also a docker image published under mribeiro/cloc.
You can install the script by saving its code to a file cloc-git, running chmod +x cloc-git, and then moving the file to a folder in your $PATH such as /usr/local/bin.
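For example, the installation described above boils down to something like this (assuming you saved the script in the current directory and /usr/local/bin is on your $PATH):
# make the saved script executable and put it on the PATH
chmod +x cloc-git
sudo mv cloc-git /usr/local/bin/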
Usage
The script takes one argument, which is any URL that git clone will accept. Examples are https://github.com/evalEmpire/perl5i.git (HTTPS) or git@github.com:evalEmpire/perl5i.git (SSH). You can get this URL from any GitHub project page by clicking “Clone or download”.
Example output:
$ cloc-git https://github.com/evalEmpire/perl5i.git
Cloning into 'temp-linecount-repo'...
remote: Counting objects: 200, done.
remote: Compressing objects: 100% (182/182), done.
remote: Total 200 (delta 13), reused 158 (delta 9), pack-reused 0
Receiving objects: 100% (200/200), 296.52 KiB | 110.00 KiB/s, done.
Resolving deltas: 100% (13/13), done.
Checking connectivity... done.
('temp-linecount-repo' will be deleted automatically)
171 text files.
166 unique files.
17 files ignored.
http://cloc.sourceforge.net v 1.62 T=1.13 s (134.1 files/s, 9764.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Perl                           149           2795           1425           6382
JSON                             1              0              0            270
YAML                             2              0              0            198
-------------------------------------------------------------------------------
SUM:                           152           2795           1425           6850
-------------------------------------------------------------------------------
Alternatives
Run the commands manually
If you don’t want to bother saving and installing the shell script, you can run the commands manually. An example:
$ git clone --depth 1 https://github.com/evalEmpire/perl5i.git
$ cloc perl5i
$ rm -rf perl5i
Linguist
If you want the results to match GitHub’s language percentages exactly, you can try installing Linguist instead of CLOC. According to its README, you need to gem install linguist and then run linguist. I couldn’t get it to work (issue #2223).
I created an extension for the Google Chrome browser - GLOC - which works for public and private repos.
Counts the number of lines of code of a project from:
project detail page
user's repositories
organization page
search results page
trending page
explore page
If you go to the graphs/contributors page, you can see a list of all the contributors to the repo and how many lines they've added and removed.
Unless I'm missing something, subtracting the aggregate number of lines deleted from the aggregate number of lines added among all contributors should yield the total number of lines of code in the repo. (EDIT: it turns out I was missing something after all. Take a look at orbitbot's comment for details.)
UPDATE:
This data is also available in GitHub's API. So I wrote a quick script to fetch the data and do the calculation:
'use strict';
async function countGithub(repo) {
    const response = await fetch(`https://api.github.com/repos/${repo}/stats/contributors`);
    const contributors = await response.json();
    const lineCounts = contributors.map(contributor => (
        contributor.weeks.reduce((lineCount, week) => lineCount + week.a - week.d, 0)
    ));
    const lines = lineCounts.reduce((lineTotal, lineCount) => lineTotal + lineCount);
    window.alert(lines);
}
countGithub('jquery/jquery'); // or count anything you like
Just paste it in a Chrome DevTools snippet, change the repo and click run.
Disclaimer (thanks to lovasoa):
Take the results of this method with a grain of salt, because for some repos (sorich87/bootstrap-tour) it results in negative values, which might indicate there's something wrong with the data returned from GitHub's API.
UPDATE:
Looks like this method to calculate total line numbers isn't entirely reliable. Take a look at orbitbot's comment for details.
You can clone just the latest commit using git clone --depth 1 <url> and then perform your own analysis using Linguist, the same software Github uses. That's the only way I know you're going to get lines of code.
Another option is to use the API to list the languages the project uses. It doesn't give them in lines but in bytes. For example...
$ curl https://api.github.com/repos/evalEmpire/perl5i/languages
{
"Perl": 274835
}
Though take that with a grain of salt: that project includes YAML and JSON, which the web site acknowledges but the API does not.
Finally, you can use code search to ask which files match a given language. This example asks which files in perl5i are Perl. https://api.github.com/search/code?q=language:perl+repo:evalEmpire/perl5i. It will not give you lines, and you have to ask for the file size separately using the returned url for each file.
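A hedged sketch of that search call with curl is below; the Accept header is the standard GitHub API media type, and the Authorization header is an assumption you would fill in yourself (GitHub may require an authenticated token for code search):
# ask the code search API which files in evalEmpire/perl5i are Perl
curl -H "Accept: application/vnd.github+json" \
     -H "Authorization: Bearer <your-token>" \
     "https://api.github.com/search/code?q=language:perl+repo:evalEmpire/perl5i"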
Not currently possible on GitHub.com or their APIs
I have talked to customer support and confirmed that this cannot be done on github.com. They have passed the suggestion along to the GitHub team though, so hopefully it will be possible in the future. If so, I'll be sure to edit this answer.
Meanwhile, Rory O'Kane's answer is a brilliant alternative based on cloc and a shallow repo clone.
From @Tgr's comment, there is an online tool:
https://codetabs.com/count-loc/count-loc-online.html
You can use tokei:
cargo install tokei
git clone --depth 1 https://github.com/XAMPPRocky/tokei
tokei tokei/
Output:
===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 BASH                    4           48           30           10            8
 JSON                    1         1430         1430            0            0
 Shell                   1           49           38            1           10
 TOML                    2           78           65            4            9
-------------------------------------------------------------------------------
 Markdown                4         1410            0         1121          289
 |- JSON                 1           41           41            0            0
 |- Rust                 1           47           38            5            4
 |- Shell                1           19           16            0            3
 (Total)                           1517           95         1126          296
-------------------------------------------------------------------------------
 Rust                   19         3750         3123          119          508
 |- Markdown            12          358            5          302           51
 (Total)                           4108         3128          421          559
===============================================================================
 Total                  31         6765         4686         1255          824
===============================================================================
Tokei has support for badges:
Count Lines
[![](https://tokei.rs/b1/github/XAMPPRocky/tokei)](https://github.com/XAMPPRocky/tokei)
By default the badge shows the repo's LoC (lines of code); you can also tell it to show a different category by using the ?category= query string. It can be either code, blanks, files, lines, or comments.
Count Files
[![](https://tokei.rs/b1/github/XAMPPRocky/tokei?category=files)](https://github.com/XAMPPRocky/tokei)
You can use the GitHub API to get the SLOC with a function like the following:
function getSloc(repo, tries) {
    // repo is the repo's path
    if (!repo) {
        return Promise.reject(new Error("No repo provided"));
    }
    // GitHub's API may return an empty object the first time it is accessed,
    // so we try several times and then stop
    if (tries === 0) {
        return Promise.reject(new Error("Too many tries"));
    }
    let url = "https://api.github.com/repos" + repo + "/stats/code_frequency";
    return fetch(url)
        .then(x => x.json())
        .then(x => x.reduce((total, changes) => total + changes[1] + changes[2], 0))
        .catch(err => getSloc(repo, tries - 1));
}
Personally, I made a Chrome extension which shows the number of SLOC on both the GitHub project list and the project detail page. You can also set your personal access token to access private repositories and bypass the API rate limit.
You can download from here https://chrome.google.com/webstore/detail/github-sloc/fkjjjamhihnjmihibcmdnianbcbccpnn
Source code is available here https://github.com/martianyi/github-sloc
Hey all this is ridiculously easy...
Create a new branch from your first commit (see the command-line sketch below)
When you want to find out your stats, create a new PR from main
The PR will show you the number of changed lines - as you're doing a PR from the first commit all your code will be counted as new lines
And the added benefit is that if you don't approve the PR and just leave it in place, the stats (number of commits, files changed and total lines of code) will simply keep up to date as you merge changes into main. :) Enjoy.
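A hedged sketch of the first step from the command line (the branch name is made up; the PR itself is created in the GitHub UI):
# find the repository's root commit and create a baseline branch from it
FIRST_COMMIT=$(git rev-list --max-parents=0 HEAD)
git branch loc-baseline "$FIRST_COMMIT"
git push origin loc-baseline
# then open a PR on GitHub from main into loc-baseline and read its diff stats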
Firefox add-on Github SLOC
I wrote a small firefox addon that prints the number of lines of code on github project pages: Github SLOC
npm install sloc -g
git clone --depth 1 https://github.com/vuejs/vue/
sloc ".\vue\src" --format cli-table
rm -rf ".\vue\"
Instructions and Explanation
Install sloc from npm, a command line tool (Node.js needs to be installed).
npm install sloc -g
Clone shallow repository (faster download than full clone).
git clone --depth 1 https://github.com/facebook/react/
Run sloc and specify the path that should be analyzed.
sloc ".\react\src" --format cli-table
sloc supports formatting the output as a cli-table, as json or csv. Regular expressions can be used to exclude files and folders (Further information on npm).
Delete repository folder (optional)
Powershell: rm -r -force ".\react\" or on Mac/Unix: rm -rf ".\react\"
It is also possible to get details for every file with the --details option:
sloc ".\react\src" --format cli-table --details
Open a terminal and run the following:
curl -L "https://api.codetabs.com/v1/loc?github=username/reponame"
If the question is "can you quickly get NUMBER OF LINES of a github repo", the answer is no as stated by the other answers.
However, if the question is "can you quickly check the SCALE of a project", I usually gauge a project by looking at its size. Of course the size will include deltas from all active commits, but it is a good metric as the order of magnitude is quite close.
E.g.
How big is the "docker" project?
In your browser, enter api.github.com/repos/ORG_NAME/PROJECT_NAME
i.e. api.github.com/repos/docker/docker
In the response hash, you can find the size attribute:
{
...
size: 161432,
...
}
This should give you an idea of the relative scale of the project. The number seems to be in KB, but when I checked it on my computer it's actually smaller, even though the order of magnitude is consistent. (161432KB = 161MB, du -s -h docker = 65MB)
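The same lookup can also be done from a terminal; a small sketch (grep is used only to pick out the field):
# fetch the repo metadata and pull out the size field (reported in KB)
curl -sL https://api.github.com/repos/docker/docker | grep '"size"'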
Pipe the per-file line counts to sort to organize the files by line count.
git ls-files | xargs wc -l | sort -n
This is easy if you are using VS Code and you clone the project first. Just install the Lines of Code (LOC) VS Code extension and then run LineCount: Count Workspace Files from the Command Palette.
The extension shows summary statistics by file type and it also outputs result files with detailed information by each folder.
There is another online tool that counts lines of code for public and private repos without having to clone/download them - https://klock.herokuapp.com/
None of the answers here satisfied my requirements. I only wanted to use existing utilities. The following script will use basic utilities:
Git
GNU or BSD awk
GNU or BSD sed
Bash
Get total lines added to a repository (subtracts lines deleted from lines added).
#!/bin/bash
git diff --shortstat 4b825dc642cb6eb9a060e54bf8d69288fbee4904 HEAD | \
sed 's/[^0-9,]*//g' | \
awk -F, '!($2 > 0) {$2="0"};!($3 > 0) {$3="0"}; {print $2-$3}'
Get lines of code filtered by specified file types of known source code (e.g. *.py files or add more extensions, etc).
#!/bin/bash
git diff --shortstat 4b825dc642cb6eb9a060e54bf8d69288fbee4904 HEAD -- *.{py,java,js} | \
sed 's/[^0-9,]*//g' | \
awk -F, '!($2 > 0) {$2="0"};!($3 > 0) {$3="0"}; {print $2-$3}'
4b825dc642cb6eb9a060e54bf8d69288fbee4904 is the id of the "empty tree" in Git and it's always available in every repository.
Sources:
My own scripting
How to get Git diff of the first commit?
Is there a way of having git show lines added, lines changed and lines removed?
shields.io has a badge that can count up all the lines for you; for example, it can be pointed at the Raycast extensions repo.
You can use sourcegraph, an open source search engine for code. It can connect to your GitHub account, index the content, and then on the admin section you would see the number of lines of code indexed.
I made an NPM package specifically for this usage, which lets you call a CLI tool, providing the directory path and the folders/files to ignore.
It goes like this:
npm i -g @quasimodo147/countlines
to get the countlines command in your terminal
then you can do
countlines . node_modules build dist

wget files from FTP-like listings

So, a site that used to offer FTP now has an HTTP front-end and won't allow FTP connections. The site in question (for an example directory) shows a page with links to different dates. Inside each of these date directories there are many files, and I typically just need to get one file matching a clear pattern, e.g. *h17v04*.hdf. I thought this could work:
wget -I "${PLATFORM}/${PRODUCT}/${YEAR}.*" -r -l 4 \
--user-agent="Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" \
--verbose -c -np -nc -nd \
-A "*h17v04*.hdf" http://e4ftl01.cr.usgs.gov/$PLATFORM/$PRODUCT/
where PLATFORM=MOLT, PRODUCT=MOD09GA.005 and YEAR=2004, for example. This seems to start looking into all the useful dates, finds the index.html, and then just skips to the next directory, without downloading the relevant hdf file:
--2013-06-14 13:09:18-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/
Reusing existing connection to e4ftl01.cr.usgs.gov:80.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html'
[ <=> ] 174,182 134K/s in 1.3s
2013-06-14 13:09:20 (134 KB/s) - `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html' saved [174182]
Removing e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html since it should be rejected.
--2013-06-14 13:09:20-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.02/
[...]
If I omit the -A option, only the index.html file is downloaded to my system, but it appears it's not parsed and the links are not followed. I don't really know what more is required to make this work, as I can't see why it doesn't!
SOLUTION
In the end, the problem was due to an old bug in the local version of wget. However, I ended up writing my own script for downloading MODIS data from the server above. The script is pure Python, and is available from here.
Consider using pyModis instead of wget. It is a free and open-source Python-based library for working with MODIS data. It offers bulk download for user-selected time ranges, mosaicking of MODIS tiles, reprojection from Sinusoidal to other projections, and conversion from HDF to other formats. See
http://www.pymodis.org/
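A minimal getting-started sketch, assuming pyModis is published on PyPI under that name and ships the modis_download.py helper script described in its documentation:
# install the library and list the options of its bulk-download script
pip install pyModis
modis_download.py --help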