Using Google's Repo Tool - android-source

Simple Question: How do I download the Android operating system source code, version 8.0.0, using the repo tool on Linux Mint?
Detailed:
I want to download the Android source code, edit some of it, and then install it onto a device. I installed a Linux operating system and downloaded/initialized repo. However, for the life of me, I cannot figure out how to use repo.
I want to use the operating system tag OPR4.170623.009, which is android-8.0.0_r16 (Oreo).
So I run the following command:
repo sync [OPR4.170623.009]
I get this result:
... A new version of repo (2.12) is available.
... You should upgrade soon:
cp /home/k/.repo/repo/repo /home/k/bin/repo
error: project [OPR4.170623.009] not found
I even tried
repo sync [<OPR4.170623.009>]
I got:
bash: OPR4.170623.009: No such file or directory
It is very strange, because the 'Downloading the Source' page (https://source.android.com/setup/build/downloading#initializing-a-repo-client) doesn't really explain how to actually download the source. It makes it seem like I should be using sync together with the 'source code tags', but it doesn't say how to put the two together. All it gives is this:
repo sync [project0 project1 ... projectn]
repo sync [/path/to/project0 ... /path/to/projectn]
It shows some examples, but those don't look anything like their tags.

The version you want to download has to be specified for repo init, not for repo sync. Also, the version is specified using the tag, not the build ID (the second column in this list).
So the steps you have to take would be as follows:
Initialize the repo with the build tag you want (for example android-8.0.0_r16):
repo init -u https://android.googlesource.com/platform/manifest -b android-8.0.0_r16
Synchronize the repo:
repo sync --jobs=32 --current-branch --no-tags --quiet
The additional flags passed to repo sync are not required, but might be helpful: The flag --jobs=32 will attempt 32 downloads in parallel (adjust to your network bandwidth). The flag --current-branch will download only the branch you have specified during repo init. The flag --no-tags will disable downloading of tag data. With the flag --quiet only the overall download progress will be shown.
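Putting the two steps together, a minimal end-to-end sketch looks like this (the working directory name and the job count are arbitrary choices, not part of the required commands):
mkdir -p ~/aosp && cd ~/aosp   # any empty working directory will do
repo init -u https://android.googlesource.com/platform/manifest -b android-8.0.0_r16
repo sync --jobs=32 --current-branch --no-tags --quiet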
One general note: you indicated that you want to flash the image to a device. Note that your device will likely require device-specific drivers to be included in the image. These drivers are generally not part of AOSP. Also, your device may have a locked bootloader that does not allow flashing custom images. I cannot give more details since I don't know which device you are targeting.

Related

rsync'ed outputs are always marked as outdated

I'm using {targets} to manage a workflow, which is great.
We don't have a proper cluster setup, but I have access to remote machines with much better specs than my laptop, so I can use git to keep the plan in sync locally and remotely.
When I want to work with something locally, I use rsync to move the files over.
rsync -avxP -e "ssh -p ..." remote:path/to/_targets .
When I query the remote cache with tar_network, I see that a bunch of my targets are "uptodate".
When I query the local cache after the rsync above, those same targets are "outdated".
I'm wondering whether there are better calls to rsync or certain arguments to tar_network() that would help, or whether this is a bug and the targets should stay "uptodate" after an rsync like this?
OK, so I figured this out.
It was because I was being foolish about what was a target in this case. To make sure that a package dependency was being captured, I was using something that grabbed the entire DESCRIPTION of the package (packageDescription(), I think). The problem with that is that when you install the package using remotes::install_github(), it adds some more information to the DESCRIPTION upon installation (the Packaged and Built fields), and that information differed between the installation on my local machine and the installation on the remote machine.
What I really wanted was just the GithubSHA1 bit from the packageDescription(), to verify that I was using the same package at the same commit from my GitHub repo. Once I switched to using that instead of the entire DESCRIPTION, then targets had no issues with the meta information and things would stay current when rsync'ing them between machines.
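For example (a sketch, with "mypkg" standing in for the actual package name), running this on both machines and comparing the output is enough to confirm that the same commit is installed:
# Print only the GithubSHA1 field recorded by remotes::install_github(),
# instead of the whole DESCRIPTION (whose Packaged/Built fields differ between installs)
Rscript -e 'cat(packageDescription("mypkg", fields = "GithubSHA1"), "\n")'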

Where to place custom packages that are being developed in github?

I want to get started developing my own packages. I am also adding version control via Github. I mainly develop on my Mac and a Windows laptop, but there is potential for me to develop on other machines down the line. My IDE of choice is PyCharm. I need to figure out where to place my packages both on Github and on my local machines so that my packages are always in sync regardless of where I am developing. Help??
First, let's clarify that git is the version control system, and Github is a platform for hosting git repositories (there are many other platforms aside from Github). You use git commands to manage your code, and Github is where you store a copy of your code.
By adding version control and putting a copy on Github, you've already taken the first step in managing your code on different machines. All you need to do is make sure the code on Github is always the latest, maintained version.
Here's a sample workflow:
On machine 1 (Mac), clone a copy of the Github repo
Develop on machine 1
When you are satisfied with your changes, push your code from machine 1 to Github
On machine 2 (Windows), clone a copy of the Github repo
Develop on machine 2
When you are satisfied with your changes, push your code from machine 2 to Github
On machine 1, do a fetch to check for updates to the code
If there are updates, pull those changes to machine 1
Again, when done making changes, push them from machine 1 to Github
On machine 2 again, fetch and pull changes
Repeat this fetch-pull-push cycle for all machines
Basically, you need to make sure that, whichever machine you are working on, you always push your changes to the remote (Github) when you are done, so that the other machines can fetch and pull those changes and you can continue where you left off.
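In terms of actual git commands, the cycle looks something like this (the repository URL is a placeholder):
git clone https://github.com/<you>/<mypackage>.git   # one-time setup on each machine
cd <mypackage>
git fetch origin      # before you start working: check for updates
git pull              # bring them into your local copy
# ...develop...
git add .
git commit -m "Describe your change"
git push              # publish your changes so the other machines can pull them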
UPDATE (based on comment):
Once you've got the workflow for your package's source code down, the next step is to package it like any other regular Python package and install it to your site-packages (either directly for your system, or preferably in a virtual environment).
I recommend taking a look at the Python docs on Packaging Python Projects which uses setuptools to make your package compatible with pip.
Here's a sample workflow:
git clone <mypackage Github URL>   # or git pull if you already cloned it before
cd mypackage
pip install -r requirements.txt
pip install -e . or pip install --user -e .
That last step will install your package to your site-packages folder, like any other pip-compatible package (assuming you've set up your setup.py file properly). If you are using virtual environments, you'll have to activate the virtual env first, then install your package there.
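For example, with a virtual environment the sequence could look like this (the environment name is arbitrary):
python -m venv venv                # create the virtual environment
source venv/bin/activate           # on Windows: venv\Scripts\activate
pip install -r requirements.txt
pip install -e .                   # editable install into the active environment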
If you are not going to do any modification on the source code, and you just want to install the package on a specific machine, then you can also specify the Github URL to pip:
$ pip install -e git+https://git.repo/some_pkg.git#egg=SomeProject # from git
Lastly, if you are planning to upload this package to PyPI, check out the docs on Uploading the distribution archives. This just adds an extra step to your workflow: uploading your package to PyPI and then doing pip install from there next time.
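The upload step itself is short; a sketch, assuming the build and twine packages are installed:
python -m build                    # create the distribution archives under dist/
python -m twine upload dist/*      # upload them to PyPI (prompts for credentials)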

Can a specific file from a specific github repo tree be installed with pip?

I need to install tensorflow_backend.py from a specific tree in keras, ideally without updating the other keras files. This shows how to install a specific repo branch, but how can the same be done for this single file in a tree?
No. pip installs Python packages. You need a different tool to download just one file. For a remote git repository, git archive is a good tool. For a remote web interface, use curl or wget.
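For example, a single file can usually be fetched straight from GitHub's raw interface with curl; the URL below is illustrative, so substitute the owner, repository, branch or commit, and file path you actually need:
curl -fsSL -o tensorflow_backend.py \
  https://raw.githubusercontent.com/<owner>/<repo>/<branch-or-commit>/<path>/tensorflow_backend.py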

Should I include configure and makefile in a github repository?

We recently moved from subversion to git, and then to Github, for several open source projects. Github was nice in that it provided a lot of functionality. One of the things I particularly like is the ability to download tags as zip or .tar.gz files.
Unfortunately, Github recently discontinued downloads. That shouldn't be a problem because of the ability to download tags. However, in the past we have not put a Makefile, configure script, or any other autoconf-generated files into the repo because they cause lots of conflicts when people merge.
What's the proper way to handle this?
Should I put autoconf and automake-generated files in the repo so people can download tags directly?
Or should there be a bootstrap.sh file and people are told to run that?
Or should I just do a make dist and put that into the repo?
Thanks
Publish the output of make dist via GitHub Releases
Your first option—putting the Autoconf- and Automake-generated files into the repository—is not a good idea. It's almost never beneficial to store generated files in source control. In this case, it's going to pollute your history with a lot of unnecessary and potentially conflicting commits, particularly if not all your contributors are using the same version of Autotools. Your third option—checking in the output of make dist—is a bad idea for exactly the same reasons as the first option.
Your second option—adding a "bootstrap" script that calls Autoconf and Automake to generate the configure scripts—is also a bad idea. This defeats the entire purpose of Autotools, which is to make your source portable across systems—including those for which Autotools is not available! (Consider what would happen if someone wanted to build and install your software on a machine on which they don't have root access, and where the GNU Build System is not installed. A bootstrap script is not going to help them because they'd first need to make a local installation of Autotools and possibly all its dependencies.)
The proper way of releasing code that uses Autotools is to produce a tarball with make dist (or better yet, make distcheck, since this will also run tests and do other sanity checks), and then publish this tarball somewhere other than the source repository.
Your original question, from April 2013, states that GitHub discontinued download pages. However, in July 2013, GitHub added a "Releases" feature that not only pre-packages your source tags, but also allows you to attach arbitrary files to each release. So on GitHub, the Releases page is where you should publish your make dist tarballs (and preferably also the detached GnuPG signatures of them).
Basic steps
When you are ready to make a release, tag it and push the tag to GitHub:
$ git tag 1.0 # Also use -s if desired
$ git push --tags
Use your Makefile to produce a tarball:
$ make dist # Alternatively, 'make distcheck'
Visit the GitHub page for your project and follow the "releases" link.
You will be taken to the Releases page for your project. The first time you visit, all you will see is a list of tags and automatically produced tarballs from the source tree.
Press the "Draft a new release" button.
You will then be presented with a form in which you should fill in the Git tag associated with the release and an optional title and description. Below this there is also a file selector labelled "Attach binaries by dropping them here or selecting them". Use this to upload the tarball you created in Step 2 (and maybe also a detached GnuPG signature of it).
When you're done, press the "Publish release" button.
Your project's Releases page will now display the release, including prominent download links for the attached files.
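If you prefer the command line to the web form, the GitHub CLI can create the release and attach files in one go (a sketch, assuming gh is installed and authenticated; foo-1.0.tar.gz stands for whatever make dist produced, and the .sig file is its optional detached signature):
$ gh release create 1.0 foo-1.0.tar.gz foo-1.0.tar.gz.sig --title "1.0" --notes "Release notes"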
If you don't want to use GitHub Releases, then as pointed out in a previous answer, you should upload the tarballs somewhere else, such as your own website or FTP site. Add a link to that location in your project's README.md so that users can find it.
The second option is better: you want any user of your repo to be up and running as fast as possible, regenerating whatever he or she needs in order to build your program.
Since Git is very much a version control system for text (as opposed to an artifact repository like Nexus), providing a way to generate the final binary is the way to go.
When you cut a release, upload the result of make distcheck to your project's download page: it's a makefile target that builds the tarball and verifies that it installs, uninstalls, passes the tests, and survives other sanity checks. Github being wrong-headed isn't an excuse: create a tree like this in your repo:
/
/source
/source/configure.ac
/source/Makefile.am
/source/...
/releases
/releases/foo-0.1.tar.gz
/releases/...
For developers, you should not have generated files in source control. Many modern autotooled projects bootstrap fine off an invocation of autoreconf -i.
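In practice, bootstrapping a fresh checkout of such a project looks like this (the standard Autotools sequence, nothing project-specific):
autoreconf -i     # regenerate configure, Makefile.in, etc. from configure.ac and Makefile.am
./configure
make distcheck    # build the tarball and run the install/uninstall/test sanity checks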

Best practice for using one mercurial project in another

What are the best practices for using one mercurial project in another? I've got a django app that I'm working on, but I'm also using mercurial to version control a website that uses that app. I've looked at mercurial subrepositories, but apparently this is considered a "feature of last resort". Is there a good way of doing what I want to do, or do I just have to copy the code from my app into my website repo when I want to update to a new version of my app?
In your specific case I like to let pip handle my django application dependencies: http://guide.python-distribute.org/pip.html#installing-from-a-vcs
In our "website" repo we have a requirements.txt, and our deploy runs pip install --upgrade -r requirements.txt. That pulls the latest from the repo and installs it into the application's virtual env. This gives nice flexibility and separation while leaving the package management up to pip. With pip's VCS URLs you can also point to a specific tag or branch if you want different sites using different revisions of the same underlying repo.
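As a concrete sketch (the repository URL, tag, and package name are placeholders), the pinned requirements.txt entry and the deploy step could look like this:
echo 'hg+https://hg.example.com/myapp@v1.2#egg=myapp' >> requirements.txt   # pin the app to a tag
pip install --upgrade -r requirements.txt                                   # the deploy step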
pip also has a -e /path/to/project mode for pointing at an "editable" clone that lives outside the website repo, which would work too, but I've not tried it.
That said, if you think subrepos fit your workflow better, by all means use them. They work just fine, but people often get hung up on the workflow constraints ("What do you mean I can't commit my parent repo w/o also committing in the subrepo?!")
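For completeness, wiring up a subrepo comes down to an .hgsub file (the path and URL below are placeholders):
hg clone https://hg.example.com/myapp myapp        # clone the app into the website repo
echo 'myapp = https://hg.example.com/myapp' > .hgsub
hg add .hgsub
hg commit -m "Add myapp as a subrepository"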