How to clone Raspberry Pi OS to an image (e.g. ISO)?

I need a tool to clone only the used parts of my Raspberry Pi OS to an image (e.g. ISO). Is there any tool to do this?
I tried dd (the Linux command); however, it clones the whole disk to an image, not only the used part.
I tried PiClone, which does clone only the used parts of the disk, but it does not save the clone to an image file; it can only clone directly to another SD card.
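For reference, plain dd can at least be limited to the used region by copying only up to the end of the last partition. A rough sketch, assuming the card shows up as /dev/sdb and END stands for the last partition's end sector reported by fdisk (both are placeholders):
sudo fdisk -l /dev/sdb                                            # note the End sector of the last partition
sudo dd if=/dev/sdb of=rpi.img bs=512 count=$((END + 1)) status=progress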

Using Google's Repo Tool

Simple question: how do I download the Android operating system source code, version 8.0.0, using the repo tool on Linux Mint?
Detailed:
I want to download the Android source code, edit some of it, then install it onto a device. I installed a Linux operating system and downloaded/initialized repo. However, for the life of me I cannot understand how to use repo.
I am using the operating system tag OPR4.170623.009, which is android-8.0.0_r16 (Oreo).
So I ran the following command:
repo sync [OPR4.170623.009]
I get this result
... A new version of repo (2.12) is available.
... You should upgrade soon:
cp /home/k/.repo/repo/repo /home/k/bin/repo
error: project [OPR4.170623.009] not found
I even tried
repo sync [<OPR4.170623.009>]
I got
bash: OPR4.170623.009: No such file or directory
It is very weird, because the 'Downloading the Source' page (https://source.android.com/setup/build/downloading#initializing-a-repo-client) doesn't really explain how to actually download the source. It makes it seem like I should be using sync together with the source code tags, but it doesn't say how to put those two together:
Here:
repo sync [project0 project1 ... projectn]
repo sync [/path/to/project0 ... /path/to/projectn]
It shows some examples, but those don't look anything like their tags.
The version you want to download has to be specified for repo init, not for repo sync. Also, the version is specified using the tag, not the build ID (the second column in this list).
So the steps you have to take would be as follows:
Initialize the repo with the build tag you want (for example android-8.0.0_r16):
repo init -u https://android.googlesource.com/platform/manifest -b android-8.0.0_r16
Synchronize the repo:
repo sync --jobs=32 --current-branch --no-tags --quiet
The additional flags passed to repo sync are not required, but might be helpful:
--jobs=32 attempts 32 downloads in parallel (adjust to your network bandwidth).
--current-branch downloads only the branch you specified during repo init.
--no-tags disables the downloading of tag data.
--quiet shows only the overall download progress.
A general note: you indicated that you want to flash the image to a device. Note that your device will likely require device-specific drivers to be included in the image; these drivers are generally not part of AOSP. Also, your device may have a locked bootloader that does not allow flashing custom images. I cannot give more details since I don't know which device you are targeting.

Where to place custom packages that are being developed on GitHub?

I want to get started developing my own packages, with version control via GitHub. I mainly develop on my Mac and a Windows laptop, but I may develop on other machines down the line. My IDE of choice is PyCharm. I need to figure out where to place my packages, both on GitHub and on my local machines, so that they are always in sync regardless of where I am developing. Help?
First, let's clarify that git is the version control system, and GitHub is a platform for hosting git repositories (there are many other platforms besides GitHub). You use git commands to manage your code, and GitHub is where you store a copy of it.
By adding version control and putting a copy on GitHub, you've already taken the first step toward managing your code on different machines. All you need to do is make sure the copy on GitHub is always the latest version.
Here's a sample workflow:
On machine 1 (Mac), clone a copy of the GitHub repo
Develop on machine 1
When you are satisfied with your changes, push your code from machine 1 to GitHub
On machine 2 (Windows), clone a copy of the GitHub repo
Develop on machine 2
When you are satisfied with your changes, push your code from machine 2 to GitHub
On machine 1, do a fetch to check for updates to the code
If there are updates, pull those changes to machine 1
Again, when done making changes, push them from machine 1 to GitHub
On machine 2 again, fetch and pull changes
Repeat this fetch-pull-push cycle on all machines
Basically, whichever machine you are on, when you are done you should always push your changes to the remote (GitHub), so that the other machines can fetch and pull those changes and you can continue where you left off.
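In git commands, one round of that cycle looks roughly like this (a sketch; the remote name origin and branch name main are assumptions, so adjust to your repo):
git clone https://github.com/<you>/<package>.git   # first time only, on each machine
git fetch origin                                   # check whether GitHub has new commits
git pull origin main                               # bring them into your working copy
git add -A                                         # stage your local changes
git commit -m "describe your change"
git push origin main                               # publish so the other machines can pull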
UPDATE (based on comment):
Once you've got the workflow for your package's source code down, the next step is to package it like any other regular Python package and install it into your site-packages (either directly for your system, or preferably in a virtual environment).
I recommend taking a look at the Python docs on Packaging Python Projects which uses setuptools to make your package compatible with pip.
Here's a sample workflow:
git clone <url-of-mypackage-on-github> # or git pull if you already cloned it before
cd mypackage
pip install -r requirements.txt
pip install -e . # or: pip install --user -e .
That last step will install your package into your site-packages folder, like any other pip-compatible package (assuming you've set up your setup.py file properly). If you are using virtual environments, you'll have to activate the virtual env first, then install your package there.
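For example, with the built-in venv module (the environment path here is illustrative):
python3 -m venv ~/.venvs/mypackage       # create the environment
source ~/.venvs/mypackage/bin/activate   # activate it (use Scripts\activate on Windows)
pip install -e .                         # editable install into this environment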
If you are not going to modify the source code and you just want to install the package on a specific machine, you can also give pip the GitHub URL directly:
$ pip install -e git+https://git.repo/some_pkg.git#egg=SomeProject # from git
Lastly, if you are planning to upload this package to PyPI, check out the docs on Uploading the distribution archives. This just adds an extra step to your workflow: uploading your package to PyPI, and then doing pip install from there next time.

GitHub - how to download an HDF5 file?

How do I download a raw file from GitHub?
I am trying to download a specific (raw) file.
GitHub reports its size as 16.7 MB (see screenshot below), but clicking 'Raw' only displays a text file of a few bytes.
[screenshot of the GitHub file page]
You can use git-lfs to download the content behind these pointer files.
Install git-lfs (on macOS: brew install git-lfs).
Clone the repo.
Run the command git lfs pull.
Reference : git-cloning-giving-pointer-file
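Put together, the whole sequence might look like this (the repository URL is a placeholder):
brew install git-lfs      # macOS; use your distro's package manager elsewhere
git lfs install           # set up the git-lfs hooks, once per machine
git clone https://github.com/<user>/<repo>.git
cd <repo>
git lfs pull              # replaces the pointer files with the real content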
A solution to this particular problem can be to download the files from this Google Drive folder instead:
https://drive.google.com/drive/folders/1cJLPgGfEuFAQzBKbXQtSGXxLXssw1D9f
Still, if there is a way to download HDF5 files from GitHub directly, it would be very useful to know.

How to access an Xcode project with iCloud

I recently bought a MacBook Pro that I will use to develop an iPhone app. I want to be able to transfer the Xcode project between my MacBook and my iMac in the same way that Word documents can be transferred using iCloud. Is there a secure way to do this?
iCloud or version management?
iCloud might sound like a good idea for syncing Xcode projects, but it actually leads to problems. You should use git instead. I recommend Bitbucket (online git repos), which is free; you can host private or public projects there. I like Bitbucket because of its free private repos. GitHub does not provide free private repositories!
Easy to share
When you are done editing your code on one machine, you commit your changes and then push them to a remote repository. When you open your project on another computer, you fetch (pull) it from the remote repository.
By using git, you can share your code easily with other team members, too.
How to
See more here:
Enable Access to Your Source Code Repositories
Save Project Changes
I'm using Dropbox to sync my Xcode projects across 2 Macs. I've had no issues so far, but I would recommend not working on a project simultaneously, so make sure to close it on one machine before you open it on another.
Here is how I use iCloud Drive as a remote git repo:
Create a new Xcode Project (with git versioning turned on) in a local directory, for example ~/Xcode-Projects-Local/GiTest
Clone the new local directory to a remote directory, for example iCloud Documents:
git clone --bare --no-hardlinks ~/Xcode-Projects-Local/GiTest ~/Documents/Xcode-Projects/git/GiTest.git
Add the cloned directory as a remote to the local repo:
cd ~/Xcode-Projects-Local/GiTest
git remote add -f iCloud ~/Documents/Xcode-Projects/git/GiTest.git
On a different Mac, clone the remote repo into a new local directory:
git clone ~/Documents/Xcode-Projects/git/GiTest.git ./GiTest
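Later commits can then be published from either Mac to the iCloud remote as usual (assuming the default master branch):
cd ~/Xcode-Projects-Local/GiTest
git push iCloud master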
Enjoy!
For an existing project, just skip step 1. Note the --no-hardlinks option, which makes sure that hard links won't confuse iCloud Drive.
For those wondering 'why not just put the project dir directly on the iCloud drive': Xcode always had, and as of Xcode 10 still has, problems with that, eventually resulting in a corrupted repo.
If you are planning to work on machines with different screen resolutions, for example a MacBook and an iMac, you should git-ignore directories named project.xcworkspace/xcuserdata.
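As a one-line sketch from the shell (the pattern matches any xcuserdata directory, which covers project.xcworkspace/xcuserdata):
echo "xcuserdata/" >> .gitignore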

Version control of uploaded images to file system

After reading Storing Images in DB - Yea or Nay? I think that the file system is the right place for storing images. But I would like to know how you handle backup/version control of uploaded images in your different environments (dev/stage/prod) and for network load balancing?
These problems are pretty easy to handle when working with a database, e.g. making a backup of the production environment and restoring the DB in the development environment.
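For instance, with MySQL (the host and database names here are hypothetical):
mysqldump -h prod-db myapp > myapp.sql   # dump the production database
mysql -h dev-db myapp < myapp.sql        # load it into development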
What do you think of using, for example, git to handle version control of the uploaded files?
Production Environment:
An image is uploaded to a shared folder on the web server.
Metadata is stored in the database.
The image is automatically added to a git repository
Developer at work:
Checks out the source code.
Runs a script to restore the database.
Runs a script to get the latest images.
I think the solution above is pretty smooth for the developer: the images will be under version control, and the environments can be isolated from each other.
For us, version control isn't as important as distribution. Metadata is added via the web admin and the images are dropped onto the admin server. Rsync scripts push those out to the cluster that serves prod images. For dev/test, we just rsync from the prod master server back to the dev server.
Rsync is great for load balancing and distribution. If you sub in git for the admin/master server, you have a pretty good solution.
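A minimal sketch of that setup (the hosts and paths are hypothetical):
rsync -az /srv/uploads/ web01:/srv/uploads/         # push new uploads to a prod web node
rsync -az prod-master:/srv/uploads/ /srv/uploads/   # refresh dev from the prod master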
If you're OK with a backup that preserves file history at the time of backup (as opposed to version control with every revision), then some adaptation of this may help:
Automated Snapshot-style backups with rsync.
It can work, but I would store those images in their own git repository, which would then be a submodule of the git repo with the source code.
That way, a strong relationship exists between the code and the images, even though the images are in their own repo.
Plus, it avoids issues with git gc or git prune being less efficient with a large number of binary files: if the images are in their own repo, with few variations for each of them, the maintenance on that repo is fairly light, whereas the source code repo can evolve much more dynamically, with the usual git maintenance commands in play.
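Setting that up could look like this (the URL and paths are placeholders):
cd /path/to/code-repo
git submodule add https://github.com/<you>/images.git images   # nest the images repo inside the code repo
git commit -m "Add images repo as a submodule"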