I want to be able to download data from BaseSpace in FASTQ format. I know that you can download data through the browser, but I would like to do this from the Linux command line.
I'm already looking into the API, but I don't have any experience with that whatsoever...
Is there a simple way to achieve this?
There is basemount, a tool developed by Illumina, that streamlines what you want to do. With basemount I can mount my BaseSpace Sequence Hub on my Linux machine.
For instance, when I run:
mkdir BaseSpace
basemount BaseSpace
I get a folder (BaseSpace) with the same directory structure as the one shown on the website for my BaseSpace account. Then, to copy files from BaseSpace to my local machine, I just do:
cp BaseSpace/path/to/file/fileName /path/in/local/machine/
Please look at the documentation for how to install basemount and log in to your BaseSpace account.
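For example, to pull down only the FASTQ files for one project, something like the following should work; the Projects/MyProject path is just an illustration, so adjust it to match the layout you see in your own mounted folder:
mkdir -p ~/fastq_downloads
# list the FASTQ files under the mounted project (path is an example, not a fixed layout)
find BaseSpace/Projects/MyProject -name "*.fastq.gz"
# copy them all to a local directory
find BaseSpace/Projects/MyProject -name "*.fastq.gz" -exec cp {} ~/fastq_downloads/ \;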
I'm doing this same thing using the API, but the script here seems to be a user-friendly wrapper around it:
https://github.com/nh13/basespace-invaders
It requires credentials that you receive when you choose to "Create New Application" on https://developer.basespace.illumina.com/; this is not at all difficult, and the README of the GitHub repository above has some instructions.
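Under the hood this all goes through the BaseSpace REST API. Just to give a rough idea of what the wrapper is doing, downloading a single file looks roughly like the call below; the file ID and access token are placeholders you obtain from the API and your application credentials, so treat this as a sketch rather than the exact request the script makes:
# <FILE_ID> and <ACCESS_TOKEN> are placeholders
curl -L "https://api.basespace.illumina.com/v1pre3/files/<FILE_ID>/content?access_token=<ACCESS_TOKEN>" -o sample.fastq.gz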
I'm trying to follow some of Tensorflow's tutorials. Specifically this one:
https://www.tensorflow.org/lite/tutorials/model_maker_object_detection
It has a step that uses files in Google Cloud Storage, and that step fails. It seems I should be able to manually download the file(s), but I can't click on the URL. I've poked around in Google Cloud Storage, but it appears it costs money, and I can't seem to figure out how to "copy" the file(s) from someone else's cloud storage to mine.
Am I missing something simple?
You can use the gsutil command-line tool.
For example:
gsutil cp gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv /tmp/
It may ask you to log in first; just follow the instructions that show on screen.
For more details see https://cloud.google.com/storage/docs/downloading-objects#gsutil
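If you want to see what else is available in that bucket before copying anything, listing works the same way:
gsutil ls gs://cloud-ml-data/img/openimage/csv/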
For that example it does need login details. You can open the notebook in Google Colab and download it as above:
! gsutil cp gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv /content/
I am trying to download a large dataset from my organization's Box account onto a remote computing cluster using wget (or even curl). How do I go about doing this? Box's help section is very unclear.
I run some Control-M jobs which generate files and place them on a Unix box under various folders.
These files need to be sent to different users who don't have access to the system.
Each time, I have to copy these files from the Unix folder (based on which Control-M job was run) to my local directory and then send them to the users.
I am looking for a way to automate this. I want to create an interface where users can specify parameters (job names), which in turn will copy the file from the particular folder on Unix to a location the user has access to.
The way I think I might have to approach this problem is:
Share a directory on any Windows virtual machine which everyone has access to (this will be my landing zone).
Create a script which transfers files from various folders on Unix to the Windows directory, based on the parameters that are being passed (a rough sketch of such a script follows this list).
Create an HTA interface where users can specify parameters, which in turn will trigger the script and transfer the file the user is looking for to the Windows directory.
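To make step 2 concrete, here is a minimal sketch of what I imagine the Unix-side copy script might look like, assuming the Windows share is already mounted at /mnt/landing_zone and that each job writes into its own folder; the job-name argument and both paths are made-up examples, not real ones from my system:
#!/bin/bash
# usage: ./fetch_job_output.sh <job_name>
# /data/ctrlm_output and /mnt/landing_zone are hypothetical paths
job_name="$1"
src_dir="/data/ctrlm_output/${job_name}"
dest_dir="/mnt/landing_zone/${job_name}"
mkdir -p "${dest_dir}"
cp -v "${src_dir}"/* "${dest_dir}/"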
I am not a programmer but I would like to develop something which will make everyone's life easier.
Could someone please advise if this approach is correct or if this can be achieved in a better way?
Moreover, which language would be a good choice to write this script in? I know a bit of shell scripting and PowerShell, and I'm willing to learn anything else if that solves my problem.
Please advise.
Here is one solution:
Obtain an empty Windows server
Install Chocolatey
cinst winscp (to copy files)
Use https://github.com/tomohulk/WinSCP to automate the file copy via a PowerShell script. Provide adequate parameters for it.
cinst rundeck --params /Service to provide a web-based graphical interface for users
Manually create a Rundeck job and expose its parameters so users have a nice web GUI. You can let users specify a folder or let them choose from a list.
It is a habit of mine to edit files online. I have many working websites and I don't want to back up all the files located on them, only those that I have edited through FTP client software.
What is the best way to have version tracking for these files? Something like GitHub.
I am not comfortable with editing files (websites) on localhost and then moving them online. I am looking for a way to synchronize local and web files so that I always have the latest version of the relevant files.
What about trying something like WinSCP, or setting up XAMPP, working locally and pushing to Bitbucket or GitHub, then uploading all the files through FTP once you are done? WinSCP is for Windows and allows you to edit files without having to download them, edit them, and re-upload them; it lets you edit them while they are live. However, the XAMPP way is a better way to go if you plan to work on other people's websites at any point in time.
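If you go the work-locally-and-push route, the version tracking itself is just plain git; the repository URL and file name below are placeholders:
# one-time setup in your local copy of the site
git init
git add .
git commit -m "Initial snapshot of the site"
git remote add origin https://bitbucket.org/youruser/yoursite.git
git push -u origin master
# after each edit
git add path/to/edited/file.php
git commit -m "Describe the change"
git push
# then upload the changed files over FTP as usual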
I'm trying to introduce the IPython notebook at my work. One of the ways I want to do that is by sharing my own work as notebooks with my colleagues, so they can see how easy it is to create sophisticated reports and share them.
I obviously can't use the public Notebook Viewer since most of our work is confidential, so I'm trying to set up a notebook viewer locally. I read this question and followed the instructions there, but now that nbconvert is part of IPython the instructions are no longer valid.
Can anybody help with that?
You have a couple of options:
As described above, convert the notebooks to HTML and then serve them using a simple server, e.g. python -m "SimpleHTTPServer" (see the sketch after this list). You can even set up a little Python script that "listens" on one directory: if changes or new notebooks are added to the directory, the script runs nbconvert and moves the HTML file to the folder you are serving from. To navigate to the server you are running, go to yourip:port, e.g. 10.0.0.2:8888 (see the IPython output when you run the ipython notebook command). (If you can serve over the network, you might just as well look into point 2 below.)
If your computers are networked, you can serve your work over the LAN by sharing your IP address and port with your colleagues. This will, however, give them editing access, but that should not be a problem? It means that they will navigate to your IPython server, see the notebooks, and be able to run your files.
Host your notebooks on an online server like Linode etc.; entry-level servers are cheap. Some work is needed to add a password, though.
Convert to PDF and mail it to them.
Convert to a slideshow (now possible in version 1.0) and serve via option 1 or 2, or just share the HTML file with them.
Let them all run ipython notebook and check your files into a private repo at Bitbucket (it offers free private git repos). They can then get your files there and run them themselves on their own machines, or just mail the files to them. Better yet, if they won't make changes, share a Dropbox folder with everyone; if they run ipython notebook in that folder they will see your files (DANGEROUS though).
Get them in a boardroom and show them. :)
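For option 1, a minimal sketch of the convert-and-serve step might look like the following; the notebook name and port are placeholders, and this assumes the IPython 1.x-era nbconvert command:
# convert a notebook to a standalone HTML file (report.ipynb is a placeholder name)
ipython nbconvert --to html report.ipynb
# serve the current directory over HTTP on port 8000
python -m SimpleHTTPServer 8000
# colleagues can then browse to http://yourip:8000/report.html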