Are there ways to use VS Code plugins in Google Cloud Shell?

I have a few quick-navigation plugins, such as "block travel", that I use all the time. Is there a way to use these in Cloud Shell?
I imagine there are some restrictions, but even some simple editor plugins can be huge time savers.
While I'm at it - Alt-D to duplicate a line, or transposing lines - some of those shortcuts seem to be missing, and it's hard to get key remapping working, at least within the shell. In general, keyboard shortcuts seem to get trapped by the browser or PWA wrapper. I'm using Cloud Shell as a web app on a Chromebook, FWIW, for various secure projects.

I have come up with a solution that covers both aspects of your question.
To get an unlimited persistent disk:
You can use Google Cloud Storage FUSE.
Google Cloud Storage FUSE lets you mount a GCS bucket as a folder on your Linux instance. By doing that you get an "unlimited" persistent disk, and it is super simple to set up since gcsfuse is already installed in Cloud Shell.
1. Create a GCS bucket (you only need to run this once) -- replace BUCKET_NAME with any name:
gsutil mb "gs://BUCKET_NAME/"
2. Create a local directory for mounting -- replace FOLDER_NAME with the chosen directory name:
mkdir /home/[USER]/[FOLDER_NAME]
chmod 777 /home/[USER]/[FOLDER_NAME]
3. Mount the bucket onto the local filesystem (note: you need to re-run this every time Cloud Shell starts; a sketch for automating this follows the command):
gcsfuse -o nonempty -file-mode=777 -dir-mode=777 --uid=1000 --debug_gcs [BUCKET_NAME] /home/[USER]/[FOLDER_NAME]
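Since the mount does not survive restarts, one option (my own sketch, not from the gcsfuse docs) is to add a guarded mount to your ~/.bashrc so it runs automatically in each new session:
# Mount the bucket on shell startup if it is not already mounted
# ([USER], [FOLDER_NAME], and [BUCKET_NAME] are the same placeholders as above)
mountpoint -q /home/[USER]/[FOLDER_NAME] || \
  gcsfuse -o nonempty -file-mode=777 -dir-mode=777 --uid=1000 [BUCKET_NAME] /home/[USER]/[FOLDER_NAME]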
To use third-party plugins in Cloud Shell:
You can use an environment customization script (.customize_environment), as mentioned in the public documentation. It allows you to install additional packages into your Cloud Shell environment when it starts.
For reference, below are the steps to install a VS Code plug-in.
Step 1:
To install the VS Code server, create and run a script named visual_studio_code.sh, shown below, in the root directory of your workspace in the Cloud Shell Editor.
visual_studio_code.sh file:
# Pin the code-server release so the download URL, the extracted folder,
# and the paths used in Step 2 stay consistent
export VERSION=3.10.2
wget https://github.com/cdr/code-server/releases/download/v$VERSION/code-server-$VERSION-linux-amd64.tar.gz
tar -xvzf code-server-$VERSION-linux-amd64.tar.gz
cd code-server-$VERSION-linux-amd64
Run the script in the shell:
./visual_studio_code.sh
If you get a permission-denied error, make the script executable first, then run it again:
chmod +x visual_studio_code.sh
./visual_studio_code.sh
Step 2:
Create a customization script in the root directory of your workspace in the Cloud Shell Editor to start the VS Code server on boot, with the below commands:
.customize_environment file:
#!/bin/sh
# .customize_environment runs in the background as root; wait for your user session to initialize
sleep 20
sudo -u [USER] /home/[USER]/code-server-3.10.2-linux-amd64/code-server --auth none --port 9090
Step 3:
To view the Visual Studio Code server on port 9090:
Click on Web Preview > Change Port > 9090
If you get a 404 error, remove '?authuser=0' from the URL.
Visual Studio Code Server will now be running in the browser!
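Before switching to the Web Preview, you can sanity-check from the shell that the server is listening (a quick check of my own, not part of the original steps):
# Expect some HTML back if code-server is up on port 9090
curl -s http://localhost:9090 | head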
Block travel navigation plugin:
To get the block-travel navigation plugin in Cloud Shell, run the below commands in the shell from the root directory:
wget https://github.com/efatsi/block-travel/archive/refs/tags/v1.0.0.tar.gz
tar xzvf v1.0.0.tar.gz
ls
#You will see block-travel-1.0.0
#The .cson file under keymaps/ is a keymap definition, not an executable, so
#copy the extracted plugin into code-server's extensions directory instead:
mkdir -p ~/.local/share/code-server/extensions
cp -r block-travel-1.0.0 ~/.local/share/code-server/extensions/
#Restart the VS Code server so it picks up the plugin:
/home/[USER]/code-server-3.10.2-linux-amd64/code-server --auth none --port 9090
Open the web preview on port 9090, and you will be able to navigate through files in VS Code using:
Alt+Up for block-travel.jumpUp
Alt+Shift+Up for block-travel.selectUp
Alt+Down for block-travel.jumpDown
Alt+Shift+Down for block-travel.selectDown

WARNING: This should not be considered a long-term solution, just a stop-gap until this is supported in an easier fashion.
This might not be the greatest idea, but it does seem to work for the vim extension I tried in my environment. It's probably best to make a request through the in-product feedback to get this officially supported, but until then you can follow these steps.
Upload the .vsix package to your $HOME directory.
Unzip the package into the /google/devshell/editor/theia/plugins directory. This action will not persist, so you'll want to add the command to the .customize_environment script actions.
e.g.
sudo unzip vscodevim.vsix -d /google/devshell/editor/theia/plugins/vscode-vim
Now for the questionable part. You'll want to install the pslist package to make life easy, so you have access to the rkill command. You'll probably also want to add this to the .customize_environment file as well, since it also will not persist.
sudo apt install pslist
Now we need to get the process ID of the editor. Currently this seems to be spawned by a supervisord command, which also spawns the tmux session, so we're going to grab the process ID of the runuser command it spawns (and filter for the theia one, just in case).
ps ax | grep runuser | grep "theia start"
Then we can use rkill to kill the process and all of its children, which will cause supervisord to restart it for us, and the plugin should be available.
sudo rkill PID_OF_GREP_OUTPUT
I'm not sure of the best way to script the rkill command yet, since I don't know the timing of when the editor comes up versus the .customize_environment execution, so right now I run this each time I start up a new VM.
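That said, a rough sketch of how the restart could be scripted, using the same grep pattern as above (untested with respect to the .customize_environment timing):
# Find the theia editor process spawned via runuser and restart it so plugins load
PID=$(ps ax | grep runuser | grep "theia start" | grep -v grep | awk '{print $1}')
[ -n "$PID" ] && sudo rkill "$PID"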
If anything goes horribly wrong, you should be able to request a restart of the VM from the menu options and get a fresh one.

Cloud Shell offers a VS Code editor experience through Theia. Did you try the Cloud Code editor in Cloud Shell? It is exposed through the "Open Editor" button at the top right; this opens the Cloud Code editor, which gives you a VS Code experience. All the navigation keys are available in the editor.

Related

docker-compose run multiple commands

As other answers have noted, I'm trying to run multiple commands in docker-compose, but my container exits without any errors being logged. I've tried numerous variations of this in my docker-compose file:
command: echo "the_string_for_the_file" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend
The Dockerfile command is:
CMD ["datahub-frontend/bin/datahub-frontend"]
My Real Goal
Before the application starts, I need to create a file named user.props in the location ./datahub-frontend/conf/ and add some text to that file.
Annoying Constraints
I cannot edit the Dockerfile
I cannot use a volume + some init file to do my bidding
Why? DataHub is an open source project for documenting data. I'm trying to create a very easy way for non-developers to get an instance of DataHub hosted in the cloud. The hosting we're using (AWS Elastic Beanstalk) is cool in that it will accept a docker-compose file to create a web application, but it cannot take other files (e.g. an init script). Even if it could, I want to make it really simple for folks to spin up the container: just a single docker-compose file.
Reference:
The container image is located here:
https://registry.hub.docker.com/layers/datahub-frontend-react/linkedin/datahub-frontend-react/465e2c6/images/sha256-e043edfab9526b6e7c73628fb240ca13b170fabc85ad62d9d29d800400ba9fa5?context=explore
Thanks!
You can use bash -c if your Docker image has bash.
Something like this should work:
command: bash -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"
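If the image ships only a POSIX shell rather than bash, sh -c should work the same way (a minor variation on the above, assuming /bin/sh exists in the image):
command: sh -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"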

Autostart deluge daemon 1.3.10 on boot on Raspberry Pi

I have already been through different tutorials on how to turn a Raspberry Pi into a torrent box, but I think most of the how-to tutorials are outdated.
I have also checked my version of the deluge daemon using this command:
deluge -v
And it returns this:
deluged: 1.3.10
libtorrent: 0.16.18.0
I have followed the How-To Geek tutorial so far.
Link: http://www.howtogeek.com/142044/how-to-turn-a-raspberry-pi-into-an-always-on-bittorrent-box/
After I started to get errors, I fully uninstalled deluge and deleted all of its files.
The tutorial suggests this command:
sudo wget -O /etc/default/deluge-daemon http://cdn5.howtogeek.com/wp-content/uploads/gg/up/sshot5151a8c86fb85.txt
But there is no such file as /etc/default/deluge-daemon; instead there is a file named deluged (maybe short for deluge-daemon in the new version).
Basically, what the command does is copy the content of the file http://cdn5.howtogeek.com/wp-content/uploads/gg/up/sshot5151a8c86fb85.txt to the file located at /etc/default/deluge-daemon.
As I can't find deluge-daemon, I chose to do this with /etc/default/deluged.
The original content of /etc/default/deluged:
# Defaults for deluged initscript
# sourced by /etc/init.d/deluged
# change to 1 to enable daemon
ENABLE_DELUGED=0
Content provided on the file http://cdn5.howtogeek.com/wp-content/uploads/gg/up/sshot5151a8c86fb85.txt:
# Configuration for /etc/init.d/deluge-daemon
# The init.d script will only run if this variable non-empty.
DELUGED_USER="pi" # !!!CHANGE THIS!!!!
# Should we run at startup?
RUN_AT_STARTUP="YES"
But the two files look different, and the deluge daemon doesn't load on startup.
I managed to resolve this issue using this guide: http://dev.deluge-torrent.org/wiki/UserGuide/Service/systemd
Follow the instructions from that guide (you may skip the deluge-web instructions); a condensed sketch is shown below, after this list.
Note that the deluge user was created with --home /var/lib/deluge.
Update the auth file (set up an account) and core.conf (set the allow_remote flag) in the home directory of the deluge user (as opposed to the home directory of the pi user that is usually mentioned in other tutorials).
Reboot.
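For reference, a minimal sketch of that systemd setup (an abbreviated version of the unit file in the linked guide; see the guide for the full file, logging options, and UMask settings):
# Create the service user as the guide describes
sudo adduser --system --group --home /var/lib/deluge deluge
# Write a minimal systemd unit for deluged
sudo tee /etc/systemd/system/deluged.service > /dev/null <<'EOF'
[Unit]
Description=Deluge Bittorrent Client Daemon
After=network-online.target

[Service]
Type=simple
User=deluge
Group=deluge
ExecStart=/usr/bin/deluged -d
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# Reload systemd, then start deluged now and on every boot
sudo systemctl daemon-reload
sudo systemctl enable deluged
sudo systemctl start deluged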

How to run mongodb on AWS

I'm looking for a little direction on how to set up services on AWS. I have an application that is built with Node.js and uses MongoDB (and Mongoose as the ODM). I'm porting everything over to AWS and would like to set up an auto-scaling group behind a load balancer. What I am not really understanding, however, is where my MongoDB instance should live. I know that DynamoDB can be fairly intuitive to set up to work with, but since I am not using it, my question is this: where and how should Mongo be set up to work with my app? Should it be on the same EC2 instance as my app, and if so, how does that work with new instances starting and being terminated? Should I set up an instance dedicated only to Mongo? In addition to that question, how do I create snapshots and backups of my data?
This is a good document for installing MongoDB on EC2, and managing backups: https://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
If you aren't comfortable doing all this yourself, you might also want to look into MongoLab, which is a MongoDB-as-a-Service offering that can run on AWS.
Your database should definitely be in a separate instance from your app, from all aspects.
A very basic tiered application should comprise an app-server cluster in a scaling group behind a load balancer, in a public subnet, and a separate MongoDB cluster (recommended in a different subnet that is not publicly accessible) that your app cluster will talk to. Whether to use an ELB for Mongo actually depends on your Mongo config (replica set).
In regards to snapshots (I assume this will only be relevant for your DB), have a look at this.
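As a concrete illustration of keeping Mongo reachable only from the app tier, here is a sketch using the AWS CLI (the security-group IDs are placeholders, and this example is mine rather than part of the answer above):
# Allow the MongoDB port only from instances in the app tier's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-dddddddd \
  --protocol tcp --port 27017 \
  --source-group sg-aaaaaaaa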
You can easily install MongoDB in AWS Cloud9 by using the process below.
First create a Cloud9 environment in AWS; at the terminal you'll see the prompt ubuntu:~/environment $.
Enter touch mongodb-org-3.6.repo into the terminal.
Now open the mongodb-org-3.6.repo file in your code editor (select it from the left-hand file menu), paste the following into it, and then save the file:
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
Now run the following in your terminal:
sudo mv mongodb-org-3.6.repo /etc/yum.repos.d
sudo yum install -y mongodb-org
If the second command does not work, try:
sudo apt install mongodb-clients
Close the mongodb-org-3.6.repo file and press Close tab when prompted.
Change directories back into root ~ by entering cd into the terminal (the prompt should look like ubuntu:~ $), then enter the following commands:
sudo mkdir -p /data/db
echo 'mongod --dbpath=data --nojournal' > mongod
chmod a+x mongod
Now test mongod with ./mongod
Remember, you must first enter cd to change directories into root ~ before running ./mongod.
Don't forget to shut down ./mongod with Ctrl+C each time you're done working.
If this error pops up while running ./mongod:
exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
then run:
sudo chmod -R go+w /data/db
Reference

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it creates so that we can start editing locally and pushing changes?
I tried gcloud init my-project; however, all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, setup a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well, I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
Once you SSH in, you will see the target directory, named the same as what you told mean-io to use as the name of the application when you ran mean init.
I first made a copy of this folder (mine was named "flow"): cp -r flow flow-bck, and then I removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info, etc.
This copied all the files from the remote instance to a folder called flow-bck on the local machine, at the defined local path. I renamed this folder to match its remote name, flow, and then ran:
cd flow && npm install
This installed all the needed modules and stuff for mean-io. Now, the important part: you have to kill your remote SSH connection so that you can start running the local version of the app, because the SSH tunnel is already using that same port (3000), unless you changed it when you tunneled in.
Then, in my local app directory flow, I ran gulp to start the local version of the app on port 3000. It loads up and runs just fine. I needed to create a new user, as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally, as in the sketch below.
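Putting the local steps together, a rough sketch of the sequence just described (the dbpath is an assumption for a default local install):
# Start MongoDB first, then run the app locally on port 3000
mongod --dbpath /data/db &
cd flow
gulp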
Now, the two things I haven't done yet are editing and deploying a new version based on this... and the nagging question of whether this is all even necessary remains. It'd be great to find that this can all be done with a few simple commands.

Can't create working meteor.js project on a vagrant box

I cannot start up a new Meteor application on a Vagrant Linux box (running on a Mac). It fails every time with an 'unspecified uncaught exception' in Mongo. I have tried a bunch of things to get this going, but even with the simplest set-up, I cannot get the project running. I would be grateful for any suggestions.
My steps are:
create a completely clean Vagrant box ("ubuntu/trusty64");
install Meteor on the new box (curl https://install.meteor.com/ | sh);
choose a location to create the project;
create a new Meteor project (meteor create app);
start up the project (cd app; meteor)
I know that the permissions on the vagrant shared folder are quirky, so for step #3 above I have tried putting the project:
in the shared guest/host folder, /vagrant,
in a subdirectory of the Vagrant home folder (/home/vagrant),
in a subdirectory of / (with permissions set to vagrant:vagrant), and
in a subdirectory of / with permissions set to root:root, the project created with sudo meteor create app and run with sudo meteor
In all cases, I see this error:
=> Started proxy.
Unexpected mongo exit code 100. Restarting.
Unexpected mongo exit code 100. Restarting.
Unexpected mongo exit code 100. Restarting.
Can't start Mongo server.
MongoDB had an unspecified uncaught exception.
This can be caused by MongoDB being unable to write to a local database.
Check that you have permissions to write to .meteor/local. MongoDB does
not support filesystems like NFS that do not allow file locking.
I cannot tell if this is a Vagrant issue (though I think not, given what I've tried) or a Meteor issue, but I suspect it is Meteor (or one of its many dependencies). I doubt it is a permissions issue, since it failed even when running as root. I've tried building Meteor from scratch (the build fails), and I've tried creating the project with --release 0.9.0 and --release 0.9.2-rc1 (the download is simply killed without explanation).
(1) After step 2, 'install Meteor on the new box (curl https://install.meteor.com/ | sh)':
user$ cd /vagrant
user:/vagrant$ meteor create myApp
You should see the myApp folder on your Mac host (in the same folder as the Vagrantfile).
(2) Inside the myApp folder you will see the default .meteor folder; make a folder called local if it is not there:
user:/vagrant$ cd myApp/.meteor
user:/vagrant/myApp/.meteor$ mkdir local
(3) Create the same folder structure in /home/vagrant:
user:/vagrant/myApp/.meteor$ cd ~
~$ mkdir -p myApp/.meteor/local
(4) Link or mount /vagrant/myApp/.meteor/local to /home/vagrant/myApp/.meteor/local:
sudo mount --bind /home/vagrant/myApp/.meteor/local/ /vagrant/myApp/.meteor/local/
or make it permanent:
echo "sudo mount --bind /home/vagrant/myApp/.meteor/local/ /vagrant/myApp/.meteor/local/" >> ~/.bashrc && source ~/.bashrc
(5) Now you can start Meteor:
~$ cd /vagrant/myApp
user:/vagrant/myApp$ meteor
The reason why I mount the local folder rather than the whole .meteor folder is that you can still edit the files inside the .meteor folder on your Mac host. You can replace myApp with whatever name you want.
Hope this helps.
I'm working with a Windows host, but maybe this will apply to your situation as well.
The only folder which causes the issue is .meteor/local. If you relocate this outside of the shared /vagrant folder with a symlink, you should be able to run the meteor app okay.
But to put a symlink in the shared folder you need to enable symlinks in the VM... which requires starting Vagrant as an admin.
I put together an Vagrantfile with some scripts and instructions here:
https://github.com/ElectronVector/vagrant-meteor
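For example, a minimal sketch of the symlink relocation (the paths are illustrative, and this assumes symlinks are enabled in the VM as described above):
# Move Meteor's local build/database folder out of the shared mount, then link it back
mv /vagrant/app/.meteor/local /home/vagrant/app-meteor-local
ln -s /home/vagrant/app-meteor-local /vagrant/app/.meteor/local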
I ran into similar issues trying to run Meteor on Windows. It seems that mongodb is not able to write in the /vagrant folder. I solved this by doing:
sudo mount --bind /home/vagrant/meteorapp/.meteor/ /vagrant/meteorapp/.meteor/
(got that from https://gist.github.com/gabrielhpugliese/5855677)
Here is an answer that solved my problem: launching a Meteor project from a shared folder on a Debian VMware virtual machine (running on Windows).
The issue is that mongodb can't create data files inside a shared folder, so in this case just use an existing mongodb for the meteor project:
export MONGO_URL=mongodb://localhost:27017/your_db
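For example, you can set the variable just for one run (assuming a MongoDB instance is already listening on that port):
MONGO_URL=mongodb://localhost:27017/your_db meteor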
Doing
vagrant reload --provision
solved my problem.
I think the reason might be that some files got corrupted or deleted.