Dockerized emacs not loading extensions - emacs

I'm experimenting with Dockerizing some GUI applications after being inspired by an excellent blog post on the topic. I'm working on getting spacemacs up and running, because spacemacs is awesome. But for some reason Docker doesn't seem to be cloning the spacemacs repo as expected.
My Dockerfile is:
# Run spacemacs in a container
#
# sudo docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY garry-cairns/spacemacs
#
FROM ubuntu:14.04
MAINTAINER Garry Cairns
ENV REFRESHED_AT 2015-06-29
# get base components
RUN ["apt-get", "-y", "install", "emacs", "git"]
# move into our working directory
# ADD must be after chown see http://stackoverflow.com/a/26145444/1281947
RUN ["groupadd", "spacemacs"]
RUN ["useradd", "spacemacs", "-s", "/bin/bash", "-m", "-g", "spacemacs", "-G", "spacemacs"]
ENV HOME /home/spacemacs
WORKDIR /home/spacemacs
RUN ["chown", "-R", "spacemacs:spacemacs", "/home/spacemacs"]
# install emacs and spacemacs extensions
RUN ["git", "clone", "--recursive", "https://github.com/syl20bnr/spacemacs", "~/.emacs.d"]
# add local setup
ADD ./spacemacs .spacemacs
USER spacemacs:spacemacs
ENTRYPOINT ["emacs"]
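I build the image with something along the lines of (the tag matches the one in the run command below):
sudo docker build -t garry-cairns/spacemacs .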
As it says in the comment at the top of the file I run that with:
sudo docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY --name spacemacs garry-cairns/spacemacs
That brings up a stock emacs on my display. If I look for files I can see a .emacs.d directory in ~ but there's nothing in it, hence no spacemacs extensions.
I've also tried just ADDing my own .emacs.d right where I add the .spacemacs file. Interestingly that brings up the spacemacs start screen I'd expect, with my preferred theme, but evil-mode isn't running. I expect other modes probably aren't running either but they're much harder to check than evil mode.
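One thing I'm starting to suspect is the exec-form RUN: the JSON-array form doesn't go through a shell, so ~ is never expanded and the clone presumably ends up in a literal ./~ directory under the WORKDIR rather than in my home directory. If that is the cause, an absolute path should sidestep it, something like:
RUN ["git", "clone", "--recursive", "https://github.com/syl20bnr/spacemacs", "/home/spacemacs/.emacs.d"]
(The chown -R would then presumably need to run after the clone so the spacemacs user owns the checkout.)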
Can anyone suggest what more I could do to get this working?

Related

write "message of the day" by images used by vscode remote development

I am using the Remote Development extension in VS Code with a Docker image, and when I start it I would like to see the message of the day ("motd") in the console.
The Dockerfile in .devcontainer has this at the end:
COPY motd /etc/
... # change the default user and WORKDIR
CMD cat /etc/motd && /bin/bash
If I run this image manually I see the message, but when VS Code uses it I don't see it in the console.
The best solution I have found so far is:
RUN echo "cat /etc/motd" >> $HOME/.bashrc
CMD /bin/bash
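Presumably this works because VS Code opens its own interactive shells in the container rather than showing the output of the image's CMD, so the motd has to be printed from something every shell runs, such as .bashrc. Put together, the end of the Dockerfile might look roughly like this (the motd path mirrors the question, the rest is illustrative):
COPY motd /etc/
# print the motd in every interactive bash session, including the ones VS Code opens
RUN echo "cat /etc/motd" >> $HOME/.bashrc
CMD ["/bin/bash"]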

Run Kitura Docker Image causes libmysqlclient.so.18 Error

After I had some previous problems Dockerising my MySQL Kitura setup here: Docker Build Kitura Sqift Container - Shim.h mysql.h file not found
I am running into a new problem I cannot solve while following the guide at https://www.kitura.io/docs/deploying/docker.html.
After I followed all the steps and also applied the earlier MySQL fix, I was able to run the following command:
docker run -p 8080:8080 -it myapp-run
This, however, leads to the following issue:
error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory
I assume something is again trying to open libmysqlclient from the wrong environment directories?
But how can I fix this issue when building the Docker images? Is there a way, and ideally a smart way?
Thanks a lot again for the help.
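A quick way to confirm which shared libraries the binary actually expects is to run ldd on it inside the container, for example ldd .build/release/<your-app> | grep -i mysql (the binary path is just an example); anything the loader cannot resolve shows up as "not found".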
I was able to update and enhance my Dockerfile; it now runs smoothly and can also be used for CI and CD tasks.
FROM ibmcom/swift-ubuntu-runtime:latest
##FROM ibmcom/swift-ubuntu-runtime:5.0.1
LABEL maintainer="IBM Swift Engineering at IBM Cloud"
LABEL Description="Template Dockerfile that extends the ibmcom/swift-ubuntu-runtime image."
# We can replace this port with what the user wants
EXPOSE 8080
# Default user if not provided
ARG bx_dev_user=root
ARG bx_dev_userid=1000
# Install system level packages
RUN apt-get update && apt-get dist-upgrade -y
RUN apt-get update && apt-get install -y sudo libmysqlclient-dev
# Add utils files
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/run-utils.sh /swift-utils/run-utils.sh
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/common-utils.sh /swift-utils/common-utils.sh
RUN chmod -R 555 /swift-utils
# Create user if not root
RUN if [ $bx_dev_user != "root" ]; then useradd -ms /bin/bash -u $bx_dev_userid $bx_dev_user; fi
# Bundle application source & binaries
COPY ./.build /swift-project/.build
# Command to start Swift application
CMD [ "sh", "-c", "cd /swift-project && .build/release/Beautylivery_Server_New" ]

How can the terminal in Jupyter automatically run bash instead of sh

I love the terminal feature, and it works very well for our use case, where I would like students to do some work directly from a terminal so they experience that environment. The shell that launches automatically is sh and does not pick up all of my bash defaults. I can type "bash" and everything works perfectly. How can I make "bash" the default?
Jupyter uses the environment variable $SHELL to decide which shell to launch. If you are running jupyter using init then this will be set to dash on Ubuntu systems. My solution is to export SHELL=/bin/bash in the script that launches jupyter.
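For example, a minimal launch script along these lines (the jupyter path and port are just placeholders):
#!/bin/bash
# make sure terminals opened from Jupyter spawn bash rather than dash
export SHELL=/bin/bash
exec /usr/local/bin/jupyter notebook --no-browser --port=8888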
I have tried the ultimate way of switching the system-wide SHELL environment variable: adding the following line to the file /etc/environment:
SHELL=/bin/bash
This works in an Ubuntu environment. From then on, the SHELL variable always points to /bin/bash instead of /bin/sh in the Terminal after a reboot.
Also, setting up a cron job to launch jupyter notebook at system startup triggered the same issue in the notebook's Terminal.
It turns out that I need to set the variable and source a Bash init file like ~/.bashrc in the cron job itself, added via crontab -e as follows:
@reboot source /home/USERNAME/.bashrc && \
export SHELL=/bin/bash && \
/SOMEWHERE/jupyter notebook --port=8888
In this way, I can log in to the Ubuntu server via a remote web browser (http://server-ip-address:8888/) and the jupyter notebook's Terminal opens with Bash by default, the same as in my local environment.
You can add this to your jupyter_notebook_config.py
c.NotebookApp.terminado_settings = {'shell_command': ['/bin/bash']}
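If the config file doesn't exist yet, it can be generated first (this writes it to ~/.jupyter/):
jupyter notebook --generate-config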
With Jupyter running on Ubuntu 15.10, the Jupyter shell will default to /bin/sh, which is a symlink to /bin/dash.
rm /bin/sh
ln -s /bin/bash /bin/sh
That fix got Jupyter terminal booting into bash for me.

Symlinking unicorn_init.sh into /etc/init.d doesn't show with chkconfig --list

I'm symlinking my config/unicorn_init.sh to /etc/init.d/unicorn_project with:
sudo ln -nfs config/unicorn_init.sh /etc/init.d/unicorn_<project>
Afterwards, when I run chkconfig --list my unicorn_<project> script doesn't show. I'm adding my unicorn script so my application loads when the server boots.
Obviously, this is not allowing me to add my script with:
chkconfig unicorn_<project> on
Any help / advice would be awesome :).
Edit:
Also, when I'm in /etc/init.d/ and run:
sudo service unicorn_project start
It says: "unrecognized service"
I figured this out. There were two things wrong with what I was doing:
1) You have to make sure your unicorn script can play nice with chkconfig by adding the code below just under #!/bin/bash. Props to digitalocean's blog for the help.
# chkconfig: 2345 95 20
# description: Controls Unicorn sinatra server
# processname: unicorn
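(For reference, 2345 are the runlevels the service should start in, 95 is the start priority and 20 the stop priority.)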
2) I was attempting to symlink the config/unicorn_init.sh file from inside the project directory using a relative path, which created a dangling symlink (shown pink in the listing, where a working symlink would be teal). To fix this, I removed the dangling symlink and provided the absolute path to the unicorn_init.sh file.
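With a made-up project path, the fix looked something like:
sudo rm /etc/init.d/unicorn_<project>
sudo ln -nfs /home/deploy/apps/<project>/current/config/unicorn_init.sh /etc/init.d/unicorn_<project>
sudo chkconfig unicorn_<project> on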
To debug this I used ll in the /etc/init.d/ directory to see permissions and file types, ran chkconfig --list to see which services were registered, and tried to run the dangling symlink with sudo service unicorn_<project> restart.
Hope this helps someone.

How to use virtualenv + virtualenvwrapper properly with Vagrant?

I found that the most convenient way of installing virtualenv + virtualenvwrapper is by using virtualenvburrito.
Now I can manage to automate my pip installs in a vagrant provision by the following:
Line in Vagrantfile:
config.vm.provision :shell, :path => "bootstrap.sh"
Lines in bootstrap.sh:
curl -s https://raw.github.com/brainsik/virtualenv-burrito/master/virtualenv-burrito.sh | $SHELL
source /root/.venvburrito/startup.sh
cd /vagrant
mkvirtualenv my_project
pip install -r requirements.txt
Then I run vagrant ssh but then I have to run the following to access my virtual environment:
sudo -i
source /root/.venvburrito/startup.sh
workon my_project
I don't want to always have to run sudo -i and source /root/.venvburrito/startup.sh, I just want to be able to run workon my_project directly.
But
(I.) I can't seem to append source /root/.venvburrito/startup.sh to my ~/.profile, and
(II.) even if it were appended to that file I'd get a permission error. I can't seem to change the permissions on any protected file either.
The best way to deal with (I.) and (II.) is to set the privileged attribute in the Vagrantfile to false.
See here
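A minimal sketch of that, reusing the provisioner line from above:
# run the provisioner as the vagrant user instead of root
config.vm.provision :shell, :path => "bootstrap.sh", :privileged => false
With the provisioner unprivileged, the bootstrap runs as the vagrant user, so virtualenv-burrito and the virtualenv end up under that user's home rather than /root.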