Error while building image via GitLab CI/CD - Error while fetching server API version - docker-compose

Operating system: CentOS Linux release 8.3.2011 x86_64
I am trying to build an image on push to master. This is my simple CI/CD config file, .gitlab-ci.yml:
image:
  name: docker/compose:alpine-1.29.2
  entrypoint: [""]

stages:
  - build

build-job:
  stage: build
  script:
    - echo $(whoami)
    - docker -v
    - docker-compose --version
    - docker-compose build --no-cache
    - docker-compose push
    - echo "Compile complete."
  only:
    - master
While the pipeline is running, I get the following error:
$ echo $(whoami)
root
$ docker -v
Docker version 19.03.15, build 99e3ed8
$ docker-compose --version
docker-compose version 1.29.2, build 5becea4
$ docker-compose build --no-cache
[21] Failed to execute script docker-compose
Traceback (most recent call last):
File "urllib3/connectionpool.py", line 677, in urlopen
File "urllib3/connectionpool.py", line 392, in _make_request
File "http/client.py", line 1277, in request
File "http/client.py", line 1323, in _send_request
File "http/client.py", line 1272, in endheaders
File "http/client.py", line 1032, in _send_output
File "http/client.py", line 972, in send
File "docker/transport/unixconn.py", line 43, in connect
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "docker-compose", line 3, in <module>
File "compose/cli/main.py", line 81, in main
File "compose/cli/main.py", line 200, in perform_command
File "compose/cli/command.py", line 70, in project_from_options
File "compose/cli/command.py", line 153, in get_project
File "compose/cli/docker_client.py", line 43, in get_client
File "compose/cli/docker_client.py", line 170, in docker_client
File "docker/api/client.py", line 197, in __init__
File "docker/api/client.py", line 222, in _retrieve_server_version
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
I checked the permissions on the socket files:
ls -la /var/run/docker.sock
srw-rw-rw- 1 root docker 0 Feb 1 10:05 /var/run/docker.sock
ls -la /run/containerd/containerd.sock
srw-rw-rw- 1 root root 0 Feb 1 10:05 /run/containerd/containerd.sock
The Docker service is running on the host machine:
systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-02-01 10:05:10 MSK; 27min ago
     Docs: https://docs.docker.com
 Main PID: 10698 (dockerd)
    Tasks: 10
   Memory: 430.4M
   CGroup: /system.slice/docker.service
           └─10698 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
...
Does anybody have any ideas?

I had the same issue. Here is a solution:
sudo service docker start
or you can list the images to check that the daemon is reachable:
docker images

You need to pass the Docker socket from the host machine to each GitLab runner in /etc/gitlab-runner/config.toml (default path):
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]


Replacing a disk while retaining the OSD ID

In a Ceph cluster, how do we replace failed disks while keeping the OSD ID(s)?
Here are the steps I followed (unsuccessfully):
# 1 destroy the failed osd(s)
for i in 38 41 44 47; do ceph osd destroy $i --yes-i-really-mean-it; done
# 2 create the new ones that take the previous osd ids
ceph orch apply osd -i replace.yaml
# Scheduled osd.ceph_osd_ssd update...
The replace.yaml:
service_type: osd
service_id: ceph_osd_ssd # "ceph_osd_hdd" for hdd
placement:
  hosts:
    - storage01
data_devices:
  paths:
    - /dev/sdz
    - /dev/sdaa
    - /dev/sdab
    - /dev/sdac
osd_id_claims:
  storage01: ['38', '41', '44', '47']
But nothing happens; the OSD IDs still show as destroyed and the devices have no OSD ID(s).
# ceph -s
  cluster:
    id:     db2b7dd0-1e3b-11eb-be3b-40a6b721faf4
    health: HEALTH_WARN
            failed to probe daemons or devices
            5 daemons have recently crashed
I also tried to run this
ceph orch daemon add osd storage01:/dev/sdaa
which gives:
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1177, in _handle_command
return self.handle_command(inbuf, cmd)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 141, in handle_command
return dispatch[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 318, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 103, in <lambda>
wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 92, in wrapper
return func(*args, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/module.py", line 713, in _daemon_add_osd
raise_if_exception(completion)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 643, in raise_if_exception
raise e
RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new daca7735-179b-4443-acef-412bc39865e3
INFO:cephadm:/bin/podman:stderr Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-daca7735-179b-4443-acef-412bc39865e3 ceph-0a533319-def2-4fbe-82f5-e76f971b7f48
INFO:cephadm:/bin/podman:stderr stderr: Calculated size of logical volume is 0 extents. Needs to be larger.
INFO:cephadm:/bin/podman:stderr --> Was unable to complete a new OSD, will rollback changes
INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.38 --yes-i-really-mean-it
INFO:cephadm:/bin/podman:stderr stderr: purged osd.38
INFO:cephadm:/bin/podman:stderr --> RuntimeError: command returned non-zero exit status: 5
Traceback (most recent call last):
File "<stdin>", line 5204, in <module>
File "<stdin>", line 1116, in _infer_fsid
File "<stdin>", line 1199, in _infer_image
File "<stdin>", line 3322, in command_ceph_volume
File "<stdin>", line 878, in call_throws
RuntimeError: Failed command: /bin/podman run --rm --net=host --ipc=host --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=storage01 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v /var/run/ceph/db2b7dd0-1e3b-11eb-be3b-40a6b721faf4:/var/run/ceph:z -v /var/log/ceph/db2b7dd0-1e3b-11eb-be3b-40a6b721faf4:/var/log/ceph:z -v /var/lib/ceph/db2b7dd0-1e3b-11eb-be3b-40a6b721faf4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmp3vjwl32x:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpclrbifgb:/var/lib/ceph/bootstrap-osd/ceph.keyring:z --entrypoint /usr/sbin/ceph-volume docker.io/ceph/ceph:v15 lvm prepare --bluestore --data /dev/sdaa --no-systemd
Zapping the devices also errors:
ceph orch device zap storage01 /dev/sdaa --force
Error EINVAL: Zap failed: INFO:cephadm:/bin/podman:stderr --> Zapping: /dev/sdaa
INFO:cephadm:/bin/podman:stderr --> Zapping lvm member /dev/sdaa. lv_path is /dev/ceph-0a533319-def2-4fbe-82f5-e76f971b7f48/osd-data-9a23996c-6b99-4a46-b539-1dfe2e9358ae
INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-0a533319-def2-4fbe-82f5-e76f971b7f48/osd-data-9a23996c-6b99-4a46-b539-1dfe2e9358ae bs=1M count=10 conv=fsync
INFO:cephadm:/bin/podman:stderr stderr: dd: fsync failed for '/dev/ceph-0a533319-def2-4fbe-82f5-e76f971b7f48/osd-data-9a23996c-6b99-4a46-b539-1dfe2e9358ae': Input/output error
INFO:cephadm:/bin/podman:stderr stderr: 10+0 records in
INFO:cephadm:/bin/podman:stderr 10+0 records out
INFO:cephadm:/bin/podman:stderr 10485760 bytes (10 MB, 10 MiB) copied, 0.00846806 s, 1.2 GB/s
INFO:cephadm:/bin/podman:stderr --> RuntimeError: command returned non-zero exit status: 1
Traceback (most recent call last):
File "<stdin>", line 5203, in <module>
File "<stdin>", line 1115, in _infer_fsid
File "<stdin>", line 1198, in _infer_image
File "<stdin>", line 3321, in command_ceph_volume
File "<stdin>", line 877, in call_throws
RuntimeError: Failed command: /bin/podman run --rm --net=host --ipc=host --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=storage01 -v /var/run/ceph/db2b7dd0-1e3b-11eb-be3b-40a6b721faf4:/var/run/ceph:z -v /var/log/ceph/db2b7dd0-1e3b-11eb-be3b-40a6b721faf4:/var/log/ceph:z -v /var/lib/ceph/db2b7dd0-1e3b-11eb-be3b-40a6b721faf4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm --entrypoint /usr/sbin/ceph-volume docker.io/ceph/ceph:v15 lvm zap --destroy /dev/sdaa
Here is the relevant documentation for this:
https://docs.ceph.com/en/latest/cephadm/drivegroups/#drivegroups
https://docs.ceph.com/en/latest/mgr/orchestrator_modules/#osd-replacement
https://docs.ceph.com/en/latest/mgr/orchestrator/#orchestrator-cli-placement-spec
https://github.com/ceph/ceph/blob/892c51dd3c5f7108e766bea30cd5e0d801b0abd3/src/python-common/ceph/deployment/drive_group.py#L26
https://github.com/ceph/ceph/blob/892c51dd3c5f7108e766bea30cd5e0d801b0abd3/src/python-common/ceph/deployment/drive_group.py#L38
https://github.com/ceph/ceph/blob/892c51dd3c5f7108e766bea30cd5e0d801b0abd3/src/python-common/ceph/deployment/drive_group.py#L161
The zap fails because the old Ceph LVM volumes are still present on the devices, so remove them manually first:
lvremove /dev/ceph-0a533319-def2-4fbe-82f5-e76f971b7f48/osd-data-9a23996c-6b99-4a46-b539-1dfe2e9358ae -y
vgremove ceph-0a533319-def2-4fbe-82f5-e76f971b7f48
Do this for every affected device, then rerun the zaps:
for i in '/dev/sdz' '/dev/sdaa' '/dev/sdab' '/dev/sdac'; do ceph orch device zap storage01 $i --force; done
and finally
ceph orch apply osd -i replace.yaml
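If many devices are affected, here is a hedged sketch of clearing all the leftover ceph-* volume groups in one pass before re-running the zap loop (the grep pattern is an assumption; verify the list with vgs first):

# Assumption: every leftover VG from the old OSDs is named ceph-<uuid>.
for vg in $(vgs --noheadings -o vg_name | grep 'ceph-'); do
  lvremove -y "$vg"      # removes all logical volumes inside the VG
  vgremove -y "$vg"
done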

How can I fix this Ceph error while running ceph-deploy?

I am setting up a Ceph cluster right now and would like to create the cluster.
I've never set up Ceph before. When I executed ceph-deploy as a user with root rights (not as root, and not in /), there was no error. After that I read in the manual that it should be set up in its own folder with its own user account, so I removed Ceph and the keys and started again. Now I get this error:
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ] return f(*a, **kw)
[ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 147, in _main
[ceph_deploy][ERROR ] fh = logging.FileHandler('ceph-deploy-{cluster}.log'.format(cluster=args.cluster))
[ceph_deploy][ERROR ] File "/usr/lib64/python2.7/logging/__init__.py", line 902, in __init__
[ceph_deploy][ERROR ] StreamHandler.__init__(self, self._open())
[ceph_deploy][ERROR ] File "/usr/lib64/python2.7/logging/__init__.py", line 925, in _open
[ceph_deploy][ERROR ] stream = open(self.baseFilename, self.mode)
[ceph_deploy][ERROR ] IOError: [Errno 13] Permission denied: '/home/myuser/cluster/ceph-deploy-ceph.log'
Please try the command below:
chown ceph:ceph /home/myuser/cluster
IOError: [Errno 13] Permission denied: '/home/myuser/cluster/ceph-deploy-ceph.log'
It seems you are using a user named "myuser" but running the command with root rights, so Ceph assumes you are the root user. ceph-deploy creates its deploy log file in the current directory. Maybe you ran the first ceph-deploy command with root rights and then ran the second ceph-deploy command as the "myuser" user; I think that's the problem. You should change the permissions on the ~/cluster/ceph-deploy-ceph.log file so that it is readable and writable by the "myuser" user.
Change the ownership of your directory to "myuser" with sudo chown -R myuser:myuser /home/myuser/cluster
And if you want to restart deploying the cluster, remove all files in your ~/cluster directory. Then purge the Ceph packages and purge all data in /var/lib/ceph/ using ceph-deploy purge <node> and ceph-deploy purgedata <node>. Also use the ceph-deploy forgetkeys command to remove the keys.
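A hedged sketch of that reset sequence (node1 and node2 are placeholders for your actual hosts), run as "myuser" from inside ~/cluster:

cd ~/cluster
rm -rf ./*                        # clear previous ceph-deploy output, including the log
ceph-deploy purge node1 node2     # remove Ceph packages from the nodes
ceph-deploy purgedata node1 node2 # remove data in /var/lib/ceph/
ceph-deploy forgetkeys            # drop the locally stored keys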
As root on the server:
mkdir /home/myuser/cluster/
chown myuser. -R /home/myuser/cluster/
And run ceph-deploy again as myuser.
If you do not mind, you can also just remove the log and try running the command again.
sudo rm /home/myuser/cluster/ceph-deploy-ceph.log

OWASP/ZAP hangs when trying to scan

I am trying out OWASP/ZAP to see if it is something we can use for our project, but I cannot make it work, and I don't know what I am doing wrong; the documentation really does not help. What I am trying to do is run a scan on my API, which is running in a Docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container; I even tried with localhost and 127.0.0.1,
but it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, and my ZAP Docker container is in an unhealthy state; after some time it just crashes and ends up with a bunch of NullPointerExceptions. Does the ZAP Docker image only work on Linux, or is there something specific I need to do when running it on a Windows machine? I don't get why this is not working even though I am following the guideline in https://github.com/zaproxy/zaproxy/wiki/Docker exactly.
Edit 1
In my latest attempt I am targeting my host IP address directly, with the port that I am exposing my API on, and I get the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
File "/zap/zap-baseline.py", line 347, in main
with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run Docker with docker run -v $(pwd):/zap/wrk/:rw ...,
you are mapping the /zap/wrk/ directory in the Docker image to the current working directory (cwd) of the machine on which you are running Docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue:
$ docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write to the gen.conf file that you have mounted at /zap/wrk.
Do you have write access to the cwd when it's not mounted?
The reason for that is: if you use the -r parameter, ZAP will attempt to generate the file report.html at the location /zap/wrk/. In order to make this work, we have to mount a directory to /zap/wrk.
But when you do so, it is important that the ZAP container is able to perform write operations on the mounted directory.
So, below is the working solution using a GitLab CI YAML file. I started with the approach of using image: owasp/zap2docker-stable, but then had to switch to vanilla docker commands to execute it.
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted into the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be written.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the above code are: mount the local directory zap-reports to /zap/wrk via $(pwd):/zap/wrk, and pass the current user and group of the host machine to the Docker container so the process runs as the same user/group. This allows the reports to be written to the directory mounted from the local host, and is done with -u $(id -u ${USER}):$(id -g ${USER}).
Below is the working code with image: owasp/zap2docker-stable
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html

"@jupyterlab/hub-extension@0.12.0" is not compatible with the current JupyterLab

I want to install The Littlest JupyterHub via .gitlab-ci.yml on a subdomain.
Below is the error message I get, followed by my .gitlab-ci.yml. I need a little help to fix this error. I know The Littlest JupyterHub is still in beta.
Is it also possible to add packages like pandas and matplotlib via a shell script?
$ curl https://raw.githubusercontent.com/jupyterhub/the-littlest-jupyterhub/master/bootstrap/bootstrap.py | sudo python3 - --admin klein
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 2682 100 2682 0 0 24381 0 --:--:-- --:--:-- --:--:-- 24381
Checking if TLJH is already installed...
Setting up hub environment
Installed python & virtual environment
Set up hub virtual environment
Setting up TLJH installer...
Setup tljh package
Starting TLJH installer...
> /usr/bin/npm pack @jupyterlab/hub-extension
Incompatible extension:
"@jupyterlab/hub-extension@0.12.0" is not compatible with the current JupyterLab
Conflicting Dependencies:
JupyterLab         Extension          Package
>=0.18.3 <0.19.0   >=0.19.1 <0.20.0   @jupyterlab/application
>=0.18.3 <0.19.0   >=0.19.1 <0.20.0   @jupyterlab/apputils
Found compatible version: 0.11.0
> /usr/bin/npm pack @jupyterlab/hub-extension@0.11.0
> node /opt/tljh/user/lib/python3.6/site-packages/jupyterlab/staging/yarn.js install
> node /opt/tljh/user/lib/python3.6/site-packages/jupyterlab/staging/yarn.js run build
System has not been booted with systemd as init system (PID 1). Can't operate.
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/tljh/hub/lib/python3.6/site-packages/tljh/installer.py", line 447, in <module>
main()
File "/opt/tljh/hub/lib/python3.6/site-packages/tljh/installer.py", line 436, in main
ensure_jupyterhub_service(HUB_ENV_PREFIX)
File "/opt/tljh/hub/lib/python3.6/site-packages/tljh/installer.py", line 141, in ensure_jupyterhub_service
systemd.reload_daemon()
File "/opt/tljh/hub/lib/python3.6/site-packages/tljh/systemd.py", line 19, in reload_daemon
], check=True)
File "/usr/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['systemctl', 'daemon-reload']' returned non-zero exit status 1.
Downloading traefik 1.6.5...
ERROR: Job failed: exit code 1
image: ubuntu:latest

before_script:
  - apt-get update
  - apt-get install -y curl git ssh python3 rsync sudo
  - git submodule update --init --recursive

stages:
  - test
  - deploy

test_job:
  stage: test
  script:
    - curl https://raw.githubusercontent.com/jupyterhub/the-littlest-jupyterhub/master/bootstrap/bootstrap.py | sudo python3 - --admin admin

deploy:
  stage: deploy
  script:
    - curl https://raw.githubusercontent.com/jupyterhub/the-littlest-jupyterhub/master/bootstrap/bootstrap.py | sudo python3 - --admin adminn
    - mkdir .public
    - cp -r * .public
    - mv .public public
    - mkdir "${HOME}/.ssh"
    - echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
    - echo "${SSH_PRIVATE_KEY}" > "${HOME}/.ssh/id_rsa"
    - chmod 700 "${HOME}/.ssh/id_rsa"
    - rsync -hrvz --delete --exclude=_ public/ user@example.com:www/jupyter/
  artifacts:
    paths:
      - public
  only:
    - master

ckan harvester: "No module named pika" error

On a CKAN instance that is running OK, I installed the harvester extension following this guide: https://github.com/ckan/ckanext-harvest
These are the steps I followed:
. /usr/lib/ckan/default/bin/activate
cd /usr/lib/ckan/default/src/ckan
sudo pip install -e git+https://github.com/okfn/ckanext-harvest.git@stable#egg=ckanext-harvest
cd /usr/lib/ckan/default/src/ckan/src/ckanext-harvest
sudo pip install -r pip-requirements.txt
This is the content of pip-requirements.txt:
pika==0.9.8
redis==2.10.1
I continued configuring the plugin and everything seemed to work OK; I have it running at http://localhost/harvest. Then I created a new source, and when I want to start the gather command I get this error:
$ . /usr/lib/ckan/default/bin/activate
$ cd /usr/lib/ckan/default/src/ckan/src/ckanext-harvest
$ paster --plugin=ckanext-harvest harvester gather_consumer --config=/etc/ckan/default/production.ini
Traceback (most recent call last):
File "/usr/lib/ckan/default/bin/paster", line 9, in <module>
load_entry_point('PasteScript==1.7.5', 'console_scripts', 'paster')()
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 104, in run
invoke(command, command_name, options, args[1:])
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 143, in invoke
exit_code = runner.run(args)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 238, in run
result = self.command()
File "/usr/lib/ckan/default/src/ckan/src/ckanext-harvest/ckanext/harvest/commands/harvester.py", line 125, in command
from ckanext.harvest.queue import get_gather_consumer, gather_callback
File "/usr/lib/ckan/default/src/ckan/src/ckanext-harvest/ckanext/harvest/queue.py", line 5, in <module>
import pika
ImportError: No module named pika
I'm pretty sure there must be something really silly going on with the virtualenv (Python newbie here).
This is because you used sudo pip. Because of how Python's virtualenv works, if you want to use sudo to install into the virtualenv, you need to give the full path to its pip. Something like this should work:
sudo /usr/lib/ckan/default/bin/pip install -e git+https://github.com/okfn/ckanext-harvest.git@stable#egg=ckanext-harvest
cd /usr/lib/ckan/default/src/ckanext-harvest
sudo /usr/lib/ckan/default/bin/pip install -r pip-requirements.txt
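As a quick sanity check, a hedged sketch of verifying that the virtualenv's interpreter is the one in use and that pika actually landed inside it:

. /usr/lib/ckan/default/bin/activate
which python pip                                  # both should point into /usr/lib/ckan/default/bin
python -c "import pika; print(pika.__version__)"  # should print 0.9.8 per pip-requirements.txt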