Use SCP command on Windows GitLab runner

In my gitlab-ci.yml, I wrote:
deploy:
  stage: build
  script:
    - scp target/ROOT.war user@host:/data/temp
But my Windows GitLab CI runner throws an error:
scp is not recognized as an internal or external command
Do you know if it's possible to add an scp package to the runner, or something like that?

scp.exe is packaged with any recent distribution of Git for Windows:
vonc@VONCAVN7 C:\Users\vonc
> where scp
D:\prgs\git\latest\usr\bin\scp.exe
If your GitLab agent has <Git installation path>/usr/bin in its %PATH%, it will have scp.
The OP GGO details in the comments:
In fact I added the git /usr/bin path to the system path variable (and not the user path variable) and recreated my runner instead of using the set command.
It works!
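For illustration, here is one way to append Git's usr/bin to the system PATH from an elevated PowerShell; the install path below is an assumption (use whatever where scp reports on your machine), and the runner has to be restarted or recreated afterwards to pick up the new value:
# Append Git's usr/bin to the machine-wide PATH (default install location assumed)
$gitUsrBin = 'C:\Program Files\Git\usr\bin'
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$machinePath;$gitUsrBin", 'Machine')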

Related

How can you solve a "File not found error: No such file or directory" when deploying on gcloud?

Hi, I am new to programming and I hope someone can help with this.
A teammate of mine made some changes yesterday through GitHub, and now we get this error when we run gcloud app deploy on our gcloud: "ERROR: gcloud crashed (FileNotFoundError): [Errno 2] No such file or directory: '/home/name/project/venv/bin/python3.8'."
The app itself still works, but it seems we cannot deploy anymore, as we get this error. I really appreciate you reading this, thanks.
The error (No such file or directory: '/home/name/project/venv/bin/python3.8') suggests that a virtualenv (venv) was active (perhaps while gcloud was installed) and it is no longer effective (/home/name/project/venv/bin/python3.8 can no longer be found on the path).
To reactivate the virtualenv, you can:
source /home/name/project/venv/bin/activate
Which should put python3.8 back in your path:
which python3.8
/home/name/project/venv/bin/python3.8
And should return gcloud to a working state for the current shell session.
When that session ends, you'll need to rerun the source ... command.
It's good practice to explicitly deactivate the virtualenv when you're done with it.
Often, when a virtualenv is active, the shell prompt is prefixed with (venv) to indicate that you're in it:
# Create a virtualenv in `xxxx`
python3.8 -m venv xxxx
# Activate `xxxx`
me@host:~ $ source xxxx/bin/activate
# Note my prompt is prefixed with `(xxxx)`
(xxxx) me@host:~ $ which python3.8
/home/me/xxxx/bin/python3.8
# Within the virtualenv, `python3.8` is ln'd
(xxxx) me@host:~ $ ls -l $(which python3.8)
/home/me/xxxx/bin/python3.8 -> /usr/bin/python3.8
# Deactivate `xxxx`
(xxxx) me@host:~ $ deactivate
me@host:~ $ which python3.8
/usr/bin/python3.8
NOTE In the example above, rather than use your actual venv directory, I'm using xxxx to demonstrate the point.
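If the interpreter the venv was built from has itself been removed or moved, reactivating won't help; the usual fix is to recreate the venv. A minimal sketch, assuming the project's dependencies are pinned in a (hypothetical) requirements.txt:
# Rebuild the broken venv and reinstall its dependencies
python3.8 -m venv /home/name/project/venv
source /home/name/project/venv/bin/activate
pip install -r requirements.txt  # assumption: deps are listed here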

How to load environment variables on a remote AIX machine through ssh while running a script from a Jenkins pipeline?

We are using some custom modules in our Perl automation framework, which runs through a Jenkins pipeline. Recently we got a "package not found" error for all custom modules while executing test cases on AIX servers, as the latest Perl version is installed there. So we tried to add PERL5LIB to the path as mentioned in this document:
https://perlmaven.com/how-to-change-inc-to-find-perl-modules-in-non-standard-locations
We added "export PERL5LIB=/home/foobar/code" to /etc/profile on the AIX server, and the script executes without any issue when run from the local AIX machine.
Issue:
But we have a Jenkins pipeline that executes the scripts on the AIX server using ssh. When we ssh to the AIX server in the pipeline script, the variables we set in /etc/profile do not load, and we get the "package not found" error.
Question: How can I load the profile on the AIX server while running from the pipeline? Or is there any other way to handle this? Before executing the script I want to export PERL5LIB on the remote AIX server through the pipeline (only once), after which I should not get the "package not found" error.
Below are the solutions I have tried:
Load the /etc/profile: ssh AIX server '. /etc/profile' (using dot, since source is not working on AIX)
Adding the line "export PERL5LIB=/home/foobar/code" to .ssh/environment on the AIX server and setting PermitUserEnvironment yes
Appreciate any help on this.
Assign values to variables the usual way:
ssh user@host 'export PERL5LIB=/somepath; echo $PERL5LIB'
user@host's password:
/somepath
or
ssh user@host '. /etc/profile.local; echo $PERL5LIB'
user@host's password:
/somepath/from/profile
Edit:
If you have to execute multiple commands, create a script and upload it to the target computer, for example:
SCRIPTNAME=/tmp/$$.$RANDOM.script
scp myscript.sh user@host:"$SCRIPTNAME"
ssh user@host "$SCRIPTNAME"
This was solved with the changes below.
Step 1: Edit ~/.ssh/environment. Add the variable PERL5LIB="/path of the module/".
Step 2: Edit /etc/ssh/sshd_config. Change the variable PermitUserEnvironment from no to yes (uncomment it if it is commented). This enables passing environment variables over SSH.
Step 3: Restart the SSHD service. (This is important: I had tried steps 1 and 2 before, but had not restarted the service, so the solution was not working.)
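On AIX, sshd typically runs under the System Resource Controller, so the restart would look like this (subsystem name assumed to be sshd):
stopsrc -s sshd && startsrc -s sshd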
We can create a script and run it before executing the automation tests from the pipeline.
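Combining the answers above, the pipeline can also export PERL5LIB and invoke the tests in the same remote shell, so nothing needs to persist on the server; a sketch, where run_tests.pl is a hypothetical stand-in for the actual test entry point:
ssh user@aixhost 'export PERL5LIB=/home/foobar/code; perl run_tests.pl'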

Windows Subsystem for Linux: Command Not Found Error

I have installed Windows Subsystem for Linux to run Ubuntu 16.04 on my Windows 10 Home platform.
I have extracted all required directories to run KSQL on this platform.
Now, when I try to run any command after navigating to the bin folder, it throws a "command not found" error. I tried to add the PATH as well, but it's not working.
Please suggest.
There's a typo in your command:
export PATH=$PATH:/opt/kafka/confleuent-5.4.0/bin
Instead of confluent-5.4.0, you misspelled it as confleuent-5.4.0.
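The corrected line (assuming Confluent really is unpacked under /opt/kafka):
export PATH=$PATH:/opt/kafka/confluent-5.4.0/bin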
The easiest way to install the Confluent CLI is to use the scripted installation:
Install the Confluent CLI using this script. This command creates a bin directory in your designated location (<path-to-directory>/bin). The location must be in your PATH (e.g. /usr/local/bin). On Microsoft Windows, an appropriate Linux environment may need to be installed in order to have the curl and sh commands available, such as the Windows Subsystem for Linux.
curl -L https://cnfl.io/cli | sh -s -- -b /<path-to-directory>/bin
Finally, if you run confluent start you can get all services up and running, including KSQL (assuming you have the correct configuration files).
You could also just run the scripts via their relative path:
cd bin
./kafka-topics.sh
Also, all those commands work in CMD / PowerShell as well
If you want to run KSQL, I'd suggest just using Docker

Change Conda environment via PowerShell script (for GitLab-CI)

I am running some automated Python tests with GitLab-CI on a Windows 10 machine. The GitLab runner on the machine used to work with executor = "shell" using the simple Windows shell. This recently stopped working (the docs say support for this shell is deprecated), and the only way to get it working again has been to use PowerShell instead, adding shell = "powershell" to our config.toml file. For the tests to run, we need to activate a conda environment. Unfortunately, this seems not to work via the PowerShell script that GitLab-CI creates for the job.
When I open PowerShell manually, logged in as the user that executes the GitLab runner jobs, changing conda environments works. I have run conda init powershell and can change the environment with conda activate myenv. Yet, when I include the following in my gitlab-ci.yml file:
script:
  - conda activate myenv
  - conda list
the output from conda list confirms that the environment myenv is not activated and the base environment is used instead.
Trying the absolute path instead, like this:
script:
  - conda activate C:\Users\myuser\Miniconda3\envs\myenv
  - conda list
does not work either.
So it seems like I can manually activate the correct conda environment in PowerShell, but activating the environment via the PowerShell script created by GitLab-CI does not work. Is there a fix for this problem? Any help is greatly appreciated.
It looks like GitLab executes each line of the script in a separate subshell, so combine the commands into a single line, as sketched below.
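For example (a sketch using the environment name from the question; in PowerShell, a semicolon chains the two commands in one statement):
script:
  - conda activate myenv; conda list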
If that doesn't work, most conda commands will accept the name of the environment as parameter -n:
conda list -n myenv
conda install -n myenv PackageName
...
As long as you're just using conda, it shouldn't be necessary to activate the environment.
As it seems to be a problem with PowerShell but not with cmd, one could use the following in the gitlab-ci.yml:
- cmd '/C' 'conda activate myenv && python myunittests.py'

Executing subprocess.Popen inside a Python script in a PyDev context is different from running in a terminal

I'm executing this code:
p = subprocess.Popen(['/path/to/my/script.sh','--flag'] , stdin=subprocess.PIPE)
p.communicate(input='Y')
p.wait()
It works when executing it from the shell using "python scriptName.py",
BUT when executing it using PyDev in Eclipse, it fails with the reason:
/path/to/my/script.sh: line 111: service: command not found
This bash script "script.sh" contains the following command which causes the error:
service mysqld restart
So "service" is not recognized when running the .sh script from the context of PyDev.
I guess it has to do with some ENV VAR configurations, couldn't find how to do it.
BTW - Using "shell=True" when calling subprocess.Popen didn't solve it.
service is usually located in /usr/sbin, and this directory often isn't on the PATH. As it contains administrative binaries and scripts which aren't designed to be run by everyone (only by admins/root), the sbin directories aren't always added to the PATH by default.
To check this, try printing PATH in your script (or add an env command).
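For instance, at the top of script.sh (or just before the service call):
echo "PATH=$PATH"  # or simply run: env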
To fix it, you could either:
set the PATH in your Python script using os.environ
pass an env dict containing the correct PATH to Popen (see the sketch below)
set the PATH in your shell script
use the full path (/usr/sbin/service) in your shell script
set the PATH in Eclipse
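A minimal sketch of the env-dict option, reusing the Popen call from the question (/usr/sbin and /sbin are the assumed missing directories):
import os
import subprocess

# Copy the current environment and make sure the sbin directories are on PATH,
# so that `service` can be resolved inside script.sh
env = os.environ.copy()
env['PATH'] = os.pathsep.join([env.get('PATH', ''), '/usr/sbin', '/sbin'])

p = subprocess.Popen(['/path/to/my/script.sh', '--flag'],
                     stdin=subprocess.PIPE, env=env)
p.communicate(input=b'Y')  # bytes on Python 3; 'Y' answers the script's prompt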