I have installed Windows Subsystem for Linux to run Ubuntu 16.04 on my Windows 10 Home machine.
I have extracted all the directories required to run KSQL on this platform.
Now, when I try to run any command after navigating to the bin folder, it throws a "command not found" error. I tried adding the directory to PATH as well, but it is not working.
Please suggest.
There's a typo in your command:
export PATH=$PATH:/opt/kafka/confleuent-5.4.0/bin
Instead of confluent-5.4.0 you have misspelled it as confleuent-5.4.0.
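For reference, the corrected export would look like this, assuming the distribution really is extracted under /opt/kafka/confluent-5.4.0 (add the line to ~/.bashrc if you want it to persist across shells):
# Append the Confluent bin directory to the current shell's PATH
export PATH=$PATH:/opt/kafka/confluent-5.4.0/bin
# Verify that the KSQL binary now resolves
which ksql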
The easiest way to install the Confluent CLI is by using the scripted installation:
Install the Confluent CLI using this script. This command creates a bin directory in your designated location (<path-to-directory>/bin). The location must be in your PATH (e.g. /usr/local/bin). On Microsoft Windows, an appropriate Linux environment may need to be installed in order to have the curl and sh commands available, such as the Windows Subsystem for Linux.
curl -L https://cnfl.io/cli | sh -s -- -b /<path-to-directory>/bin
Finally, if you run confluent start you can get all the services up and running, including KSQL (assuming you have the correct configuration files).
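Putting that flow together as a rough sketch (the platform path below is an assumption, and depending on the CLI version the local stack is started with confluent start or confluent local start):
# Install the Confluent CLI into /usr/local/bin, which is normally already on PATH
curl -L https://cnfl.io/cli | sh -s -- -b /usr/local/bin
# Tell the CLI where the extracted platform lives (example path)
export CONFLUENT_HOME=/opt/kafka/confluent-5.4.0
# Start all local services, including KSQL (on newer CLIs: confluent local start)
confluent start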
You could just use the relative path:
cd bin
./kafka-topics.sh
Also, all those commands work in CMD / PowerShell as well
If you want to run KSQL, I'd suggest just using Docker
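For example, a rough Docker sketch (image tags and the broker address are assumptions, and you still need a reachable Kafka broker):
# Put both containers on one network so the CLI can reach the server by name
docker network create ksql-net
# KSQL server pointed at an existing Kafka broker (address is an example)
docker run -d --network ksql-net --name ksql-server \
  -e KSQL_BOOTSTRAP_SERVERS=broker:9092 \
  -e KSQL_LISTENERS=http://0.0.0.0:8088 \
  confluentinc/cp-ksql-server:5.4.0
# Attach the KSQL CLI to that server
docker run -it --network ksql-net confluentinc/cp-ksql-cli:5.4.0 http://ksql-server:8088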
On Windows 10 Pro and 11 Pro I have installed and activated Ubuntu-20.04 and Debian. Using the documentation from MS on moving those distros to a secondary drive, everything seemed to work fine until the wsl --import command, which outputs "Access is denied". I've tried Windows Terminal, PowerShell, and even WebStorm; I get the same output.
I am running with elevated privileges, but to no avail. The export works fine; I use a different name for the exported file to ensure I can restore it under its original name. The wsl.conf editing looks good, and everything lines up... until the import command.
I am at a loss. I've exhausted all research. Can anyone help me resolve this so I can run these from my F: drive?
Cheers,
RN
You just have to put a filename at the end, like:
wsl --export Ubuntu C:\Users\Desktop\OneDrive\Documents\ubuntu.tar
Suppose you want to import an exported distribution "ubuntu.tar".
Try to cd to the location of the .tar file before executing the wsl --import command in PowerShell (running as a standard user), for example:
PS X:\> cd D:\
PS D:\> wsl --import Ubuntu_copy .\Ubuntu_copy ubuntu.tar
Executing the wsl --import command with an absolute path didn't work for me, but the above mentioned method did.
Just in case this is an ongoing issue for anyone, you need to run wsl --import not just from an Administrator account, but you need to run Powershell/cmd as Administrator, for example by right-clicking a pwsh.exe icon/shortcut and clicking "Run as administrator". If you're running as a standard user and "Run as administrator", the import will install the distro for the admin user you've chosen to run as.
The full syntax is:
wsl --import <Distro name> <Install folder> <Source .tar file>
The import syntax is as follows; be careful about the order of the install-directory and imported-tar-file arguments:
--import <Distro> <InstallLocation> <FileName> [Options]
Imports the specified tar file as a new distribution.
The filename can be - for standard input.
Options:
--version <Version>
Specifies the version to use for the new distribution.
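For example, restoring onto the F: drive mentioned in the question (the distro name and paths are placeholders, and --version 2 is optional):
wsl --import Ubuntu_copy F:\WSL\Ubuntu_copy F:\WSL\ubuntu.tar --version 2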
We are using some custom modules in our Perl automation framework, which runs through a Jenkins pipeline. Recently we got a "package not found" error for all custom modules while executing test cases on AIX servers, as the latest Perl version is installed there. So we tried to add PERL5LIB to the path as described in this document:
https://perlmaven.com/how-to-change-inc-to-find-perl-modules-in-non-standard-locations
We added "export PERL5LIB=/home/foobar/code" in /etc/profile of the AIX server and script getting executed without any issue when running from local AIX machine.
Issue:
But we have a Jenkins pipeline that executes the scripts on the AIX server using SSH. When we SSH to the AIX server in the pipeline script, the variables we set in /etc/profile are not loaded and we get the "package not found" error.
Question: How can I load the profile on the AIX server while running from the pipeline? Or is there another way to handle this? Before executing the script I want to export PERL5LIB on the remote AIX server through the pipeline (only once), and then I should not get the "package not found" error.
Solutions I have tried:
Loading /etc/profile: ssh <AIX server> '. /etc/profile' (using the dot command, since source does not work in AIX's default shell)
Adding the line "export PERL5LIB=/home/foobar/code" to .ssh/environment on the AIX server and setting PermitUserEnvironment yes
Appreciate any help on this.
Assign values to variables the usual way:
ssh user@host 'export PERL5LIB=/somepath; echo $PERL5LIB'
user@host's password:
/somepath
or
ssh user@host '. /etc/profile.local; echo $PERL5LIB'
user@host's password:
/somepath/from/profile
Edit:
If you have to execute multiple commands, create a script and upload it to the target computer, for example:
SCRIPTNAME=/tmp/$$.$RANDOM.script
scp myscript.sh user@host:"$SCRIPTNAME"
ssh user@host "$SCRIPTNAME"
This was solved with the changes below.
Step 1: Edit ~/.ssh/environment. Add the variable PERL5LIB="/path of the module/"
Step 2: Edit /etc/ssh/sshd_config. Change the variable PermitUserEnvironment from no to yes (uncomment it if it is commented). This allows SSH to pass user environment variables.
Step 3: Restart the SSHD service. (This is important: I had tried steps 1 and 2 before, but had not restarted the service, so the solution was not working.)
We can create a script and run it before executing the automation tests from the pipeline.
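A minimal sketch of the three steps, assuming the module path from the question and an sshd managed by AIX's SRC (the restart command may differ on your system):
# Step 1: ~/.ssh/environment on the AIX server
PERL5LIB=/home/foobar/code
# Step 2: /etc/ssh/sshd_config on the AIX server
PermitUserEnvironment yes
# Step 3: restart sshd so the config change takes effect (AIX SRC commands)
stopsrc -s sshd && startsrc -s sshd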
I have an application process that runs in IBM UrbanCode. The process uses a PowerShell script that uses the CloudFoundry CLI. Our application process runs on an agent on which the CloudFoundry CLI is installed and available on the PATH. Strangely enough, the PowerShell plugin doesn't know that the CloudFoundry CLI is on the path. Echoing out the path via the plugin itself confirms this.
Currently, our application process looks like:
Copy CloudFoundry CLI into UCD's workspace at the start of the job.
Execute various CloudFoundry commands via the following syntax: .\cf login -u foo -p bar -o baz -s bart
I want to avoid copying the client into the workspace and having to use the .\cf syntax, in order to make the scripts more portable.
How can I get the Powershell plugin to respect the Agent's path?
Sounds like the user that your PowerShell agent is running under does not have CloudFoundry in its path. Options are:
1. Ensure the PATH variable is set system-wide.
2. Instead of copying the CloudFoundry CLI, you could manually add the path to CloudFoundry before you run the script:
$env:Path += ";<PATH TO CLOUDFOUNDRY>"
Note: this will only persist for the current session.
To test that you have CloudFoundry in the path you can use:
Get-Command cf
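Putting it together, a hedged sketch for the agent session (the install directory is an assumption; use wherever cf.exe actually lives on the agent):
# Append the CLI directory to PATH for this PowerShell session only
$env:Path += ";C:\Program Files\CloudFoundry"
# Confirm cf now resolves without the .\ prefix, then call it normally
Get-Command cf
cf login -u foo -p bar -o baz -s bart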
I've installed Cygwin on my Windows server, and now I'm trying to run it as a service so it starts on every system startup. This is the command I tried, but I get an error:
C:\cygwin64\bin>C:\cygwin64\bin\cygrunsrv.exe -I CYGWIN_SSHD -path C:\cygwin64\bin\cygstart.exe
/usr/bin/cygrunsrv: Trailing commandline arguments not allowed
Try `/usr/bin/cygrunsrv --help' for more information.
Can anyone tell me what am I doing wrong?
I figured it out. To install Cygwin's sshd as a service I needed to run ssh-host-config and provide all the needed definitions; this installs it as a service called "CYGWIN sshd".
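A rough sketch of that flow, run from an elevated Cygwin terminal (the service name created by the script varies by Cygwin version, e.g. sshd or cygsshd):
# Walk through the sshd service setup, accepting the defaults
ssh-host-config -y
# Start the newly created service (display name "CYGWIN sshd")
net start sshd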
I have installed freeSSHd on one of my Windows servers, and now I connect to the system through PuTTY. That works fine.
My issue is when I run the following from the command line:
PuTTY.lnk -ssh -2 -P 22 username@XXX -pw pswd -m command.txt
The commands given in command.txt are not executed; it just opens the PuTTY console and then closes.
When running from Jenkins I have the same issue.
I am not sure if you are connecting to Windows from a Linux machine, or to Linux from a Windows machine. Or Windows to Windows?
PuTTY.lnk is not an executable. If you try to run it, it should produce the error 'PuTTY.lnk' is not recognized as an internal or external command. I am assuming you are running the command from a Windows machine, since you are referring to a Windows shortcut file (.lnk).
You need to use PuTTY.exe plus the rest of your command line. Please note that unless it is in your PATH settings, you would have to provide a full path to the .exe, for example C:\LocationOfPuttyInstall\putty.exe -ssh -2 -P 22 username@XXX -pw pswd -m command.txt. To prevent any other problems, you should also specify a correct full path to the command.txt file.
If you are not sure where your Putty is installed, on the Windows machine, do the following:
Right click your PuTTY shortcut (the PuTTY.lnk file)
Look under "Target"
That would list your full path to PuTTY.exe executable.
This should resolve your problem.
p.s.
Usually Putty is used to connect from a Windows machine to a Linux machine. From your question, it almost looks like you are trying to connect from a Windows machine to another Windows machine.
You should use the PsExec Windows tool for such purposes:
http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx
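A minimal PsExec sketch, assuming PsExec.exe is on the PATH of the calling machine and the account has admin rights on the target (the hostname, credentials, and command are placeholders):
:: Run a single command on the remote Windows server over SMB instead of SSH
psexec \\XXX -u username -p pswd cmd /c "echo hello from the remote host"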