Do I need to install the Rundeck software before using rundeck-cli?

This is my first time using Rundeck. Do I need to install the Rundeck software (GUI version) on Linux before I use the CLI version?

Technically no; you can install Rundeck CLI without installing Rundeck itself, but Rundeck CLI must interact with a Rundeck instance. For example, you can install Rundeck CLI on your laptop, point it at a remote Rundeck server (running in Docker, on Windows, on Linux, etc.), and use Rundeck CLI from your machine.
Rundeck CLI is a client that abstracts the Rundeck API.
You can configure Rundeck CLI against your Rundeck instance by following this.
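As a minimal sketch of that configuration (the server URL and token below are placeholders; in practice you generate an API token from your Rundeck GUI under your user profile):
export RD_URL=https://rundeck.example.com:4440
export RD_TOKEN=<your-api-token>
rd system info          # verify the CLI can reach the server
rd jobs list -p MyProject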

Related

Set up bash inside self-hosted Windows agents in Azure DevOps

Microsoft's own documentation provides the links to the images used for various operating systems, on top of which Microsoft-hosted agents get created.
For Windows Server 2019, the link shows bash as one of the tools included, and it also mentions WSL1 (Windows Subsystem for Linux v1) as installed. And it works just as expected, with Bash tasks running just fine inside Azure DevOps pipelines.
We're currently in the process of setting up our own self-hosted Windows agents, and we're looking for this capability as well. But to my knowledge, having Linux tools such as bash working on Windows requires 1) WSL installed and 2) a Linux distribution installed for a specific user. The procedure for deploying on Windows Server is here.
WSL doesn't currently have multiple-user support (GitHub issue here), and trying to run Linux tools as LOCAL SYSTEM presents challenges of its own. So in this context, how does the image used by the Microsoft-hosted Azure DevOps agents allow them to seamlessly run bash?
I've heard about Cygwin and know that it can provide similar functionality, but for now I'm trying to get bash configured similarly to how it's done on Microsoft's own hosted agents.
As of this time, however, I don't think running bash on an Azure DevOps self-hosted Windows agent is supported.
The Bash task runs on the agent as the user "NT AUTHORITY\NETWORK SERVICE". However, we cannot install a Linux distribution for this user; it will report that the user hasn't logged in.
For Microsoft-hosted agents, the virtual machines presumably have a specific user from whom bash starts, rather than the default NT AUTHORITY\NETWORK SERVICE.

Using an Azure DevOps Python artifact repo on a Microsoft Machine Learning Server

I have a SQL Server 2017 instance with Machine Learning Services installed in-database. I have a custom module built as a wheel package and published to an Azure DevOps Python artifact repo, which I can install from other machines using the artifacts-keyring module to authenticate.
I want to set up my Machine Learning Server so I can pip install from this Azure DevOps package repo. However, after I install the keyring and artifacts-keyring modules per the documentation and try to pip install with the -i option to specify the URL of my Azure DevOps package repo, I get prompted to authenticate with my username/password (which does not work). This is different from the behavior on my development machines, where the keyring modules authenticate me automatically.
Looking at the GitHub page for the artifacts-keyring module, it looks like I need pip 19.2 or greater, and the Machine Learning Server has pip 9.0.1. Running .\pip.exe install --upgrade pip from the PYTHON_SERVICES directory gives me an error:
The system cannot move the file to a different disk drive: 'e:\\program files\\microsoft sql server\\mssql14.mssqlserver\\python_services\\scripts\\pip.exe' -> 'C:\\Users\\username\\AppData\\Local\\Temp\\7\\pip-qxx3khcz-uninstall\\program files\\microsoft sql server\\mssql14.mssqlserver\\python_services\\scripts\\pip.exe'
Going further down the rabbit hole, it looks like I might need to unbind/bind the updated binaries. Has anyone configured their MS Machine Learning Server to use an Azure DevOps Python artifact repo as a pip index? Should I approach deploying my modules a different way?
What I did, which worked for me:
1. Stop all of the SQL Server services. I think I would only have needed to stop the Jumpstart service, though.
2. Run the basic get-pip.py script from the PYTHON_SERVICES directory that the ML Server is using. This installed the latest version of pip, as verified with .\Scripts\pip.exe -V.
3. Run .\Scripts\pip.exe install keyring artifacts-keyring.
4. Install the module from the index/repo: .\Scripts\pip.exe install -i https://myIndexURL/ MyModule.
5. Bring all of the SQL Server services back up and confirm the module can be used.
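Put together, the steps above look roughly like this from an elevated prompt (the install path assumes a default MSSQL14.MSSQLSERVER in-database install on the E: drive as in the error message above, and get-pip.py is downloaded from https://bootstrap.pypa.io/get-pip.py; adjust to your instance):
cd "E:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES"
.\python.exe get-pip.py                    # reinstall pip in place
.\Scripts\pip.exe -V                       # confirm pip is now >= 19.2
.\Scripts\pip.exe install keyring artifacts-keyring
.\Scripts\pip.exe install -i https://myIndexURL/ MyModule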

Confluent CLI - Windows environment

I am exploring whether we can run Confluent on Windows. As per the following articles, it seems Windows is not supported:
https://docs.confluent.io/current/installation/versions-interoperability.html#operating-systems
Confluent Platform in Windows
However, when I look at the Confluent CLI, Windows seems to be supported:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
But again, there is a phrase here about Windows not being supported:
On non-Windows platforms, the Confluent CLI offers confluent local commands (designed to operate on a local install of Confluent Platform) which require Java, and JDK version 1.8 or 1.11 is recommended. If you have multiple versions of Java installed, set JAVA_HOME to the version you want Confluent Platform to use.
So, the questions are:
Is Windows supported as of the latest release? (I suspect it is not.)
Which CLI is supported for Windows, and what can it be used for?
It also seems Windows is NOT supported from a local-development perspective as well? (I mean, is it possible to issue "confluent local" commands?)
PS: Please give inputs without referring to virtualized environments such as Docker.
Yes, you are right, Windows is not supported.
The CLI you get for Windows is only for managing and retrieving metadata from a remote Confluent Platform. First, you will have to log in to Confluent by issuing the command confluent.exe login --url <url>.
More info at confluent-login.
The following are the commands you get with the Confluent Windows distribution:
Available Commands:
audit-log Manage audit log configuration.
cluster Retrieve metadata about Confluent Platform clusters.
completion Print shell completion code.
connect Manage Connect.
help Help about any command
iam Manage RBAC, ACL and IAM permissions.
kafka Manage Apache Kafka.
ksql Manage ksqlDB applications.
login Log in to Confluent Platform (required for RBAC).
logout Log out of Confluent Platform.
schema-registry Manage Schema Registry.
secret Manage secrets for Confluent Platform.
update Update the Confluent CLI.
version Print the Confluent CLI version.
And Windows is also not supported for local development: you can't issue commands like confluent local.
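As a rough illustration of that remote-management workflow (the URLs are placeholders, and the exact subcommands and flags may differ between CLI versions):
confluent.exe login --url https://mds.example.com:8090
confluent.exe cluster describe --url https://kafka.example.com:8090
confluent.exe logout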
I'm facing the same challenge. I have Confluent Platform on a Docker/Windows 10 machine and want to access the CLI using WSL, as stated here:
https://docs.confluent.io/current/cli/installing.html
The issue is that when running commands in the Ubuntu terminal I get "unknown command" when triggering confluent.
The Confluent CLI is a facade over a local installation of the Confluent variants of Kafka, where the local command lets you manage your local installation.
Look here: Confluent CLI local documentation.
It assumes that you have the product installed locally. I installed it by following this page, Confluent Ubuntu local installation, and got all components working, well, almost.
So it can work on Windows 10, but through WSL only. There are some explanations of how to install Kafka on Windows, but the whole idea behind Confluent is to use Confluent Cloud for production environments.
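For example, after installing Confluent Platform inside the WSL Ubuntu environment, the local workflow looks roughly like this (the install path and version are assumptions, and the local subcommands vary by CLI version):
export CONFLUENT_HOME=/opt/confluent-5.5.0
export PATH=$PATH:$CONFLUENT_HOME/bin
confluent local start      # starts ZooKeeper, Kafka, Schema Registry, etc.
confluent local status
confluent local stop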

Is there an Ansible Playbook for provisioning an OS using ESX/i?

Is there a way to provision an OS (CentOS/Red Hat) on a licensed VMware vSphere server (with ESX/i) using Ansible?
The vsphere_guest module will allow you to provision a guest VM through Ansible. If you want to do everything automatically via Ansible, then you probably want to have the guest launch a kickstart to automate the install of Linux onto the VM; once that's complete, you can use Ansible to perform any customizations to the environment that you desire.
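As a rough sketch, here is an ad-hoc invocation of vsphere_guest cloning a VM from a prepared, kickstarted template (the hostnames, credentials, and template name are placeholders, and the module's parameters may differ between Ansible versions):
ansible localhost -m vsphere_guest \
  -a "vcenter_hostname=vcenter.example.com username=admin password=secret guest=new-centos-vm from_template=yes template_src=centos7-template"
The same arguments can go into a playbook task, after which a normal Ansible play against the new guest handles the post-install customization.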
The official VMware Python bindings are at https://github.com/vmware/pyvmomi; Ansible uses pysphere. You could also develop an Ansible module yourself:
Ansible modules are reusable units of magic that can be used by the Ansible API, or by the ansible or ansible-playbook programs. The directory "./library", alongside your top level playbooks, is also automatically added as a search directory.
http://docs.ansible.com/ansible/developing_modules.html#tutorial
Or take an unofficial Ansible module built on pyvmomi:
- https://github.com/ViaSat/ansible-vsphere
But as I see here: https://github.com/sijis/pyvmomi-examples/blob/master/create-vm.py, you need vCenter.
There are Vagrant plugins, but I don't think that you can do provisioning on ESXi without vCenter.

Chef running from the outside

Is there a way to provision a server with Chef without having Ruby installed on the server that is going to be provisioned?
Basically, in the same way that Capistrano is used to provision a server?
You can use the Chef Omnibus Installer to install a native package for your OS that bundles the Chef client and the required Ruby runtime into one, but Ruby always has to be installed on the node.
Currently, Chef is not able to remotely provision a server via SSH or the like.
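To illustrate the omnibus approach, installing the bundled package on the target node looks roughly like this (the install-script URL is Chef's public omnibus bootstrap script, which may differ for older Chef releases):
curl -L https://omnitruck.chef.io/install.sh | sudo bash
chef-client --version      # the bundled Ruby ships under /opt/chef/embedded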