NameError: name 'dbutils' is not defined - pyspark

I have a .py file with the following code line, and it lives in git.
dbutils.widgets.text(name='CORPORATION_ID', defaultValue='1234')
I am using mlflow to run it on a remote Databricks job cluster. I have a conda.yml and an MLproject file to pick it up from git and run it on the job cluster, but I am getting the following error.
File "tea/src/cltv_xgb_tea.py", line 40, in <module>
dbutils.widgets.text(name='CORPORATION_ID', defaultValue='1234')
NameError: name 'dbutils' is not defined
Any help/solution is much appreciated.
My current files in git:
conda.yml:
name: cicd-environment
channels:
  - defaults
dependencies:
  - python=3.7
  - pip=19.0.3
  - pip:
    - mlflow==1.7.2
    - DBUtils==1.3
    - ipython==7.14.0
    - databricks-connect==6.5.1
    - invoke==1.4.1
    - awscli==1.18.87
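Note that the DBUtils==1.3 package in this conda.yml is the unrelated database-connection-pooling library from PyPI; it does not provide Databricks' dbutils. dbutils only exists as a predefined global inside Databricks notebooks, so a plain .py file has to construct it first. A minimal sketch, assuming databricks-connect (already listed above) is importable at runtime:

# dbutils is injected automatically only in Databricks notebooks; in a
# standalone .py file it must be built from the active SparkSession.
from pyspark.sql import SparkSession

try:
    from pyspark.dbutils import DBUtils  # shipped with databricks-connect
    spark = SparkSession.builder.getOrCreate()
    dbutils = DBUtils(spark)
except ImportError:
    # Fallback for code running inside a notebook, where dbutils is
    # already present in the IPython user namespace.
    import IPython
    dbutils = IPython.get_ipython().user_ns["dbutils"]

dbutils.widgets.text(name='CORPORATION_ID', defaultValue='1234')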


Saltstack Config job returns "The minion has not yet returned"

I am new to SaltStack and I am trying to run a job using SaltStack Config.
I have a master and a minion (a Windows machine).
The init.sls file is the following:
{% set machineName = salt['pillar.get']('machineName', '') %}

C:\\Windows\\temp\\salt\\scripts:
  file.recurse:
    - user: Administrator
    - group: Domain Admins
    - file_mode: 755
    - source: salt://PROJECTNAME/DNS/scripts
    - makedirs: true

run-multiple-files:
  cmd.run:
    - cwd: C:\\Windows\\temp\\salt\\scripts
    - names:
      - ./dns.bat {{ machineName }}
and the dns.bat file:
Get-DnsServerResourceRecordA -ZoneName corp.local
The main idea is to create a DNS record, but for now I am only trying to run this one command to check things out. I get the following info message when I run the job:
"The minion has not yet returned. Please try again later."
I went to the master, ran salt-run manage.status, and got the following:
salt-run manage.status
/usr/lib/python3.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.23) or chardet (4.0.0) doesn't match a supported version!
RequestsDependencyWarning)
down:
- machine1
- machine2
- machine3
up:
- saltmaster
I tried some commands to restart the machines, but still no success.
Any help would be appreciated! Thanks in advance!
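Since manage.status reports the minions as down, the usual first checks are whether the minion keys are accepted and whether the minions respond at all; these are standard Salt commands (salt-minion is the default Windows service name):

# On the master: list accepted/pending minion keys
salt-key -L

# On the master: basic reachability test for one minion
salt 'machine1' test.ping

# On the Windows minion (elevated PowerShell): restart the minion service
Restart-Service salt-minion

Separately, note that Get-DnsServerResourceRecordA is a PowerShell cmdlet, so once the minion is reachable, the .bat file would need to invoke it via powershell -Command rather than running it directly.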

Where is the syntax error in my GitHub Actions YAML file?

I am implementing CI/CD for my application and want to start the application automatically using pm2, but I am getting a syntax error on line 22.
This is my yml file
This is the error I am getting on github
The syntax problem here is related to how you used the - symbol.
With GitHub Actions, each step inside your job needs at least a run or uses field at the same level as the name field (which is not mandatory); otherwise the GitHub interpreter returns an error.
Here, from line 22, you used something like this:
- name: ...
- run: ...
- run: ...
- run: ...
So there are two problems:
First, the name and run fields aren't at the same YAML level.
Second, your step with the name field doesn't have a run or uses field associated with it (you need at least one of them).
The correct syntax should be:
- name: ...
  run: ...
- run: ...
- run: ...
Reference about workflow syntax
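Applied to this question, a minimal sketch of what the steps could look like; the checkout action and the npm/pm2 commands here are assumptions, since the original workflow file is only shown as an image:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # every step has either a 'uses' or a 'run' at the same level as 'name'
      - name: Check out the repository
        uses: actions/checkout@v2
      - name: Install dependencies
        run: npm ci
      - name: Restart the app with pm2
        run: pm2 restart app --update-env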

How to deploy custom RPMs onto a salt-minion?

I'm working with SaltStack to set up multiple machines, and I wanted to ask how we can deploy RPMs (placed at a custom location on the master) onto the minions. I already know how to install packages using a top.sls file and the name of the package to be installed, but what I'm looking for is how to deploy my custom RPMs onto the minions from the master.
There are two ways to approach this:
Option 1:
Define the list of RPMs in a pillar file:
package_names:
  - custom-rpm1: custom-rpm1-2.6.1-2.el7.x86_64.rpm
  - custom-rpm2: custom-rpm2-release-el7-3.noarch.rpm
  - custom-rpm3: custom-rpm3-latest.noarch.rpm
Then in an SLS file:
install-rpm:
  pkg.installed:
    - sources: {{ pillar['package_names'] }}
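Note that the values under sources can also be salt:// URLs, so RPMs kept at a custom location under the master's file_roots are fetched automatically without a separate copy step; for example (the rpms directory name here is hypothetical):

package_names:
  - custom-rpm1: salt://rpms/custom-rpm1-2.6.1-2.el7.x86_64.rpm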
Option 2:
Copy the directory containing the RPMs (salt://rpms in the example below is relative to file_roots) to the target machine, then use the rpm command with a wildcard to install them:
copy-rpms-dir:
  file.recurse:
    - name: /tmp/rpms
    - source: salt://rpms

install-rpms:
  cmd.run:
    - name: rpm -ivh /tmp/rpms/*.rpm
    - success_retcodes:
      - 2
Installing with the rpm command requires an extra check on return codes, since rpm returns a non-zero code (2) when an RPM is already installed.

Docker Compose parsing error for docker-compose.yml at line 1, column 1

I'm trying to set up Selenoid and I'm having trouble with docker-compose; it throws the exception below:
yaml.parser.ParserError: while parsing a flow mapping
  in "./docker-compose.yml", line 1, column 1
expected ',' or '}', but got '{'
  in "./docker-compose.yml", line 2, column 1
This happens when I run "$ sudo docker-compose up -d". I'm in the terminal in the same folder where docker-compose.yml is present, and its content is as below:
version: '3'
services:
  selenoid:
    network_mode: bridge
    image: aerokube/selenoid
    volumes:
      - "/docker:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/docker/video:/opt/selenoid/video"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=/opt/selenium/video
      - TZ=Europe/Amsterdam
    command: ["-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video"]
    ports:
      - "4444:4444"
I've also tried many online YAML parsers and they find nothing wrong with the file. Any help would be much appreciated. Thanks.
Sorry for being late to answer my own post. There was nothing wrong with the YAML content itself: I should have created/edited the file using a programming editor such as IntelliJ, but I was editing and saving it with EditText. Everything worked fine after re-editing and saving the files with IntelliJ on my Mac.
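For anyone hitting the same symptom: the likely culprit in a case like this is an invisible character (BOM, smart quote, non-breaking space) left behind by a word-processor-style editor. A quick way to check, as a small self-contained Python sketch:

# Flag any byte outside printable ASCII (tabs excepted), which is what
# a non-programming editor typically leaves behind in a config file.
with open("docker-compose.yml", "rb") as f:
    data = f.read()

for lineno, line in enumerate(data.splitlines(), start=1):
    for col, byte in enumerate(line, start=1):
        if byte > 126 or (byte < 32 and byte != 9):
            print(f"line {lineno}, col {col}: suspicious byte 0x{byte:02x}")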

cloud-init execution order doesn't respect /etc/cloud/cloud.cfg?

This is the content of /etc/cloud/cloud.cfg of Ubuntu cloud 16.04 image:
# The top level settings are used as module
# and system configuration.

# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
  - default

# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the above $user (ubuntu)
disable_root: true

# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: false

# Example datasource config
# datasource:
#   Ec2:
#     metadata_urls: [ 'blah.com' ]
#     timeout: 5 # (defaults to 50 seconds)
#     max_wait: 10 # (defaults to 120 seconds)

# The modules that run in the 'init' stage
cloud_init_modules:
  - migrator
  - ubuntu-init-switch
  - seed_random
  - bootcmd
  - write-files
  - growpart
  - resizefs
  - disk_setup
  - mounts
  - set_hostname
  - update_hostname
  - update_etc_hosts
  - ca-certs
  - rsyslog
  - users-groups
  - ssh

# The modules that run in the 'config' stage
cloud_config_modules:
  # Emit the cloud config ready event
  # this can be used by upstart jobs for 'start on cloud-config'.
  - emit_upstart
  - snap_config
  - ssh-import-id
  - locale
  - set-passwords
  - grub-dpkg
  - apt-pipelining
  - apt-configure
  - ntp
  - timezone
  - disable-ec2-metadata
  - runcmd
  - byobu

# The modules that run in the 'final' stage
cloud_final_modules:
  - snappy
  - package-update-upgrade-install
  - fan
  - landscape
  - lxd
  - puppet
  - chef
  - salt-minion
  - mcollective
  - rightscale_userdata
  - scripts-vendor
  - scripts-per-once
  - scripts-per-boot
  - scripts-per-instance
  - scripts-user
  - ssh-authkey-fingerprints
  - keys-to-console
  - phone-home
  - final-message
  - power-state-change

# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
  # This will affect which distro class gets used
  distro: ubuntu
  # Default user name + that default users groups (if added/used)
  default_user:
    name: ubuntu
    lock_passwd: True
    gecos: Ubuntu
    groups: [adm, audio, cdrom, dialout, dip, floppy, lxd, netdev, plugdev, sudo, video]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  package_mirrors:
    - arches: [i386, amd64]
      failsafe:
        primary: http://archive.ubuntu.com/ubuntu
        security: http://security.ubuntu.com/ubuntu
      search:
        primary:
          - http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/
          - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
          - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/
        security: []
    - arches: [armhf, armel, default]
      failsafe:
        primary: http://ports.ubuntu.com/ubuntu-ports
        security: http://ports.ubuntu.com/ubuntu-ports
  ssh_svcname: ssh
As you can see, package-update-upgrade-install is in the final stage, whereas runcmd is in the config stage. According to the cloud-init documentation, modules in the config stage are executed before those in the final stage, so as I understand it, runcmd should be executed before package install.
However, the following code runs without any error:
packages:
  - shorewall
runcmd:
  - echo "printing shorewall version"
  - shorewall version
That means runcmd was actually executed after the package install.
Is there any reason why cloud-init would not respect the execution order defined in /etc/cloud/cloud.cfg?
While investigating how to get cloud-init to run things earlier in the boot process, I saw this too. In my testing, it appeared that runcmd was indeed running in the config stage as you would expect, but all it did was create a shell script from the runcmd data, which it put in /var/lib/cloud/instance/scripts/runcmd. Cloud-init then ran that shell script during the scripts-user module in the final stage. Below are bits from /var/log/cloud-init.log showing this:
"Mar 15 17:12:24 cloud-init[2796]: stages.py[DEBUG]: Running module runcmd (<module 'cloudinit.config.cc_runcmd' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_runcmd.pyc'>) with frequency once-per-instance",
"Mar 15 17:12:24 cloud-init[2796]: util.py[DEBUG]: Writing to /var/lib/cloud/instances/i-xxx/sem/config_runcmd - wb: [644] 20 bytes",
"Mar 15 17:12:24 cloud-init[2796]: helpers.py[DEBUG]: Running config-runcmd using lock (<FileLock using file '/var/lib/cloud/instances/i-xxx/sem/config_runcmd'>)",
"Mar 15 17:12:24 cloud-init[2796]: util.py[DEBUG]: Shellified 1 commands.",
"Mar 15 17:12:24 cloud-init[2796]: util.py[DEBUG]: Writing to /var/lib/cloud/instances/i-xxx/scripts/runcmd - wb: [700] 50 bytes",
...
"Mar 15 17:12:40 cloud-init[2945]: stages.py[DEBUG]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) with frequency once-per-instance",
"Mar 15 17:12:40 cloud-init[2945]: util.py[DEBUG]: Writing to /var/lib/cloud/instances/i-xxx/sem/config_scripts_user - wb: [644] 20 bytes",
"Mar 15 17:12:40 cloud-init[2945]: helpers.py[DEBUG]: Running config-scripts-user using lock (<FileLock using file '/var/lib/cloud/instances/i-xxx/sem/config_scripts_user'>)",
"Mar 15 17:12:40 cloud-init[2945]: util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=True, capture=False)",
Hope this helps...
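A follow-up implied by the above: since runcmd is always deferred to the scripts-user module, commands that genuinely need to run before package install can go in bootcmd instead, which (as listed under cloud_init_modules in the cloud.cfg above) is handled in the init stage. A minimal sketch; note that unlike runcmd, bootcmd runs on every boot rather than once per instance:

#cloud-config
packages:
  - shorewall
bootcmd:
  # handled in the 'init' stage, before cloud_config_modules and long
  # before package-update-upgrade-install in the 'final' stage
  - echo "this runs before packages are installed"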