lsyncd: Can't run sync with rsync chown option

What's incorrect about this syntax? It won't let me run the sync:
settings = {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
},
sync {
    default.rsync,
    source = "/home/path1",
    target = "/home/path2",
    delay = 15,
    rsync = {
        chown = webdev:webdev
    }
}

This works for me:
rsync = {
    owner = true,
    group = true,
    chown = "webdev:webdev"
}
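Putting it together, a complete config combining this with the original sync block might look like the sketch below. It assumes a reasonably recent lsyncd where settings is invoked as a function call (no = and no trailing comma) and the chown value is a quoted string; the file path in the comment is only an example.

-- example path: /etc/lsyncd/lsyncd.conf.lua -- adjust to wherever your config lives
settings {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
    default.rsync,
    source = "/home/path1",
    target = "/home/path2",
    delay = 15,
    rsync = {
        -- owner/group enable rsync's -o/-g; chown passes --chown=user:group
        owner = true,
        group = true,
        chown = "webdev:webdev"
    }
}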

Related

Azure Terraform function app deployment issue

I hope somebody can help me with this issue, because I don't understand what I am doing wrong.
I am trying to build an Azure function app and deploy a zip package (timer trigger) to it.
I have set up this code:
resource "azurerm_resource_group" "function-rg" {
location = "westeurope"
name = "resource-group"
}
data "azurerm_storage_account_sas" "sas" {
connection_string = azurerm_storage_account.sthriprdeurcsvtoscim.primary_connection_string
https_only = true
start = "2021-01-01"
expiry = "2023-12-31"
resource_types {
object = true
container = false
service = false
}
services {
blob = true
queue = false
table = false
file = false
}
permissions {
read = true
write = false
delete = false
list = false
add = false
create = false
update = false
process = false
}
}
resource "azurerm_app_service_plan" "ASP-rg-hri-prd-scim" {
location = azurerm_resource_group.function-rg.location
name = "ASP-rghriprdeurcsvtoscim"
resource_group_name = azurerm_resource_group.function-rg.name
kind = "functionapp"
maximum_elastic_worker_count = 1
per_site_scaling = false
reserved = false
sku {
capacity = 0
size = "Y1"
tier = "Dynamic"
}
}
resource "azurerm_storage_container" "deployments" {
name = "function-releases"
storage_account_name = azurerm_storage_account.sthriprdeurcsvtoscim.name
container_access_type = "private"
}
resource "azurerm_storage_blob" "appcode" {
name = "functionapp.zip"
storage_account_name = azurerm_storage_account.sthriprdeurcsvtoscim.name
storage_container_name = azurerm_storage_container.deployments.name
type = "Block"
source = "./functionapp.zip"
}
resource "azurerm_function_app" "func-hri-prd-eur-csv-to-scim" {
storage_account_name = azurerm_storage_account.sthriprdeurcsvtoscim.name
storage_account_access_key = azurerm_storage_account.sthriprdeurcsvtoscim.primary_access_key
app_service_plan_id = azurerm_app_service_plan.ASP-rg-hri-prd-scim.id
location = azurerm_resource_group.function-rg.location
name = "func-hri-prd-csv-to-scim"
resource_group_name = azurerm_resource_group.function-rg.name
app_settings = {
"WEBSITE_RUN_FROM_PACKAGE" = "https://${azurerm_storage_account.sthriprdeurcsvtoscim.name}.blob.core.windows.net/${azurerm_storage_container.deployments.name}/${azurerm_storage_blob.appcode.name}${data.azurerm_storage_account_sas.sas.sas}"
"FUNCTIONS_EXTENSION_VERSION" = "~3"
"FUNCTIONS_WORKER_RUNTIME" = "dotnet"
}
enabled = true
identity {
type = "SystemAssigned"
}
version = "~3"
enable_builtin_logging = false
}
resource "azurerm_storage_account" "sthriprdeurcsvtoscim" {
account_kind = "Storage"
account_replication_type = "LRS"
account_tier = "Standard"
allow_blob_public_access = false
enable_https_traffic_only = true
is_hns_enabled = false
location = azurerm_resource_group.function-rg.location
name = "sthriprdeurcsvtoscim"
resource_group_name = azurerm_resource_group.function-rg.name
}
It goes without saying that terraform apply works without any error. The function app's configuration is correct and points to the right storage account, and the storage account has a container with the zip file containing my Azure Function code.
But when I go to the function app -> Functions, I don't see any function there.
Can somebody please help me understand what I am doing wrong here?
The function app is a .NET 3 function.
When you create a function app, it isn't set up for Functions + Terraform. It's set up for a Visual Studio Code + Functions deployment. We need to adjust both the package.json so that it will produce the ZIP file for us, and the .gitignore so that it ignores the Terraform build files. I use a bunch of helper NPM packages:
azure-functions-core-tools for the func command.
@ffflorian/jszip-cli to ZIP my files up.
mkdirp for creating directories.
npm-run-all and particularly the run-s command for executing things in order.
rimraf for deleting things.
Below is how the package.json looks:
{
  "name": "backend",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "func": "func",
    "clean": "rimraf build",
    "build:compile": "tsc",
    "build:prune": "npm prune --production",
    "prebuild:zip": "mkdirp --mode=0700 build",
    "build:zip": "jszip-cli",
    "build": "run-s clean build:compile build:zip",
    "predeploy": "npm run build",
    "deploy": "terraform apply"
  },
  "dependencies": {
  },
  "devDependencies": {
    "azure-functions-core-tools": "^2.7.1724",
    "@azure/functions": "^1.0.3",
    "@ffflorian/jszip-cli": "^3.0.2",
    "mkdirp": "^0.5.1",
    "npm-run-all": "^4.1.5",
    "rimraf": "^3.0.0",
    "typescript": "^3.3.3"
  }
}
npm run build will build the ZIP file.
npm run deploy will build the ZIP file and deploy it to Azure.
For complete information check Azure Function app with Terraform.
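In day-to-day use, the flow from the scripts above boils down to two commands (a sketch; exact paths and any Terraform variables are whatever your project already uses):

# produce the ZIP: clean, compile, then zip via jszip-cli
npm run build
# rebuild via the predeploy hook, then run terraform apply
npm run deploy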

run my test in docker mongo instance using jenkins pipeline

I would like to run my tests against a Docker MongoDB instance using a Jenkins pipeline. I have got it kind of working. My problem is that the tests are running within the Mongo container; I just want it to load up a container and have my tests connect to that Mongo container. At the moment it downloads Gradle within the container and takes about 5 minutes to run. Hope that makes sense. Here is my Jenkinsfile:
#!/usr/bin/env groovy
pipeline {
  environment {
    SPRING_PROFILES_ACTIVE = "jenkins"
  }
  agent {
    node {
      label "jdk8"
    }
  }
  parameters {
    choice(choices: 'None\nBuild\nMinor\nMajor', description: '', name: 'RELEASE_TYPE')
    string(defaultValue: "refs/heads/master:refs/remotes/origin/master", description: 'gerrit refspec e.g. refs/changes/45/12345/1', name: 'GERRIT_REFSPEC')
    choice(choices: 'master\nFETCH_HEAD', description: 'gerrit branch', name: 'GERRIT_BRANCH')
  }
  stages {
    stage("Test") {
      stages {
        stage("Initialise") {
          steps {
            println "Running on ${NODE_NAME}, release type: ${params.RELEASE_TYPE}"
            println "gerrit refspec: ${params.GERRIT_REFSPEC}, branch: ${params.GERRIT_BRANCH}, event type: ${params.GERRIT_EVENT_TYPE}"
            checkout scm
            sh 'git log -n 1'
          }
        }
        stage("Verify") {
          agent {
            dockerfile {
              filename 'backend/Dockerfile'
              args '-p 27017:27017'
              label 'docker-pipeline'
              dir './maintenance-notifications'
            }
          }
          steps {
            sh './gradlew :maintenance-notifications:backend:clean'
            sh './gradlew :maintenance-notifications:backend:check :maintenance-notifications:backend:test'
          }
          post {
            always {
              junit 'maintenance-notifications/backend/build/test-results/**/*.xml'
            }
          }
        }
      }
    }
    stage("Release") {
      when {
        expression {
          return params.RELEASE_TYPE != '' && params.RELEASE_TYPE != 'None';
        }
      }
      steps {
        script {
          def gradleProps = readProperties file: "gradle.properties"
          def isCurrentSnapshot = gradleProps.version.endsWith("-SNAPSHOT")
          def newVersion = gradleProps.version.replace("-SNAPSHOT", "")
          def cleanVersion = newVersion.tokenize(".").collect{it.toInteger()}
          if (params.RELEASE_TYPE == 'Build') {
            newVersion = "${cleanVersion[0]}.${cleanVersion[1]}.${isCurrentSnapshot ? cleanVersion[2] : cleanVersion[2] + 1}"
          } else if (params.RELEASE_TYPE == 'Minor') {
            newVersion = "${cleanVersion[0]}.${cleanVersion[1] + 1}.0"
          } else if (params.RELEASE_TYPE == 'Major') {
            newVersion = "${cleanVersion[0] + 1}.0.0"
          }
          def newVersionArray = newVersion.tokenize(".").collect{it.toInteger()}
          def newSnapshot = "${newVersionArray[0]}.${newVersionArray[1]}.${newVersionArray[2] + 1}-SNAPSHOT"
          println "release version: ${newVersion}, snapshot version: ${newSnapshot}"
          sh "./gradlew :maintenance-notifications:backend:release -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=${newVersion} -Prelease.newVersion=${newSnapshot}"
        }
      }
    }
  }
}
and here is my Dockerfile
FROM centos:centos7
ENV container=docker
RUN mkdir -p /usr/java; curl http://configuration/yum/thecloud/artifacts/java/jdk-8u151-linux-x64.tar.gz|tar zxC /usr/java && ln -s /usr/java/jdk1.8.0_151/bin/j* /usr/bin
RUN mkdir -p /usr/mongodb; curl http://configuration/yum/thecloud/artifacts/mongodb/mongodb-linux-x86_64-3.4.10.tgz|tar zxC /usr/mongodb && ln -s /usr/mongodb/mongodb-linux-x86_64-3.4.10/bin/* /usr/bin
ENV JAVA_HOME /usr/java/jdk1.8.0_151/
ENV SPRING_PROFILES_ACTIVE jenkins
RUN yum -y install git.x86_64 && yum clean all
# Set up directory requirements
RUN mkdir -p /data/db /var/log/mongodb /var/run/mongodb
VOLUME ["/data/db", "/var/log/mongodb"]
# Expose port 27017 from the container to the host
EXPOSE 27017
CMD ["--port", "27017", "--pidfilepath", "/var/run/mongodb/mongod.pid"]
# Start mongodb
ENTRYPOINT /usr/bin/mongod
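For what it's worth, one way to keep Gradle on the agent and run only MongoDB in a container is the Docker Pipeline "sidecar" pattern. Below is a rough sketch of what the Verify stage could look like; it assumes the agent has Docker available, the Docker Pipeline plugin is installed, and a stock mongo image is acceptable instead of the custom CentOS image above.

stage("Verify") {
  steps {
    script {
      // Start MongoDB as a throwaway sidecar container; the Gradle build
      // stays on the jdk8 agent and connects to the published port.
      docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
        sh './gradlew :maintenance-notifications:backend:clean'
        sh './gradlew :maintenance-notifications:backend:check :maintenance-notifications:backend:test'
      }
    }
  }
}

The container is removed automatically when the closure exits, and the tests connect to localhost:27017 on the agent rather than running inside the MongoDB image.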

local conf play framework

I have application.conf
{
  name {
    postgres {
      host = ""
      username = ""
      password = ""
    }
  }
}
And I want to add my local.conf
{
  name {
    postgres {
      host = "blabla"
      username = "aa"
      password = "bb"
    }
  }
}
I tried name.postgres.host.override = "", but that doesn't work.
In your application.conf, it will remain the same:
{
  name {
    postgres {
      host = ""
      username = ""
      password = ""
    }
  }
}
And in your local.conf, you should include application.conf like this:
include "application.conf"
{
  name {
    postgres {
      host = "blabla"
      username = "aa"
      password = "bb"
    }
  }
}
When running sbt, you should explicitly tell it to load local.conf like this (otherwise application.conf will be loaded by default):
sbt run -Dconfig.resource=local.conf
With that, local.conf extends application.conf. If a key exists in both files, the value from local.conf is picked.
Now, you would get:
name.postgres.host=blabla
You can also include one conf file in another, as shown with the include "application.conf" line above; the values defined after the include automatically override the included ones.
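Individual keys can also be overridden with JVM system properties, which Typesafe Config gives precedence over values loaded from files. A sketch, following the same command shape as above (the property name mirrors the key path):

sbt run -Dconfig.resource=local.conf -Dname.postgres.host=blabla

This is handy for one-off overrides without editing local.conf.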

how to create a multi-node redshift cluster only for prod using Terraform

I have 2 Redshift clusters, prod and dev, and I am using the same Terraform module for both.
How can I have 2 nodes only for the prod cluster? Please let me know what interpolation syntax I should be using.
variable "node_type" {
default = "dc1.large"
}
resource "aws_redshift_cluster" "****" {
cluster_identifier = "abc-${var.env}"
node_type = "${var.node_type}"
cluster_type = "single-node" ==> multi node
number_of_nodes = 2 ==> only for prod
Use the map type:
variable "node_type" {
default = "dc1.large"
}
variable "env" {
default = "development"
}
variable "redshift_cluster_type" {
type = "map"
default = {
development = "single-node"
production = "multi-node"
}
}
variable "redshift_node" {
type = "map"
default = {
development = "1"
production = "2"
}
}
resource "aws_redshift_cluster" "****" {
cluster_identifier = "abc-${var.env}"
node_type = "${var.node_type}"
cluster_type = "${var.redshift_cluster_type[var.env]}"
number_of_nodes = "${var.redshift_node[var.env]}"
}
Sometimes I am lazy and just do this (note the valid values are "multi-node" and "single-node", with hyphens):
resource "aws_redshift_cluster" "****" {
  cluster_identifier = "abc-${var.env}"
  node_type = "${var.node_type}"
  cluster_type = "${var.env == "production" ? "multi-node" : "single-node"}"
  number_of_nodes = "${var.env == "production" ? 2 : 1}"
}
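Either way, the environment decides the cluster shape at plan/apply time. A usage sketch, assuming env is supplied on the command line rather than from a tfvars file:

# dev: single-node with 1 node
terraform apply -var="env=development"

# prod: multi-node with 2 nodes
terraform apply -var="env=production"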

NixOS Error on Declarative User Create

New to NixOS, and trying out a basic setup, including adding a new user. I'm sure this is a simple fix; I just need to know what setting to put in. Pastebin details are here.
These are the partial contents of /etc/nixos/configuration.nix. I created my NixOS from this stock Vagrant box: https://github.com/zimbatm/nixbox.
{
  ...
  users = {
    extraGroups = [ { name = "vagrant"; } { name = "twashing"; } { name = "vboxsf"; } ];
    extraUsers = [
      {
        description = "Vagrant User";
        name = "vagrant";
        ...
      }
      {
        description = "Main User";
        name = "user1";
        group = "user1";
        extraGroups = [ "users" "vboxsf" "wheel" "networkmanager" ];
        home = "/home/user1";
        createHome = true;
        useDefaultShell = true;
      }
    ];
  };
  ...
}
And these are the errors when calling nixos-rebuild switch to rebuild the environment. user1 doesn't seem to get added properly, and I certainly can't su to it after the command has run. How do I declaratively create users and set their groups?
$ sudo nixos-rebuild switch
building Nix...
building the system configuration...
updating GRUB 2 menu...
stopping the following units: local-fs.target, network-interfaces.target, remote-fs.target
activating the configuration...
setting up /etc...
useradd: group 'networkmanager' does not exist
chpasswd: line 1: user 'user1' does not exist
chpasswd: error detected, changes ignored
id: user1: no such user
id: user1: no such user
/nix/store/1r443r7imrzl4mgc9rw1fmi9nz76j3bx-nixos-14.04.393.6593a98/activate: line 77: test: 0: unary operator expected
chown: missing operand after ‘/home/user1’
Try 'chown --help' for more information.
/nix/store/1r443r7imrzl4mgc9rw1fmi9nz76j3bx-nixos-14.04.393.6593a98/activate: line 78: test: 0: unary operator expected
chgrp: missing operand after ‘/home/user1’
Try 'chgrp --help' for more information.
starting the following units: default.target, getty.target, ip-up.target, local-fs.target, multi-user.target, network-interfaces.target, network.target, paths.target, remote-fs.target, slices.target, sockets.target, swap.target, timers.target
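For what it's worth, the useradd error above suggests the groups user1 refers to (its primary group user1 and the supplementary group networkmanager) don't exist when the user is created, so useradd bails out and the later chpasswd/chown steps fail too. A hedged sketch of one way to declare them next to the groups already listed (whether you really want a hand-declared networkmanager group, or would rather enable NetworkManager itself so its module provides the group, is an assumption left to you):

users = {
  extraGroups = [
    { name = "vagrant"; }
    { name = "twashing"; }
    { name = "vboxsf"; }
    # groups referenced by user1 must exist before useradd runs
    { name = "user1"; }
    { name = "networkmanager"; }
  ];
  ...
};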