NixOS Error on Declarative User Create

New to NixOS, and trying out a basic setup, including adding a new user. I'm sure this is a simple fix; I just need to know what setting to use. Pastebin details are here.
These are the partial contents of /etc/nixos/configuration.nix. I created my NixOS from this stock Vagrant box: https://github.com/zimbatm/nixbox.
{
  ...
  users = {
    extraGroups = [ { name = "vagrant"; } { name = "twashing"; } { name = "vboxsf"; } ];
    extraUsers = [
      { description = "Vagrant User";
        name = "vagrant";
        ...
      }
      { description = "Main User";
        name = "user1";
        group = "user1";
        extraGroups = [ "users" "vboxsf" "wheel" "networkmanager" ];
        home = "/home/user1";
        createHome = true;
        useDefaultShell = true;
      }
    ];
  };
  ...
}
And these are the errors when calling nixos-rebuild switch to rebuild the environment. user1 doesn't seem to get added properly, and I certainly can't su to it after the command is run. How do I declaratively create users and set their groups?
$ sudo nixos-rebuild switch
building Nix...
building the system configuration...
updating GRUB 2 menu...
stopping the following units: local-fs.target, network-interfaces.target, remote-fs.target
activating the configuration...
setting up /etc...
useradd: group 'networkmanager' does not exist
chpasswd: line 1: user 'user1' does not exist
chpasswd: error detected, changes ignored
id: user1: no such user
id: user1: no such user
/nix/store/1r443r7imrzl4mgc9rw1fmi9nz76j3bx-nixos-14.04.393.6593a98/activate: line 77: test: 0: unary operator expected
chown: missing operand after ‘/home/user1’
Try 'chown --help' for more information.
/nix/store/1r443r7imrzl4mgc9rw1fmi9nz76j3bx-nixos-14.04.393.6593a98/activate: line 78: test: 0: unary operator expected
chgrp: missing operand after ‘/home/user1’
Try 'chgrp --help' for more information.
starting the following units: default.target, getty.target, ip-up.target, local-fs.target, multi-user.target, network-interfaces.target, network.target, paths.target, remote-fs.target, slices.target, sockets.target, swap.target, timers.target
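The first error in the log, useradd: group 'networkmanager' does not exist, points at the root cause: user1 references groups that are never declared, so useradd bails out and every later step that expects user1 to exist fails in cascade. A likely fix, sketched below under the assumption that no other module defines these groups, is to declare them alongside the existing ones (the user1 entry is needed because of group = "user1" above):

users.extraGroups = [
  { name = "vagrant"; }
  { name = "twashing"; }
  { name = "vboxsf"; }
  # Assumed missing: referenced by user1 but not declared anywhere above
  { name = "networkmanager"; }
  { name = "user1"; }
];

With the group definitions in place, nixos-rebuild switch should be able to run useradd for user1 cleanly.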


Jenkins dynamic choice parameter to read an Ansible host file in GitHub

I have an Ansible host file that is stored in GitHub, and I was wondering if there is a way to list out all the hosts in Jenkins with choice parameters. Right now, every time I update the host file in GitHub, I have to go into each Jenkins job and update the choice parameter manually. Thanks!
I'm assuming your host file has content similar to the following.
[client-app]
client-app-preprod-01.aws-xxxx
client-app-preprod-02.aws
client-app-preprod-03.aws
client-app-preprod-04.aws
[server-app]
server-app-preprod-01.aws
server-app-preprod-02.aws
server-app-preprod-03.aws
server-app-preprod-04.aws
Option 01
You can do something like the one below. Here you first check out the repo and then ask for the user input. I have implemented the function getHostList() to parse the host file and filter out the host entries.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                git 'https://github.com/jglick/simple-maven-project-with-tests.git'
                script {
                    def selectedHost = input message: 'Please select the host', ok: 'Next',
                        parameters: [
                            choice(name: 'PRODUCT', choices: getHostList("client-app", "ansible/host/location"), description: 'Please select the host')
                        ]
                    echo "Host:::: $selectedHost"
                }
            }
        }
    }
}

def getHostList(def appName, def filePath) {
    def hosts = []
    def content = readFile(file: filePath)
    def startCollect = false
    for (def line : content.split('\n')) {
        if (line.contains("[" + appName + "]")) { // Starting point of this app's host entries
            startCollect = true
            continue
        } else if (startCollect) {
            if (!line.allWhitespace && !line.contains('[')) {
                hosts.add(line.trim())
            } else {
                break // Reached the next section; stop collecting
            }
        }
    }
    return hosts
}
Option 02
If you want to do this without checking out the source, and with job parameters instead, you can do something like the one below using the Active Choice Parameter plugin. If your repository is private, you will need to figure out a way to generate an access token to access the raw GitHub link.
properties([
    parameters([
        [$class: 'ChoiceParameter',
            choiceType: 'PT_SINGLE_SELECT',
            description: 'Select the Host',
            name: 'Host',
            script: [
                $class: 'GroovyScript',
                fallbackScript: [
                    classpath: [],
                    sandbox: false,
                    script: 'return [\'Could not get Host\']'
                ],
                script: [
                    classpath: [],
                    sandbox: false,
                    script: '''
                        def appName = "client-app"
                        def content = new URL("https://raw.githubusercontent.com/xxx/sample/main/testdir/hosts").getText()
                        def hosts = []
                        def startCollect = false
                        for (def line : content.split("\\n")) {
                            if (line.contains("[" + appName + "]")) { // Starting point of this app's host entries
                                startCollect = true
                                continue
                            } else if (startCollect) {
                                if (!line.allWhitespace && !line.contains("[")) {
                                    hosts.add(line.trim())
                                } else {
                                    break
                                }
                            }
                        }
                        return hosts
                    '''
                ]
            ]
        ]
    ])
])
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    echo "Host:::: ${params.Host}"
                }
            }
        }
    }
}
Update
When you are calling a private repo, you need to send a Basic Auth header with the access token, so use the following Groovy script instead.
def accessToken = "ACCESS_TOKEN".bytes.encodeBase64().toString()
def get = new URL("https://raw.githubusercontent.com/xxxx/something/hosts").openConnection()
get.setRequestProperty("Authorization", "Basic " + accessToken)
def content = get.getInputStream().getText()

Terraform Error creating Topic: googleapi: Error 403: User not authorized to perform this action

I am getting googleapi: Error 403: User not authorized to perform this action when applying the following configuration:
provider "google" {
project = "xxxxxx"
region = "us-central1"
}
resource "google_pubsub_topic" "gke_cluster_upgrade_notifications" {
name = "cluster-notifications"
labels = {
foo = "bar"
}
message_storage_policy {
allowed_persistence_regions = [
"region",
]
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "xxxxxx-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "function_script_zip" {
type = "zip"
source_dir = "./function/"
output_path = "./function/main.py.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "function_script_zip" {
name = "main.py.zip"
bucket = google_storage_bucket.source_code.name
source = "./function/main.py.zip"
}
resource "google_cloudfunctions_function" "gke_cluster_upgrade_notifications" {---
-------
}
The service account has the owner role attached. I also tried:
1. export GOOGLE_APPLICATION_CREDENTIALS={{path}}
2. credentials = "${file("credentials.json")}", placing the JSON file in the Terraform root folder.
It seems that the account in use is missing some permissions (e.g. pubsub.topics.create) needed to create the Cloud Pub/Sub topic. The owner role should be sufficient to create the topic, as it contains the necessary permissions (you can check this here). Therefore, a wrong service account might be set in Terraform.
To address these IAM issues I would suggest:
1. Use the Policy Troubleshooter.
2. Impersonate the service account and make the API call using the CLI with the --verbosity=debug flag, which will provide helpful information about the missing permissions (see the sketch below).
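For instance, the second suggestion could look like this hypothetical invocation (the topic name comes from the configuration above; the service account email is a placeholder):

# Re-run the failing call as the suspect service account; the debug output
# names the exact permission that is missing (e.g. pubsub.topics.create).
gcloud pubsub topics create cluster-notifications \
    --impersonate-service-account=terraform@xxxxxx.iam.gserviceaccount.com \
    --verbosity=debug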

Terraform GCP unable to run metadata command for windows instance to create a user

I'm trying to create a user which can be used in a connection to move some files. When I try to create the user while creating an instance using metadata, the instance gets created successfully, but the metadata command is not executed.
resource "google_compute_instance" "win-dev-instance" {
  project      = "my_pro_1"
  zone         = "eu-west2-b"
  name         = "win-dev-instance"
  machine_type = "f1-micro"

  boot_disk {
    initialize_params {
      image = "windows-server-2016-r2-dc-v20191210"
    }
  }

  network_interface {
    network = "default"
    access_config {
    }
  }

  metadata {
    windows-startup-script-cmd = "net user /add devuser PASSWORD & net localgroup adminstrators devuser /add"
  }
}
In your example there is a typo: adminstrators should be administrators.
Solution
resource "google_compute_instance" "win-dev-instance" {
project = "my_pro_1"
zone = "eu-west2-b"
name = "win-dev-instance"
machine_type = "n1-standard-2"
boot_disk {
initialize_params {
image = "windows-server-2016-dc-v20191210"
}
}
network_interface {
network = "default"
access_config {}
}
metadata = {
windows-startup-script-cmd = "net user /add devuser Abc123123 & net localgroup administrators devuser /add"
}
}
I tested the solution based on windows-startup-script-cmd without success. Also, this script will be executed every time the instance restarts.
I think the best solution is to use the metadata with the windows-keys key as described here. The idea is to generate a key pair: GCP uses the public key to generate and encrypt the password, and you use the private key to decrypt it on reception.
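A rough Terraform sketch of that approach (hedged: the key material, expiry, and email are placeholders, and the JSON field names follow GCP's documented windows-keys format):

metadata = {
  windows-keys = jsonencode({
    userName = "devuser"
    email    = "devuser@example.com"
    modulus  = "<base64-encoded RSA public key modulus>"
    exponent = "<base64-encoded RSA public key exponent>"
    expireOn = "2020-01-01T00:00:00Z" # the key is ignored after this timestamp
  })
}

The guest environment generates a fresh password for devuser, encrypts it with the supplied public key, and writes it to the serial port, so only the holder of the matching private key can read it.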

Create schema for Google Cloud SQL PostgreSQL database using Terraform

I'm new to Terraform, and I want to create a schema for the postgres database created on a PostgreSQL 9.6 instance on Google Cloud SQL.
To create the PostgreSQL instance I have this in main.tf:
resource "google_sql_database_instance" "my-database" {
name = "my-${var.deployment_name}"
database_version = "POSTGRES_9_6"
region = "${var.deployment_region}"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = true
}
}
}
Then I tried to create a PostgreSQL provider like this:
provider "postgresql" {
  host     = "${google_sql_database_instance.my-database.ip_address}"
  username = "postgres"
}
Finally, creating the schema:
resource "postgresql_schema" "my_schema" {
  name  = "my_schema"
  owner = "postgres"
}
However, this configuration does not work. When I run terraform plan I get:
Inappropriate value for attribute "host": string required.
If I remove the Postgres object, I get:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: dial tcp :5432: connect: connection refused
Additionally, I would like to add a password for the user postgres which is created by default when the PostgreSQL instance is created.
EDIT: versions used:
Terraform v0.12.10
+ provider.google v2.17.0
+ provider.postgresql v1.2.0
Any suggestions?
There are a few issues with the Terraform setup that you have above.
Your instance does not have any authorized networks defined. You should change your instance resource to look like this (note: I used 0.0.0.0/0 just for testing purposes):
resource "google_sql_database_instance" "my-database" {
name = "my-${var.deployment_name}"
database_version = "POSTGRES_9_6"
region = "${var.deployment_region}"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = true
authorized_networks {
name = "all"
value = "0.0.0.0/0"
}
}
}
depends_on = [
"google_project_services.vpc"
]
}
As mentioned here, you need to create a user with a strong password:
resource "google_sql_user" "user" {
name = "test_user"
instance = "${google_sql_database_instance.my-database.name}"
password = "VeryStrongPassword"
depends_on = [
"google_sql_database_instance.my-database"
]
}
You should use the public_ip_address or ip_address.0.ip_address attribute of your instance to access the IP address. Also, you should update your provider and schema resource to reflect the user created above:
provider "postgresql" {
host = "${google_sql_database_instance.my-database.public_ip_address}"
username = "${google_sql_user.user.name}"
password = "${google_sql_user.user.password}"
}
resource "postgresql_schema" "my_schema" {
name = "my_schema"
owner = "test_user"
}
Your postgres provider depends on the google_sql_database_instance resource being created before the provider can be set up. All providers are initialized at the beginning of plan/apply, so if one has an invalid config (in this case an empty host) then Terraform will fail. There is no way to define a dependency between a provider and a resource within another provider.
There is, however, a workaround using the -target parameter:
terraform apply -target=google_sql_user.user
This will create the database user (as well as all its dependencies, in this case the database instance). Once that completes, follow it with:
terraform apply
This should then succeed, as the instance has already been created and the ip_address is available to be used by the postgres provider.
Final note: using public IP addresses without SSL to connect to Cloud SQL instances is not recommended for production instances; a sketch of enforcing SSL follows.
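As a minimal sketch on the same instance resource (require_ssl is the relevant ip_configuration attribute here; client-side certificate setup is not shown):

resource "google_sql_database_instance" "my-database" {
  # ... name, database_version, region as above ...
  settings {
    tier = "db-f1-micro"
    ip_configuration {
      ipv4_enabled = true
      require_ssl  = true # reject unencrypted connections to the public IP
      authorized_networks {
        name  = "all"
        value = "0.0.0.0/0"
      }
    }
  }
}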
This was my solution, and this way I just need to run terraform apply:
// POSTGRESQL INSTANCE
resource "google_sql_database_instance" "my-database" {
  database_version = "POSTGRES_9_6"
  region           = var.deployment_region

  settings {
    tier = var.db_machine_type
    ip_configuration {
      ipv4_enabled = true
      authorized_networks {
        name  = "my_ip"
        value = var.db_allowed_networks.my_network_ip
      }
    }
  }
}

// DATABASE USER
resource "google_sql_user" "user" {
  name     = var.db_credentials.db_user
  instance = google_sql_database_instance.my-database.name
  password = var.db_credentials.db_password

  depends_on = [
    "google_sql_database_instance.my-database"
  ]

  provisioner "local-exec" {
    command = "psql postgresql://${google_sql_user.user.name}:${google_sql_user.user.password}@${google_sql_database_instance.my-database.public_ip_address}/postgres -c \"CREATE SCHEMA myschema;\""
  }
}

How to load PostgreSQL data into GeoMesa (with the Cassandra datastore)?

I tried to load PostgreSQL data into GeoMesa (with a Cassandra datastore) using the JDBC converter.
Loading from a shapefile works fine, so the Cassandra and GeoMesa setup is okay. Next I tried to load data from PostgreSQL.
Command:
echo "SELECT year, geom, grondgebruik, crop_code, crop_name, fieldid, global_id, area, perimeter, geohash FROM v_gewaspercelen2018" | bin/geomesa-cassandra ingest -c catalog -P cassandraserver:9042 -k agrodatacube -f parcel -C geomesa.converters.parcel -u -p
The converter definition file geomesa.converters.parcel looks like this:
geomesa.converters.parcel = {
  type       = "jdbc"
  connection = "jdbc:postgresql://postgresserver:5432/agrodatacube"
  id-field   = "toString($5)"
  fields = [
    { name = "fieldid",      transform = "$5" }
    { name = "global_id",    transform = "$6" }
    { name = "year",         transform = "$0" }
    { name = "area",         transform = "$7" }
    { name = "perimeter",    transform = "$8" }
    { name = "grondgebruik", transform = "$2" }
    { name = "crop_code",    transform = "$3" }
    { name = "crop_name",    transform = "$4" }
    { name = "geohash",      transform = "$9" }
    { name = "geom",         transform = "$1" }
  ]
}
The geomesa output is:
INFO Schema 'parcel' exists
INFO Running ingestion in local mode
INFO Ingesting from stdin with 1 thread
[ ] 0% complete 0 i[ ] 0% complete 0 ingested 0 failed in 00:00:01
ERROR Fatal error running local ingest worker on <stdin>
[ ] 0% complete 0 i[ ] 0% complete 0 ingested 0 failed in 00:00:01
INFO Local ingestion complete in 00:00:01
INFO Ingested 0 features with no failures for file: <stdin>
WARN Some files caused errors, ingest counts may not be accurate
Does someone have a clue what is wrong here?
You can check in the logs folder for more detailed errors. However, just at a first glance, the JDBC converter follows standard result set numbering, meaning the first field is $1 (not $0). In addition, you may need to transform your geometry with a transform function, i.e. geometry($2).
Thanks Emilio, both suggestions made sense!
I made the converter field count start at 1.
Inside the converter definition file changed
{ name = "geom", transform = "$2" }
into
{ name = "geom", transform = "geometry($2)" }
The SQL Select command should be:
SELECT year, ST_AsText(geom), .... FROM v_gewaspercelen2018
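Putting both changes together, the corrected converter definition would look roughly like this (a sketch derived from the SELECT column order above: every field index shifts by one, and id-field follows fieldid to $6):

geomesa.converters.parcel = {
  type       = "jdbc"
  connection = "jdbc:postgresql://postgresserver:5432/agrodatacube"
  id-field   = "toString($6)"
  fields = [
    { name = "year",         transform = "$1" }
    { name = "geom",         transform = "geometry($2)" }
    { name = "grondgebruik", transform = "$3" }
    { name = "crop_code",    transform = "$4" }
    { name = "crop_name",    transform = "$5" }
    { name = "fieldid",      transform = "$6" }
    { name = "global_id",    transform = "$7" }
    { name = "area",         transform = "$8" }
    { name = "perimeter",    transform = "$9" }
    { name = "geohash",      transform = "$10" }
  ]
}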
By the way, username and password are part of the connection string (which is inside the file geomesa.converters.parcel):
connection = "jdbc:postgresql://postgresserver:5432/agrodatacube?user=username&password=password"
So the -u and -p flags do not appear in the final command:
echo "SELECT year, ST_AsText(geom), grondgebruik, crop_code, crop_name, fieldid, global_id, area, perimeter, geohash FROM v_gewaspercelen2018" | bin/geomesa-cassandra ingest -c catalog -P cassandraserver:9042 -k agrodatacube -f parcel -C geomesa.converters.parcel
With these changes it works.
Thanks again!
Hugo