Installing a dependent module that my team wrote - PowerShell

How do I tell DSC to install a resource/module from our internal code repository (not a private Gallery feed) first?
Do I just use a basic Script Resource and bring the files down (somehow) into $PSModulePath and import them?
Update
There's a cmdlet called Get-DscResource that lists the resources available on a system (i.e. those that reside in the correct path(s)) and provides information that can be used with Import-DscResource, a 'dynamic keyword' placed inside a Configuration block in a DSC script to declare its dependencies.
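For illustration (MyTeamModule is a hypothetical module name, not one from our repository):
# List the DSC resources available on this system, i.e. those found under $env:PSModulePath
Get-DscResource

# Narrow the listing to a specific module
Get-DscResource -Module MyTeamModule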
As for getting the resources/modules down to the target system, I'm not sure yet.

If you are using a DSC pull server, then you just need to make sure that your custom module(s) are on that server. I usually put them in Program Files\WindowsPowerShell\Modules.
In the configuration you can just specify that you want to import your custom module and then proceed with the custom DSC resource:
Configuration myconfig {
    Import-DscResource -ModuleName customModule

    Node somenode {
        customresource somename {
        }
    }
}
If you don't have a pull server and you want to push configurations, then you have to make sure that your custom modules are on all target systems. You can use the DSC File resource to copy the modules (for example, as sketched below), or just use a PowerShell script or any other means to copy them, and then use DSC for your custom configurations.
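A rough sketch of that File-resource approach (the share path, module name, and node name are placeholders, not from the question):
Configuration DeployCustomModules {
    Node 'TargetServer' {
        # Copy the module folder from a network share into the module path
        # that DSC searches, before pushing the configuration that uses it.
        File customModule {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = '\\fileshare\DscModules\customModule'
            DestinationPath = 'C:\Program Files\WindowsPowerShell\Modules\customModule'
        }
    }
}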

Related

Where is a file created via Terraform code stored in Terraform Cloud?

I've been using Terraform for some time but I'm new to Terraform Cloud. I have a piece of code that, if I run it locally, will create a .tf file under a folder that I specify, but if I run it with the Terraform CLI on Terraform Cloud this doesn't happen. I'll show it to you so it will be clearer for everyone.
resource "genesyscloud_tf_export" "export" {
directory = "../Folder/"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
So basically, when I launch this code with terraform apply locally, it creates a .tf file with everything I need. Where? It goes up one folder and stores the file under the folder "Folder".
But when I execute the same code on Terraform Cloud, obviously this doesn't happen. Do any of you have a workaround for this kind of trouble? How can I manage to store this file, for example in a GitHub repo, when executing GitHub Actions? Thanks beforehand.
The Terraform Cloud remote execution environment has an ephemeral filesystem that is discarded after a run is complete. Any files you instruct Terraform to create there during the run will therefore be lost after the run is complete.
If you want to make use of this information after the run is complete then you will need to arrange to either store it somewhere else (using additional resources that will write the data to somewhere like Amazon S3) or export the relevant information as root module output values so you can access it via Terraform Cloud's API or UI.
I'm not familiar with genesyscloud_tf_export, but from its documentation it sounds like it will create either one or two files in the given directory:
genesyscloud.tf or genesyscloud.tf.json, depending on whether you set export_as_hcl. (You did, so I assume it'll generate genesyscloud.tf.)
terraform.tfstate if you set include_state_file. (You didn't, so I assume that file isn't important in your case.)
Based on that, I think you could use the hashicorp/local provider's local_file data source to read the generated file into memory once the MyPureCloud/genesyscloud provider has created it, like this:
resource "genesyscloud_tf_export" "export" {
directory = "../Folder"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
data "local_file" "export_config" {
filename = "${genesyscloud_tf_export.export.directory}/genesyscloud.tf"
}
You can then refer to data.local_file.export_config.content to obtain the content of the file elsewhere in your module and declare that it should be written into some other location that will persist after your run is complete.
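For example, a root module output value (the output name here is arbitrary) would make the generated HCL retrievable from the Terraform Cloud UI or API after the run:
output "genesyscloud_export_hcl" {
  value = data.local_file.export_config.content
}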
This genesyscloud_tf_export resource type seems unusual in that it modifies data on local disk and so its result presumably can't survive from one run to the next in Terraform Cloud. There might therefore be some problems on the next run if Terraform thinks that genesyscloud_tf_export.export.directory still exists but the files on disk don't, but hopefully the developers of this provider have accounted for that somehow in the provider logic.

Azure batch Application package not getting copied to Working Directory of Task

I have created an Azure Batch pool with a Linux machine and specified an Application Package for the pool.
My command line is:
command='python $AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py',
python3: can't open file '$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py':
[Errno 2] No such file or directory
When I connect to the node and look at the working directory, none of the Application Package files are present there.
How do I make sure that the files from the Application Package are available in the working directory, or how can I invoke/execute files under the Application Package from the command line?
Make sure that your async operations have the proper await in place before you start using the package in your code.
Also, please share your design / pseudo-code scenario and how you are approaching it as a design.
Further to add:
It seems like this one is a pool-level package.
The error suggests that the application environment variable is either used incorrectly or that there is some other user-level issue. Please check the link below, especially the section where the use of the environment variable is described.
This looks like a user-level issue because, if there were an error downloading the package resource, it would be visible to you via an exception handler, at the tool level if you are using Batch Explorer / BatchLabs, or through code-level exception handling.
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Reason/Rationale:
If the pool-level or task-level application has an error, an error list will come back; if there was an error in the application package, it will be returned as a UserError or an AppPackageError, which will be visible in the exception handler of the code.
Also, you can always RDP into your node and check the package availability; information here: https://learn.microsoft.com/en-us/azure/batch/batch-api-basics#connecting-to-compute-nodes
I once created a small sample to help people out, so that resource might help you to check out the usage.
Hope the rest helps.
On Linux, the application package environment variable with the version string is formatted as:
AZ_BATCH_APP_PACKAGE_{0}_{1}
On Windows it is formatted as:
AZ_BATCH_APP_PACKAGE_APPLICATIONID#version
where {0} is the application name and {1} is the version.
$AZ_BATCH_APP_PACKAGE_scriptv1_1 will take you to the root folder where the application was unzipped.
Does this "exact" path exist in that location?
tasks/XXX/get_XXXXX_data.py
You can see more information here:
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Edit: Just saw this question: "or can I invoke/execute files under Application Package from command line"
Yes, you can invoke and execute files from the application package directory with the environment variable above.
If you type env on the node you will see the environment variables that have been set.
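One related thing to check (a general note on Batch behavior, not something stated in the question): task command lines are not run under a shell by default, so the environment variable is only expanded if you invoke a shell yourself, for example:
/bin/bash -c 'python3 $AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py'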

DSC - alter MOF after compiling it

Is it possible to alter a MOF file in DSC after compiling it? I'm trying to have a generic MOF on a pull server and, from a client, ask for that MOF with specific parameters.
Thanks!
The MOF is just a text file; you could modify it yourself.
But there's no provision in the pull server that would take a parameter and modify the requested MOF on the fly. Additionally you'd have to recalculate the checksum if you did modify it.
What exactly are you trying to do that this would be necessary?
The GUID is used to identify the nodes. If you share the GUID across multiple nodes, then it is difficult to identify which node hasn't gotten an update (using the status endpoint). The correct way of doing it is to use configuration data and compile a MOF for each individual node. Here is a link to a blog post on how to do that: https://blogs.msdn.microsoft.com/powershell/2014/01/09/separating-what-from-where-in-powershell-dsc/
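As a minimal sketch of that approach (the node names and the managed setting are hypothetical), configuration data lets you keep one generic configuration and compile a separate MOF per node:
$configData = @{
    AllNodes = @(
        @{ NodeName = 'Server01'; AppSourcePath = '\\share\app1' }
        @{ NodeName = 'Server02'; AppSourcePath = '\\share\app2' }
    )
}

Configuration GenericConfig {
    Node $AllNodes.NodeName {
        # Per-node values come from the configuration data, not the MOF itself
        File AppFiles {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = $Node.AppSourcePath
            DestinationPath = 'C:\App'
        }
    }
}

# Produces Server01.mof and Server02.mof, each with its own parameters
GenericConfig -ConfigurationData $configData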

How do I use Puppet's ralsh with resource types provided by modules?

I have installed the postgresql module from Puppet Forge.
How can I query PostgreSQL resources using ralsh?
None of the following works:
# ralsh postgresql::db
# ralsh puppetlabs/postgresql::db
# ralsh puppetlabs-postgresql::db
I was hoping to use this to get a list of databases (including attributes such as character sets) and user names/passwords from the current system in a form that I can paste into a puppet manifest to recreate that setup on a different machine.
In principle, any Puppet client gets the current state of your system from another program called Facter. You should create a custom Fact (a module of Facter) and then include it in your Puppet client. Afterwards, I think you could call this custom Fact from ralsh.
More information about creating a custom Fact can be found here.
In creating your own Fact, you should execute your SQL query and then save the result into a particular variable.
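As a rough sketch (this uses Facter's external facts mechanism rather than a Ruby custom fact; the fact name, psql flags, and facts.d path are illustrative assumptions), an executable script dropped into Facter's facts.d directory can emit the database list as a fact:
#!/usr/bin/env python
# External fact: emit the PostgreSQL database names as a single fact.
# Place this (made executable) in Facter's external facts directory,
# e.g. /etc/facter/facts.d/, then `facter postgres_databases` shows it.
import subprocess

out = subprocess.check_output(
    ["psql", "-U", "postgres", "-At", "-c",
     "SELECT datname FROM pg_database WHERE NOT datistemplate;"]
)

databases = ",".join(out.decode().split())
print("postgres_databases=" + databases)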

Multiple installations of my app - how do I handle it

I have an app written in PHP, MySQL, etc. The app has a few dependencies such as beanstalkd, Solr and a few PHP extensions.
For each customer we have a separate installation of the app, either on a server shared with other customers or on a server with only that customer.
For now we're using a Puppet script to bootstrap new customers and then we manually go to each customer to make a git pull, update the db, etc., whenever something changes.
What we're looking for is really a tool that has as many of the following features as possible:
Web interface that allows us to see all customers and their current revision
Ability to bootstrap new installations
Ability to update existing installations to a specific revision or branch
We're not looking for a tool to bootstrap new servers - we still do that manually. Instead we're looking for a way to automate the setup of clients on an existing server.
Would Chef or Puppet be sufficient for this, is there a more suitable tool, or would you recommend rolling something ourselves?
I'm a full-time developer working on Puppet at Puppet Labs. I'm also the co-author of Pro Puppet.
Puppet is certainly sufficient for your goals. Here's one way to solve this problem using Puppet. First, I'll address dependency management, since the dependencies should only be managed once regardless of how many instances of the application are being managed. Then, I'll address how to handle multiple installations of your app using a defined resource type in Puppet and the vcsrepo resource type.
First, regarding the organization of the Puppet code to handle multiple installations of the same app: the dependencies you mention, such as beanstalkd, Solr, and the PHP extensions, should be modeled using a Puppet class. This class will be included in the configuration catalog only once, regardless of how many copies of the application are managed on the node. An example of this class might look something like:
# <modulepath>/site/manifests/app_dependencies.pp
class site::app_dependencies {
  # Make all package resources in this class default to
  # being managed as installed on the node
  Package { ensure => installed }

  # Now manage the dependencies
  package { 'php': }
  package { 'solr': }
  package { 'beanstalk': }

  # The beanstalk worker queue service needs to be running
  service { 'beanstalkd':
    ensure  => running,
    require => Package['beanstalk'],
  }
}
Now that you have your dependencies in a class, you can simply include this class on the nodes where your application will be deployed. This usually happens in the site.pp file or in the Puppet Dashboard if you're using the web interface.
# $(puppet config print confdir)/manifests/site.pp
node www01 {
  include site::app_dependencies
}
Next, you need a way to declare multiple instances of the application on the system. Unfortunately, there's not an easy way to do this from a web interface right now, but it is possible using Puppet manifests and a defined resource type. This solution uses the vcsrepo resource to manage the Git repository checkout for the application.
# <modulepath>/myapp/manifests/instance.pp
define myapp::instance($git_rev='master') {
  # Resource defaults. The owner and group might be the web
  # service account instead of the root account.
  File {
    owner => 0,
    group => 0,
    mode  => 0644,
  }

  # Create a directory for the app. The resource title will be copied
  # into the $name variable when this resource is declared in Puppet
  file { "/var/lib/myapp/${name}":
    ensure => directory,
  }

  # Check out the Git repository at a specific version
  vcsrepo { "/var/lib/myapp/${name}/working_copy":
    ensure   => present,
    provider => git,
    source   => 'git://github.com/puppetlabs/facter.git',
    revision => $git_rev,
  }
}
With this defined resource type, you can declare multiple installations of your application like so:
# $(puppet config print confdir)/manifests/site.pp
node www01 {
  include site::app_dependencies

  # Our app instances always need their dependencies to be managed first.
  Myapp::Instance { require => Class['site::app_dependencies'] }

  # Multiple instances of the application
  myapp::instance { 'jeff.acme.com': git_rev => 'tags/1.0.0' }
  myapp::instance { 'josh.acme.com': git_rev => 'tags/1.0.2' }
  myapp::instance { 'luke.acme.com': git_rev => 'tags/1.1.0' }
  myapp::instance { 'teyo.acme.com': git_rev => 'master' }
}
Unfortunately, there is currently no easy-to-use, out-of-the-box way to make this information visible from a web GUI. It is certainly possible, however, using the External Node Classifier API (a small sketch follows the list below). For more information about pulling external data into Puppet, please see these resources:
External Nodes Documentation
R.I. Pienaar's Hiera (Hierarchical data store)
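As a small illustration of the ENC approach (not part of the original answer; the node-to-data mapping and parameter name are hypothetical), an external node classifier is just an executable that Puppet calls with the node name and that prints YAML describing classes and top-scope parameters:
#!/usr/bin/env python
# Minimal ENC sketch: Puppet passes the node name as argv[1] and expects
# YAML on stdout. The parameter could then drive which myapp::instance
# resources a wrapper class declares.
import sys

node = sys.argv[1]

# Hypothetical per-node data
instances = {
    "www01": ["jeff.acme.com", "josh.acme.com"],
}

print("classes:")
print("  - site::app_dependencies")
print("parameters:")
print("  myapp_instances: [%s]" % ", ".join(instances.get(node, [])))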
Hope this information helps.