I am looking into importing my nodes using the EC2 plugin
My mapping is set up to import the key name as one of the node attributes, but I can't figure out how to concatenate the dynamic value coming from the node with the string that represents the SSH key path. Effectively, what I would like to achieve is something along these lines:
ssh-keypath.default=/path/to/key/directory/${keyName}.pem;
This, however, sets my key path to the literal string "/path/to/key/directory/${keyName}.pem".
I figured out how to do this:
In Mapping Params, I set keyName.selector=keyName.
In Default Node Executor / SSH Key File path, I can now set /path/to/keys/${node.keyName}.pem.
This means that if I add all of my keys to /path/to/keys/, they will load dynamically as long as the keyName attribute is correct.
While checking the values of YAML files for a Helm chart, one often encounters
"changeme" passed as a value. E.g.:
rabbitmq.conf: |-
  ## username and password
  default_user={{.Values.rabbitmq.username}}
  default_pass=CHANGEME
or:
config:
  accumuloSite:
    instance.secret: "changeme"
  userManagement:
    rootPassword: "changeme"
What is the meaning of "changeme"?
Is it just a word that needs to be replaced? If so, what will happen if it is not? A security hole, or hopefully an error?
Or is it a keyword that lets the system replace this with a secure password? If so, how does the system know what type of password to produce?
In either case, how does the chart connect this value with other places where it might be needed? (E.g. if this is a password that a second, dependent service needs, how is the manually assigned or derived password propagated to that second service?)
(Mainly interested in Helm v3, if that matters.)
I'd almost always expect this to be just a placeholder that needs to be filled in. In many cases YAML can wind up having inconsistent types if a value is actually absent, so it can be useful to have some value in the chart's values.yaml, but for things like passwords there's not a "right default value" you could include.
Nothing will automatically replace these for you or warn you if you're using the default values. Nothing obviously bad will happen if you do deploy with these values, but I'm sure changeme is up there with passw0rd on the short list of default passwords to try if you're actively trying to break into a system.
If you were writing your own chart, you could also test if a value is present using required and explain what's missing, and this approach might be more secure than having a well-known default password.
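For illustration, a required check in a template might look roughly like this; the value path and the error message are examples, not taken from the chart above:

```yaml
# templates/secret.yaml (sketch): fail at render time if no password is given,
# instead of shipping a well-known default.
rabbitmq.conf: |-
  default_user={{ .Values.rabbitmq.username }}
  default_pass={{ required "rabbitmq.password must be set (e.g. --set rabbitmq.password=...)" .Values.rabbitmq.password }}
```

With this, `helm install` aborts with the given message whenever the value is missing, rather than deploying a guessable credential.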
I want to create some servers on DigitalOcean using Pulumi. I have the following code:
for i in range(0, amount):
    name = f"droplet-{i+1}"
    droplet = digitalocean.Droplet(
        name,
        image=_image,
        region=_region,
        size=_size,
    )
    pulumi.export(f"droplet-ip-{i+1}", droplet.ipv4_address)
This is correctly outputting the IP address of the servers on the console.
However, I would like to use the IP addresses elsewhere in my Python script. Therefore I added the droplets to a list as follows:
droplets = []
for i in range(0, amount):
    name = f"droplet-{i+1}"
    droplet = digitalocean.Droplet(
        name,
        image=_image,
        region=_region,
        size=_size,
    )
    pulumi.export(f"droplet-ip-{i+1}", droplet.ipv4_address)
    droplets.append(droplet)
to then loop over the droplets as follows:
for droplet in droplets:
    print(droplet.ipv4_address)
In the Pulumi output, I see the following:
Diagnostics:
  pulumi:pulumi:Stack (Pulumi_DigitalOcean-dev):
    <pulumi.output.Output object at 0x105086b50>
    <pulumi.output.Output object at 0x1050a5ac0>
I realize that while the droplets are still being created, the IP address is unknown, but I'm adding the droplets to the list after their creation.
Is there a way to know the IP addresses at some point so they can be used elsewhere in the Python script?
The short answer is that because these values are Outputs, if you want the strings, you'll need to use .apply:
https://www.pulumi.com/docs/intro/concepts/inputs-outputs/#apply
To access the raw value of an output and transform that value into a new value, use apply. This method accepts a callback that will be invoked with the raw value, once that value is available.
You can print these IPs by iterating over the list and calling the apply method on the ipv4_address output value:
...
    pulumi.export(f"droplet-ip-{i+1}", droplet.ipv4_address)
    droplets.append(droplet)
...
for droplet in droplets:
    droplet.ipv4_address.apply(lambda addr: print(addr))
$ pulumi up
...
Diagnostics:
  pulumi:pulumi:Stack (so-71888481-dev):
    143.110.157.64
    137.184.92.205

Outputs:
    droplet-ip-1: "137.184.92.205"
    droplet-ip-2: "143.110.157.64"
Depending on how you plan to use these strings in your program, this particular approach may not be perfect, but in general, if you want the unwrapped value of a pulumi.Output, you'll need to use .apply().
pulumi.Output.all() also comes in handy if you want to wait for several output values to resolve before using them:
https://www.pulumi.com/docs/intro/concepts/inputs-outputs/#all
If you have multiple outputs and need to join them, the all function acts like an apply over many resources. This function joins over an entire list of outputs. It waits for all of them to become available and then provides them to the supplied callback.
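To make the deferred-value behavior concrete, here is a toy stand-in, not the real pulumi classes, that mimics the apply/all semantics described above; the two IP strings are just the sample values from this answer:

```python
# Toy illustration of why print(output) shows an object, not a string:
# the value only exists inside a callback once it becomes available.
# This is NOT the real pulumi API, just a sketch of the semantics.

class ToyOutput:
    def __init__(self, resolve):
        self._resolve = resolve  # callable producing the value "later"

    def apply(self, fn):
        # Defer fn until the underlying value is available,
        # returning a new deferred value.
        return ToyOutput(lambda: fn(self._resolve()))

    @staticmethod
    def all(*outputs):
        # Wait for every output, then hand the whole list to the callback.
        return ToyOutput(lambda: [o._resolve() for o in outputs])

ip1 = ToyOutput(lambda: "143.110.157.64")
ip2 = ToyOutput(lambda: "137.184.92.205")

both = ToyOutput.all(ip1, ip2).apply(lambda addrs: ", ".join(addrs))
print(both._resolve())  # -> 143.110.157.64, 137.184.92.205
```

The real Output works the same way conceptually, except you never call resolve yourself; the Pulumi engine invokes your apply callbacks once the cloud provider has returned the values.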
Hope that helps!
I'm having trouble accessing an array key in Fluid. The key name is "common.title". How can I access its value? Escaping the dot is not working. I know it's not good to have a dot in a key name, but the values come from a different source. See the attached image for more information.
Did you try escaping the dot by using common\.title?
I have been using nsupdate for a long time in various scripts dealing with dynamic DNS zone updates without any issue. I have always used TSIG to authenticate the requests against the DNS server, where the keys have been generated by ddns-confgen. That means that I didn't use key pairs like those generated by dnssec-keygen; rather, the keys file format is like the following:
key "ddns-key.my.domain.example" {
    algorithm hmac-sha512;
    secret "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefijklmnopqrstuvwxyzabcdefghij==";
};
The respective zone configuration then contains:
update-policy {
    grant ddns-key.my.domain.example name host.my.domain.example ANY;
};
Now I have a more complicated task and am trying to solve it with a Perl script. I have studied the documentation of Net::DNS::Update, Net::DNS and Net::DNS::RR::TSIG, and have studied a lot of examples which should make the usage of those modules clear.
However, each example I saw, when coming to signing a request via TSIG, used key files in the format dnssec-keygen produces, and not key files in the format I have. And indeed, something like
$o_Resolver = new Net::DNS::Resolver(nameservers => ['127.0.0.1']);
$o_Update = new Net::DNS::Update('my.domain.example', 'IN');
$o_Update -> push(update => rr_del('host A'));
$o_Update -> push(update => rr_add('host 1800 IN A 192.0.2.1'));
$o_Update -> sign_tsig('/etc/bind/ddns-key.my.domain.example.key');
$o_Reply = ($o_Resolver -> send($o_Update));
does not work, producing the following message:
TSIG: unable to sign packet at /path/to/script.pl line 240.
unknown type "ddns-key.my.domain.example" at /usr/local/share/perl/5.20.2/Net/DNS/RR.pm line 669.
file /etc/bind/ddns-key.my.domain.example.key line 1
at /usr/local/share/perl/5.20.2/Net/DNS/RR/TSIG.pm line 403.
TSIG: unable to sign packet at /path/to/script.pl line 240.
I suppose I now have two options: Either use keys in the format dnssec-keygen produces, which seem to be directly usable with Net::DNS and its friends, or construct the TSIG key manually as shown in the docs:
my $key_name = 'tsig-key';
my $key = 'awwLOtRfpGE+rRKF2+DEiw==';
my $tsig = new Net::DNS::RR("$key_name TSIG $key");
$tsig->fudge(60);
my $update = new Net::DNS::Update('example.com');
$update->push( update => rr_add('foo.example.com A 10.1.2.3') );
$update->push( additional => $tsig );
[Of course, I wouldn't hard-code the key in my Perl script, but read it from the key file instead.]
Switching to another key file format would mean changing the DNS server configuration, which is not an elegant solution. "Manually" reading the key files and then "manually" constructing the keys is not very satisfying either, hence the question:
Did I understand correctly that it is not possible to use key files in the ddns-confgen format directly with Net::DNS and its sub-modules to TSIG-sign DNS update requests?
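For what it's worth, "manually" reading such a key file only means extracting three fields: the key name, the algorithm, and the secret. A quick parsing sketch (Python here just to illustrate the pattern; the same regex translates directly to Perl; the secret shown is a shortened placeholder):

```python
import re

# Extract name, algorithm, and secret from a key file in ddns-confgen
# format, so they can be handed to whatever TSIG-signing call the DNS
# library expects. KEYFILE_TEXT stands in for the file's contents.
KEYFILE_TEXT = '''\
key "ddns-key.my.domain.example" {
    algorithm hmac-sha512;
    secret "abcd...==";
};
'''

PATTERN = re.compile(
    r'key\s+"(?P<name>[^"]+)"\s*\{\s*'
    r'algorithm\s+(?P<algorithm>[\w-]+)\s*;\s*'
    r'secret\s+"(?P<secret>[^"]+)"\s*;'
)

key = PATTERN.search(KEYFILE_TEXT).groupdict()
print(key["name"], key["algorithm"])
```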
I currently have a Talend job which reads from a context file and feeds into context variables. I have a field called ftppassword and store the hard coded password in the context file. I then have a context variable in the job and refer to that in my job.
With this setup my job runs fine but if I change the context file to contain a location to a password file instead of the hard coded password, I get the following exception:
Exception in component tFTPConnection_1
com.enterprisedt.net.ftp.FTPException: 530 Login incorrect.
    at com.enterprisedt.net.ftp.FTPControlSocket.validateReply(FTPControlSocket.java:1179)
    at com.enterprisedt.net.ftp.FTPClient.password(FTPClient.java:1844)
    at com.enterprisedt.net.ftp.FTPClient.login(FTPClient.java:1766)
Edit - 2014-12-08
Output of context parameters:
Implicit_Context_Context set key "ftphost" with value "ftp.host.com"
Implicit_Context_Context set key "ftpport" with value "21"
Implicit_Context_Context set key "ftpusername" with value "myuser"
Implicit_Context_Context set key "ftppassword" with value "/opt/password_files/DW/test1.password"
Implicit_Context_Context set key "ftpremotepath" with value "/Output/"
Implicit_Context_Context set key "ftpfilemask" with value "test_dn.zip"
I have also tried changing the data type of ftppassword to File and to Password, but had no luck with that.
The implicit tContextLoad option on the job is the equivalent of putting a tFileInputDelimited component at the start of your job with a schema of 2 columns: key and value. This is then read into a tContextLoad (hence the option name) to load the contexts in your job.
If your password file isn't in a key-value format then you can't use it this way.
The simplest option is to stick with the way you had it working before and use an implicit tContextLoad to load a delimited file with key-value pairs of your context variables.
Another option would be to no longer do this using the implicit tContextLoad option and instead to do it explicitly.
To do this you'd want to read in your password file using an appropriate connector such as a tFileInputDelimited. If you were reading in something that looked like /etc/passwd then you could split it on : to get:
username
password
user id
group id
user id info
home directory
shell location
You could then use a tMap to populate an output schema of:
key
value
You would then enter "ftppassword" as the key and connect the password value to the value column. You'll also want to filter this record set so that only one password is set, so you might use something like "ftpUser".equals(row1.username) in the expression filter of your output table in the tMap.
Then just connect this to a tContextLoad component and your job should load the password from /etc/passwd for the "ftpUser" user account.
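Outside Talend, the tFileInputDelimited → tMap → tContextLoad flow just described boils down to something like the following. The sample lines and the "ftpUser" account name are hypothetical, and note that on most modern systems the second field of /etc/passwd is just "x", with the real hashes in /etc/shadow:

```python
# Sketch of the flow: read delimited lines (tFileInputDelimited),
# split and filter to one user (tMap), emit key/value pairs that the
# tContextLoad equivalent stores as context variables.
PASSWD_LINES = [
    "root:x:0:0:root:/root:/bin/bash",
    "ftpUser:p4ssw0rd:1001:1001::/home/ftpUser:/bin/sh",
]

context = {}
for line in PASSWD_LINES:
    username, password = line.split(":")[:2]   # first two passwd fields
    if username == "ftpUser":                  # the tMap expression filter
        context["ftppassword"] = password      # key/value row for tContextLoad

print(context)  # -> {'ftppassword': 'p4ssw0rd'}
```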
If you want to pass the path of a separate file containing the password (so that one file holds all the other context variables for the job while the password file lives elsewhere), then you'd pass a context variable pointing to the password file and explicitly consume that file in the job.
In this case you may have a context file that is loaded at run time with contexts such as ftpremotepath, ftphost and ftpfilemask that can be set directly in the file and then a ftpusercredentials context variable that is a file path to a separate credentials file.
This file could then be another delimited file containing key-value pairs of context name and value such as:
ftpuser,myuser
ftppasswd,p4ssw0rd
Then at the start of your job you would explicitly read this in using a tFileInputDelimited component with a schema of 2 columns: key and value. You could then connect this to a tContextLoad component and this will load the second set of context variables into memory as well.
You could then use these as normal by referring to them as context.ftpuser and context.ftppasswd.