Is it possible to load a user command from an in-memory namespace? - dyalog

From what I can tell, user commands can only be loaded from namespace scripts located in the directories specified by the SALT cmddir setting.
But I have an interest in loading a user command directly from an in-memory namespace, without ever having a namespace script reside on a locally accessible disk.
An example use case might be loading a namespace that defines one or more user commands from a remote repository via ]get, and then "installing" the user commands into the workspace directly from memory.
Is this possible?

Bad news: No, you cannot currently do that.
Good news: I'm working on a rewrite of the user command system which makes this trivial to do.
Source: I'm in charge of the user command system at Dyalog.

Related

Some environment variables not as expected when running under Service Fabric

When running a guest executable in Service Fabric I have noticed that some environment variables do not seem to be mapped to where I would expect them to be.
For example, %appdata% didn't resolve to the usual:
C:\Users\<username>\AppData\Roaming
but instead resolved to a location deep inside C:\Windows.
I have also noticed that when running applications using Erlang, the '.erlang.cookie' file, which is usually placed in the user's home directory:
C:\Users\<username>\.erlang.cookie
is instead being created in C:\Windows.
Is there a reason why these are mapped this way? Currently I am having to make the guest executable not use 'appdata', and grant it administrative privileges using a policy in the application manifest to give it write access to C:\Windows so it can write the '.erlang.cookie' file.
That's because services run under the NetworkService account by default.
You can use a RunAs policy to run under a different account, or use the configuration system for settings.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-runas-security
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-configuration
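If it helps to see it concretely, here is a small diagnostic sketch (my own, not from the linked docs) that you could run in the same context as the guest executable to log which account it is running as and where its special folders resolve. Under NetworkService the user profile lives below C:\Windows\ServiceProfiles, which is why %APPDATA% ends up under C:\Windows.
# Diagnostic sketch: print the effective identity and the resolved profile paths.
$identity    = [System.Security.Principal.WindowsIdentity]::GetCurrent().Name
$appData     = [System.Environment]::GetFolderPath('ApplicationData')
$userProfile = $env:USERPROFILE
Write-Output "Running as:  $identity"
Write-Output "APPDATA:     $appData"
Write-Output "USERPROFILE: $userProfile"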

Does Chef powershell_script have limited privileges?

I am encountering several situations where a command in a Chef recipe's powershell_script block appears to fail, whereas if I run the same command in PowerShell outside of Chef, it works.
The two in particular are "regedit", which I am trying to use to set a key for app compatibility, and "net use z:...." to create a mapped drive. Both of these work fine if I run them in PowerShell, but if I use them inside a powershell_script block in a recipe, they don't appear to do anything.
So I'm wondering: is this because Chef runs commands inside powershell_script at some lower privilege level?
Also, if so, how do I change it so that regedit and net use will work?
Thanks,
Jim
EDIT 1: This seems to work for adding the registry entry I needed:
registry_key "HKEY_CURRENT_USER\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags" do
  values [{
    :name => "{2b9034f3-b661-4d36-a5ef-60ab5a711ace}",
    :type => :dword,
    :data => 0x00000004
  }]
  action :create
end
That prevents the compatibility popup that I get when we run the SharePoint installer.
EDIT 2: I hope this is OK, but for the record, for more visibility, and so that I remember it, I found this regarding mapping drives in Windows with Chef:
Mount windows shares on a windows node with Chef
and:
https://tickets.opscode.com/browse/CHEF-1267
I haven't tried that yet, but it seems like the answer to my drive-mapping need... hopefully.
The chef client service runs as Local System (SYSTEM) by default.
In Windows, that user has full privileges on the local system, like root basically, but on the network it authenticates as the computer object.
So if you are trying to use regedit to change something in, for example, HKEY_CURRENT_USER, then you need to remember that the code will not see the same "current user" that you see when you run it interactively. Also, regedit is an .exe; you should really do what you need through the PowerShell registry provider or .NET objects.
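For example, a hedged sketch of setting the same app-compat value through the registry provider instead of shelling out to regedit.exe. The key and value name are taken from the question's Edit 1; writing it to HKLM instead of HKCU is my assumption about what you actually want when the client runs as SYSTEM.
# Sketch: set the value via the registry provider; adjust the hive/path as needed.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags'
if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null   # create the key if it does not exist yet
}
New-ItemProperty -Path $key -Name '{2b9034f3-b661-4d36-a5ef-60ab5a711ace}' `
    -PropertyType DWord -Value 4 -Force | Out-Null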
For net use, you are trying to map a drive. It's likely that the computer account doesn't have the rights to the share that your user has. Again, net.exe is a separate executable. net use maps a network share to a drive letter (usually), and in my opinion you shouldn't be doing that in a configuration script; you should access the UNC path directly. Either way, I still think you're probably running into a permissions issue here.
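A rough sketch of the UNC-path approach; the server, share, file, and account names below are made up, and in a real recipe you would pull the credentials from an encrypted data bag or similar rather than hard-coding them.
# Copy straight from the UNC path; no drive letter involved.
Copy-Item -Path '\\fileserver01\installers\setup.exe' -Destination 'C:\Temp\setup.exe'

# If the computer account has no rights on the share, map it temporarily
# with explicit credentials just for the duration of the copy.
$pass = ConvertTo-SecureString 'example-password' -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList 'DOMAIN\svc-deploy', $pass
New-PSDrive -Name Src -PSProvider FileSystem -Root '\\fileserver01\installers' -Credential $cred | Out-Null
Copy-Item -Path 'Src:\setup.exe' -Destination 'C:\Temp\setup.exe'
Remove-PSDrive -Name Src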
You could change the credentials of the service to use a user account that has all the rights you want, but before doing something like that you should consider changing your workflow to not need that.

Docker and sensitive information used at run-time

We are dockerizing an application (written in Node.js) that will need to access some sensitive data at run-time (API tokens for different services) and I can't find any recommended approach to deal with that.
Some information:
The sensitive information is not in our codebase; it's kept in another repository in encrypted form.
On our current deployment, without Docker, we update the codebase with git, and then we manually copy the sensitive information via SSH.
The Docker images will be stored in a private, self-hosted registry.
I can think of some different approaches, but all of them have some drawbacks:
1. Include the sensitive information in the Docker images at build time. This is certainly the easiest one; however, it makes them available to anyone with access to the image (I don't know if we should trust the registry that much).
2. Like 1, but having the credentials in a data-only image.
3. Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now. This is very convenient too, but then we can't spin up new servers easily (maybe we could use something like etcd to synchronize them?)
4. Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
PS: I've done some research but couldn't find anything similar to my problem. Other questions (like this one) were about sensitive information needed at build time; in our case, we need the information at run time.
I've used your options 3 and 4 to solve this in the past. To rephrase/elaborate:
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now.
I use config management (Chef or Ansible) to set up the credentials on the host. If the app takes a config file needing API tokens or database credentials, I use config management to create that file from a template. Chef can read the credentials from encrypted data bag or attributes, set up the files on the host, then start the container with a volume just like you describe.
Note that in the container you may need a wrapper to run the app. The wrapper copies the config file from whatever the volume is mounted to wherever the application expects it, then starts the app.
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
Yes, it's cumbersome to pass a bunch of env variables using -e key=value syntax, but this is how I prefer to do it. Remember the variables are still exposed to anyone with access to the Docker daemon. If your docker run command is composed programmatically it's easier.
If not, use the --env-file flag as discussed here in the Docker docs. You create a file with key=value pairs, then run a container using that file.
$ cat >> myenv << END
FOO=BAR
BAR=BAZ
END
$ docker run --env-file myenv <image>
That myenv file can be created using chef/config management as described above.
If you're hosting on AWS you can leverage KMS here. Keep either the env file or the config file (that is passed to the container in a volume) encrypted via KMS. In the container, use a wrapper script to call out to KMS, decrypt the file, move it in to place and start the app. This way the config data is not exposed on disk.

How to push an enterprise-wide PowerShell profile customization?

As I've explained in my other question, I'm busy setting up a PowerShell module repository in my enterprise.
My plan is to have a master repository (r/w access to a limited group of people) and slave repositories (read only access to everyone). I need multiple repositories because clients are located in different security zones and I can't have a central location reachable by all clients.
For this reason, I need to configure the PowerShell profile of the clients so that they can point to the correct repository to find the modules. I would like to define a $PowerShellRepositoryPath environment variable for this purpose.
Also, the profile needs to be customized in order for it to execute a script located in the repository (thus where $PowerShellRepositoryPath points to) when PowerShell starts (my goal here is to automatically add the latest module versions to the PSModulePath of the clients on startup).
We have a mixed environment with domain members and stand-alone servers in different network zones.
How would you proceed? Is it possible to push that variable and the profile via a GPO for domain members? Would customizing the $Profile variable via GPO be an option?
What about the standalone servers?
Edit:
I think that for creating the environment variable, I'll just use a GPO to create it and then use it in PowerShell via $env:variableName. For non-domain situations, I'll probably have to use a script, though.
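For what it's worth, a minimal sketch of that script route for the standalone servers; the variable name is the one from the question and the UNC path is just a placeholder.
# Set the repository path machine-wide so $env:PowerShellRepositoryPath
# resolves in every session and for every user on the standalone server.
[System.Environment]::SetEnvironmentVariable('PowerShellRepositoryPath', '\\repo-server\PowerShellRepository', 'Machine')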
I am not sure about pushing $profile via GPO. But I'd simply put a logon script in place that copies the profile script from a network location, based on the user's group/security membership.
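Something along these lines, as a rough sketch: the share path is made up and the group-based selection of which profile to copy is left out.
# Hypothetical logon script: copy the central profile to the user's profile path.
$source  = '\\repo-server\profiles\Microsoft.PowerShell_profile.ps1'   # placeholder path
$destDir = Split-Path -Parent $PROFILE.CurrentUserCurrentHost
if (-not (Test-Path $destDir)) {
    New-Item -ItemType Directory -Path $destDir -Force | Out-Null
}
Copy-Item -Path $source -Destination $PROFILE.CurrentUserCurrentHost -Force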
Well, if you're going to change the path to the modules, I'd put a file in the repository (say current.txt) that contains the name of the current module (or the current file path, whichever you are changing). Then have the $profile script read the contents of that file and set the variable accordingly. This way you don't have to screw around with updating the profile scripts: just update current.txt in the central repository with the path (or the partial path, the part that changes, or the filename, or whatever), and when it replicates to the client repositories, every PowerShell profile picks up the latest modules the next time the profile script runs.
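To make the idea concrete, here is a rough sketch of what that profile snippet might look like; current.txt and $env:PowerShellRepositoryPath are the names already used in this thread, everything else is illustrative.
# Read the current module folder name from the replicated repository and
# prepend it to PSModulePath for this session.
$repo       = $env:PowerShellRepositoryPath
$current    = (Get-Content -Path (Join-Path $repo 'current.txt') -TotalCount 1).Trim()
$modulePath = Join-Path $repo $current
if ($env:PSModulePath -notlike "*$modulePath*") {
    $env:PSModulePath = "$modulePath;$env:PSModulePath"
}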
Out of curiosity, why not just overwrite the module files in the client repositories with the latest version? If you did it that way, all clients would always have the latest versions, and you wouldn't have to update the $profile scripts.
Alternatively, you could always write another script to replace the $profile script on all machines. I think the first route I suggested would be the cleanest way of doing what you are after.
As far as the GPO thing goes, I don't believe you can do this. There is no GPO defined to control what is in the profile script. I would say you could maybe do it with a custom ADM file, but the profile script path is not controlled by the registry, so no go there.

Design Advice: Sending signals to daemons through HTTP

I'm using Apache on Ubuntu. I have a Perl script which basically reads the file names in a directory, rewrites a text file, and then sends a signal to a daemon. How can this be done, as securely as possible, through a web page?
Actually, I can run the simplified CGI in the code below, but not if I remove the comments. I'm looking for advice considering any of:
Using HTTP Requests?
How about Apache file permissions on the directory shown in code?
Is htaccess enough to enable user/pass access to the cgi?
Should I use a database instead of writing to a file, and run a cron job that queries the db, with permission granted to write and send the signal?
Granting as few permissions as possible to the web server.
Should I set up a VPN?
#!/usr/bin/perl -wT
use strict;
use CGI;
#my @fileList = </home/user/*>; #read a directory listing
my $query = CGI->new();
print $query->header( "text/html" ),
$query->p( "FirstFileNameInArray" ),
#$query->p( $fileList[0] ), #output the first file in directory
$query->end_html;
Presumably, the error you're getting from the commented lines is a "permission denied" when trying to read the /home/user directory. The way to fix this is (surprise, surprise) to give the apache user[1] permission to read that directory. There are three primary approaches to doing this:
1. In most environments, there's really no good reason to hide all filenames within a user's home directory, so you could make the directory world-readable with chmod a+r /home/user. Unless you have a specific reason to prevent the general public from knowing the names of the files in the user's home directory, I'd tend to recommend this approach.
2. If you want to be a bit more restrictive about it, you could change /home/user to be owned by a group which the apache user belongs to (or add the apache user to the group that currently owns /home/user) and then set /home/user to be group-readable. This will make it accessible to all members of that group, but not the general public.
3. If you need to have standard filesystem permissions applied to web access, you can look at configuring suexec so that individual requests can take on permissions of users other than the apache user. This is normally the user who owns the code which is being run to handle the request (e.g., in this case, the user who owns your directory-listing script), but, if you're using htaccess-based authentication, it may be possible to configure suexec to decide which user's permissions to take on based on what user you log in as. (I avoid suexec myself, so I'm not 100% certain if this can be done and have no idea how to go about it if it can.)
[1] ...by which I mean the user that apache is running as; depending on your system config, this user may be named "apache", "httpd", "nobody", "www-data", or something else entirely.