Need an opinion on a method for pulling data from a file with Perl

I am having a conflict of ideas with a script I am working on. I have to read a bunch of lines from VMware files. As of now I just use SSH to probe each file for every virtual machine while the files stay on the server. The reason I now think this is a problem is that I have 10 virtual machines and about 4 files that I probe for file paths and such, and a new SSH channel is opened every time I refer to the ssh object I have created using Net::OpenSSH. When all is said and done I have probably opened about 16-20 SSH connections. Would it just be easier in a lot of ways to SCP the files over to the machine that needs to process them, and then do most of the work on the local side? The script I am making is a backup script for ESXi, and it will end up storing the files I need to read from anyway.
Any opinion would be most helpful.
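For concreteness, here is a rough sketch of the copy-then-parse approach I am considering, reusing a single Net::OpenSSH connection and pulling the files down with scp_get (the host name and paths below are made up):

use strict;
use warnings;
use Net::OpenSSH;

# Hypothetical host and file list; the real paths come from the VM inventory.
my $host  = 'esxi.example.com';
my @files = ('/vmfs/volumes/datastore1/vm1/vm1.vmx',
             '/vmfs/volumes/datastore1/vm1/vm1.vmsd');

# One connection, reused for every transfer.
my $ssh = Net::OpenSSH->new($host);
$ssh->error and die "Cannot connect to $host: " . $ssh->error;

for my $remote (@files) {
    (my $name = $remote) =~ s{.*/}{};    # keep just the file name
    $ssh->scp_get($remote, "backups/$name")
        or die "scp_get $remote failed: " . $ssh->error;
}

# From here on, the script would parse the local copies instead of probing over SSH.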

If the VMs do the work locally, it's probably better in the long run.
In the short term, roughly the same amount of resources will be used either way, but if you were to migrate these instances to other hardware, then of course you'd see gains from distributing the processing.
Also from a maintenance perspective, it's probably more convenient for each VM to host the local process, since I'd imagine that if you need to tweak it for a specific box, it would make more sense to keep it there.
Aside from the scalability benefits, there aren't really any other pros or cons.

Related

usb local and remote port configuration via cmd batch

(xpost from superuser with no answers.)
I am trying to reconfigure a known (virtual?) com port on multiple computers on a local network using a batch file.
A USB device we use is always installed as COM9 and always comes in at the default 9600 baud, and we have to manually reconfigure each station to 57600 baud.
I already have this batch file renaming printers, setting DNS servers, killing and starting tasks, copying files, and a whole lot more. I've experimented with mode, but I'm either not using it properly or it can't do what I want.
I know I can use the GUI, but for the sake of speed, I want the batch to do it.
Sorry if this is a copy, but I'm seeing if anyone has an angle for me, I'm not afraid of personal research, but I'm running into dead ends with no leads.
Ask if you need any clarifications, and thanks in advance.
PowerShell is okay too, if I know what I need and can still stay in the cmd environment.
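For reference, the form of mode invocation I've been experimenting with looks like this (assuming the port really is COM9 on every station):

rem set the port to 57600 baud, no parity, 8 data bits, 1 stop bit
mode com9: baud=57600 parity=n data=8 stop=1
rem check the current settings afterwards
mode com9: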

How to run powershell script remotely using chef?

I have a PowerShell script on the Chef server that needs to run on a remote Windows server. How can I run this PowerShell script from the Chef server on the remote Windows server?
Chef doesn't do anything like this. First, Chef Server can never remotely access servers directly; all it does is store data. Second, Chef doesn't really do "run a thing in a place right now". We offer workstation tools like knife ssh and knife winrm as simplistic wrappers, but they aren't made for anything complex. The Chef-y way to do this would be to make a recipe and run your script using the powershell_script resource.
Does this mean Chef is also running on the Windows server?
If yes, why not use psexec from the Windows PsTools?
https://learn.microsoft.com/en-us/sysinternals/downloads/psexec
Here is my understanding of what you are trying to achieve. If I'm wrong then please correct me in a comment and I will update my answer.
You have a powershell script that you need to run on a specific server or set of servers.
It would be convenient to have a central management solution for running this script instead of logging into each server and running it manually.
Ergo, you either need to run this script in many places when a condition isn't met (such as a file being missing), or you need to run this script often, or you need this script to run with a certain timing relative to other processes you have going on.
Without knowing precisely what you're trying to achieve with your script, the best solution I know of is to write a cookbook and do one of the following:
If your script is complex, place it in your cookbook/files folder (assuming the script will be identical on all computers it runs on) or in your cookbook/templates folder (if you need to inject information into it at write time). You can then write the .ps1 file to the local computer during a Chef converge with one of the following code snippets. After you write it to disk you will also have to call it with one of the commands in the next bullet.
Monomorphic file:
cookbook_file '<destination>' do
  source '<filename.ps1>'
  <other options>
end
Options can be found at https://docs.chef.io/resource_cookbook_file.html
Polymorphic file:
template '<destination>' do
  source '<template.ps1.erb>'
  variables(<hash of variables and values>)
  <other options>
end
Options can be found at https://docs.chef.io/resource_template.html
If your script is a simple one-liner you can instead use powershell_script, powershell_out! or execute. powershell_out! has all the same options and features as the shell_out! command, with the added advantage that your converge will pause until it receives an exit status for the command, if that is desirable. The documentation on using it is a bit spottier, though, so spend some time experimenting with it and googling.
https://docs.chef.io/resource_powershell_script.html
https://docs.chef.io/resource_execute.html
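For example, here is a minimal sketch that drops a script on the node and then runs it during the converge; the destination path and script name are made up, so substitute your own:

cookbook_file 'C:\scripts\configure_widget.ps1' do
  source 'configure_widget.ps1'
end

powershell_script 'run configure_widget' do
  code '& "C:\scripts\configure_widget.ps1"'
end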
Whichever option you end up going with, you will probably want to guard your resource with conditions on when it should not run, such as when a file already exists, a registry key is set, or whatever else your script changes that you can check. If you truly want the script to execute on every single converge then you can skip this step, but that is a code smell and I urge you to reconsider your plans.
https://docs.chef.io/resource_common.html#guards
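For instance, guarding the execution sketched above on a marker file might look like this (the marker file name is made up; use whatever your script actually changes):

powershell_script 'run configure_widget' do
  code '& "C:\scripts\configure_widget.ps1"'
  not_if { ::File.exist?('C:\scripts\configure_widget.done') }
end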
It's important to note that this is not an exhaustive list of how to run a powershell script on your nodes, just a collection of common patterns I've seen.
Hope this helped.

Two master instances on same database

I want to use PostgreSQL on Windows Server 2012 R2 for one of our projects that requires 24/7 uptime.
I would like to ask the community if I can have 2 master instances on 2 different servers, A and B, that 'work' on the same DB located on shared file storage on the LAN. Only one master instance, on server A, will be online at a time; when it goes offline for some reason, a PowerShell script will (I suppose) recognize that the PostgreSQL service has stopped and will start the service on server B. The same script will continuously check that only one service on servers A and B is running, to avoid conflicts.
I'd like to ask if this is possible, or if there is a better approach for my configuration.
(I can't use replication, because when server A shuts down, server B is in read-only mode, which is something I don't want.)
If you manage to start two instances of PostgreSQL on the same data directory, serious data corruption will happen.
Normally there is a postmaster.pid file that prevents that, but a PostgreSQL server process on a different machine that accesses the same file system will happily unlink that after spewing some log messages, thinking it was left behind from a crash.
So you are really walking on thin ice with a solution like that.
One other issue that you didn't think of is the script that is supposed to check whether the server is still running. What if that script fails, because for example the network connection between the two servers is down, but the server is still up and running happily? Such a “split brain” scenario will cause data corruption with your setup.
Another word of caution: since you seem to be using Windows (Powershell?), you probably envision a CIFS file system when you are talking of shared storage. A Windows “network share” is not a reliable file system — last time I checked, it did not honor _commit.
Creating a reliable failover cluster is harder than you think, and I'd recommend that you check existing solutions before you try to roll your own.

Copying updated Files between Networks

Is there a way that I can copy only updated files from one network to another? The networks aren't connected in any way, so the transfer will need to go via CD (or USB, etc.)
I've had a look at things like rsync, but that seems to require the two networks to be connected.
I am copying from a Windows machine, but it's going onto a network with both Windows and Linux machines (although Windows is preferable due to the way the network is set up).
You can rsync from the source to the USB drive, use the USB drive as a buffer, and then rsync from the USB drive to the target. To benefit from the rsync algorithm and reduce the amount of copied data, you need to keep the data on the USB drive between runs.
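A rough sketch of the two runs, assuming rsync is available on both sides and the USB drive is mounted at /mnt/usb (all paths are made up):

# On the source network: refresh the buffer on the USB drive.
rsync -av /data/source/ /mnt/usb/buffer/
# Carry the drive across, then on the target network:
rsync -av /mnt/usb/buffer/ /data/target/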

Emacs-client - what's the minimal installation?

Let's say I have an Emacs server running on some remote machine, with all the libraries and software necessary for running my application.
Then I want several clients to connect to that remote machine, using Emacs-client. Does each client need a full Emacs installation, or is there a minimal installation that is just enough to communicate with the remote server, where all the action is?
Could this (Emacs-)client installation be so minimal, that almost all software-updates can be done on the server, without affecting the Emacs-clients?
Is there a reason not to run the clients remotely as well, and simply use a local display? That way, pretty much all you need on the local machines is the ssh client and the X Window server.
ssh -X (user)@(server) "emacsclient -c"
Edits for the comments:
This command starts a new client to connect to an existing Emacs server (which it assumes is already running). You can use "emacsclient -a '' -c" to automatically start emacs --daemon if there is no existing server, but I don't know whether you want the connecting user to be starting the server.
In fact, I'm pretty unsure about the whole multi-user side of this to be honest, as I've never done that before. Authentication for the above is handled by ssh, but there may well be subsequent permission issues to deal with, or similar, when the server and the clients are started by different users.
This approach should be possible with Windows/Cygwin as client and/or server, as Cygwin provides Emacs, OpenSSH, and X.org packages. (I regularly use Windows/Cygwin as a local display for Emacs running on Linux.) It may be harder to set up, though, and any permissions issues are probably different when you're using Cygwin.
I'm less sure how this would work without Cygwin. NTEmacs certainly won't talk to X.org, so I imagine you'd be terminal based in that instance. (There are probably other options, but Cygwin sounds to me like the best-integrated approach to using all of Emacs, SSH, and X on Windows).
Lastly, I imagine you're probably getting your "Connection refused" error because localhost is not running a sshd daemon? I would say that configuration of ssh is outside the scope of this question, but there are lots of resources online for that.
Depending on what you're trying to achieve, you may be able to use a combination of Emacs and Screen. By starting up Emacs from Screen on the remote machine and detaching from it, you can subsequently re-attach from a different machine that doesn't have Emacs. Again, whether this will work for you or not depends on what you're trying to do; however, for many Emacs use-cases, this can be very effective. If you're not familiar with using Screen in this manner, here is some reading material:
screen - The Terminal Multiplexer
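As a rough sketch, a typical session might look like this (the session name is arbitrary):

# On the remote machine: start Emacs inside a named screen session.
screen -S myemacs emacs -nw
# Detach with C-a d, log out, and later (perhaps from another machine):
ssh user@remote-host
screen -r myemacs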
I am not sure that would be possible. emacsclient uses tramp to connect to a remote server, and just by looking at the number of requires in the tramp elisp files (41) it seems very unlikely. You can try it yourself with the following:
zgrep -oE "\(require '[a-z-]+\)" *el.gz | sed -e 's%[a-z0-9-]\+\.el\.gz:%%g' | sort | uniq -cu | wc -l
I'm not an expert in emacsclient, but I don't think it was designed to do what you're looking for. I think the general use case is that emacsclient allows you to redirect new requests to open a file with Emacs to a persistent Emacs process, to avoid what may be a bit of an overhead in startup time. You seem to be looking for more of a true client/server relationship.
I think that to meet the goal you're aiming at, you'll probably need to look a little outside Emacs; it's probably a project unto itself ('emacsRemoteClient'). It boils down to one of two models: one, the file you want to edit would need to have its path sent over to the server machine so that Emacs could do some sort of remote TRAMP access and then spawn the X window locally (using the local X environment, or requiring an X server on Windows); or two, transferring the file to some temp location on the server box and again spawning the remote X window locally (followed by syncing the changes between the temp and local file).
It would be cool to have something like that... but I suspect it'll involve a bit of work. Maybe we just need a version of Emacs written in JavaScript so it can live in the cloud or in your browser... oh, to have Emacs keybindings in the browser ;-)
-Steve