Schedule a script to attach a CSV file report to a data source in ServiceNow - scheduled-tasks

I need a scheduled script that automatically attaches a CSV file report to a data source in ServiceNow.
How can we achieve this scenario?

Well, this can be achieved in multiple ways. It's a bit of a vague description you have there, so I'll just drop a few general ideas for you:
If you don't mind turning things around, you could have an external program push the file directly to ServiceNow and then run the associated transform map (a minimal sketch of this approach is shown below):
https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/import-sets/task/t_PostCSVOrExcelFilesToImportSet.html
If you have an FTP server, you can have a scheduled script that fetches the file from it and runs the transform:
https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/import-sets/task/t_ScheduleADataImport.html
You could use the MID Server application to run your own custom logic for retrieving the file data. This is probably the most complex to set up, but it also gives you the biggest advantages, such as keeping your file encrypted. Basically, the MID Server checks every couple of seconds for a piece of code to execute (called a probe); for example, you could use it to trigger a PowerShell script sitting on your server.
I'm sure there are other options as well. Good luck!
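For the first option, a minimal sketch of an external program pushing the CSV to an import set could look like the Python below. The instance URL, credentials, file name and import set table name are all placeholders; check the linked documentation for the exact endpoint and parameters your release supports.

    # Sketch: push a CSV report to a ServiceNow import set and run its transform map.
    # Instance URL, credentials, file name and import set table are placeholders.
    import requests

    INSTANCE = "https://your-instance.service-now.com"   # placeholder
    IMPORT_TABLE = "u_csv_report_import"                 # placeholder import set table

    with open("report.csv", "rb") as f:
        resp = requests.post(
            INSTANCE + "/sys_import.do",
            params={
                "sysparm_import_set_tablename": IMPORT_TABLE,
                "sysparm_transform_after_load": "true",  # run the associated transform map
            },
            headers={"Content-Type": "text/csv"},
            data=f,
            auth=("import.user", "password"),            # placeholder basic-auth credentials
            timeout=60,
        )
    resp.raise_for_status()
    print(resp.status_code)

A scheduled task on the sending side (cron, Windows Task Scheduler, etc.) can then run this whenever the report is regenerated.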

Related

POST an HTML form to a PowerShell script

I just need a plain static .html page with a form that POSTs to a PowerShell script.
I've seen plenty of material on the PowerShell Invoke-WebRequest cmdlet, but in all of it PowerShell is the one initiating the HTTP request (and then handling the HTTP response).
Thank you!
The short answer is that you cannot POST directly to a PowerShell script. When you POST to a website you are passing arguments to the web server, which are then presented to code on that web server (the target of your POST request) that the web server is capable of executing. Web servers do not understand PowerShell (unless Microsoft has implemented this, which a few quick googles suggest they haven't).
That being said, your ultimate goal is likely that you want to consume data sourced from a form via a PowerShell script. You will need to implement a backend on the web server to consume the POST request and pass it down to the operating system level to be run via PowerShell. This is generally not a good idea, but if you are doing it for an internal site to get something running quickly then so be it.
Here is the process to call a PowerShell script from ASP.NET: http://jeffmurr.com/blog/?p=142
You could approach this problem in many other ways. You could write your backend site to save the data from the POST request to a file, then come along and parse that file on a schedule with PowerShell. You could use a database in the same manner, or you could create a trigger in the database to run the script each time a row is appended.
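To make the file-dropping idea concrete, here is a rough sketch (in Python rather than ASP.NET, purely for illustration) of a tiny backend that writes each POST body to a file that a scheduled PowerShell job could later parse. The port and output directory are hypothetical, and you would want authentication and validation before using anything like this.

    # Sketch: accept form POSTs and dump each request body to a file, so a
    # scheduled PowerShell script can parse them later. Port and directory are
    # hypothetical; there is no authentication or validation here.
    import os
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    OUT_DIR = r"C:\inbox"  # hypothetical drop directory the PowerShell job reads

    class FormDumpHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            # One file per request, named by timestamp.
            path = os.path.join(OUT_DIR, "form-%d.txt" % int(time.time() * 1000))
            with open(path, "wb") as f:
                f.write(body)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"received")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), FormDumpHandler).serve_forever()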
I suspect that if you work down one of these pathways you will ultimately find that the technology you are using on the backend (like ASP.NET or PHP or JavaScript) is capable of doing the work you need done, and that you would have far fewer moving parts if you stuck with one of those. Don't be afraid to learn something new; jumping to JavaScript from PowerShell is not that difficult.
And the world moves too fast. Here is a NodeJS-like implementation of a web server in PowerShell:
https://gallery.technet.microsoft.com/scriptcenter/Powershell-Webserver-74dcf466

How to periodically update a table in Postgresql via data retrieved from a php API using cronjob?

I have a database in PostgreSQL in which a few tables are supposed to be updated regularly. The data is retrieved from an external API written in PHP.
Basically, the idea is to update a table of meteo data every day with the data collected from a meteo station. My primary idea is to do this job using cron, which will automatically update the data. In this case I probably need to write a cronjob in the form of a script and then run it on the server.
Being a newbie, I find it a little difficult to deal with. Please suggest the best approach.
This works pretty much as you described and does not get any simpler.
You have to:
Write a client script (possibly in PHP) that will pull data from the remote API. You can use the cURL extension or whatever you like (see the sketch after these steps).
Make the client script update the tables. Consider saving history, not just overwriting current values.
Make the client script log its operation properly. You will need to know how it is doing once deployed to production.
Test that your script runs successfully on the server.
Add (or ask your server admin to add) a line to the crontab that will execute your script.
PROFIT! :)
Good luck!
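For illustration, a minimal client script along those lines could look like this (sketched in Python with requests and psycopg2 rather than PHP; the API URL, connection string, table and column names are all placeholders):

    # Sketch: pull readings from the remote API and append them to PostgreSQL.
    # URL, connection string, table and column names are placeholders.
    import requests
    import psycopg2

    API_URL = "https://example.com/api/meteo"          # placeholder endpoint
    DSN = "dbname=meteo user=meteo password=secret"    # placeholder connection string

    def main():
        data = requests.get(API_URL, timeout=30).json()
        conn = psycopg2.connect(DSN)
        try:
            with conn, conn.cursor() as cur:           # commits on success
                for row in data:
                    # Insert rather than overwrite, so history is kept.
                    cur.execute(
                        "INSERT INTO meteo_readings (station_id, measured_at, temperature)"
                        " VALUES (%s, %s, %s)",
                        (row["station_id"], row["measured_at"], row["temperature"]),
                    )
            print("inserted %d rows" % len(data))      # minimal logging
        finally:
            conn.close()

    if __name__ == "__main__":
        main()

A crontab entry such as 0 * * * * /usr/bin/python3 /opt/meteo/update_meteo.py >> /var/log/meteo-update.log 2>&1 (paths hypothetical) would then run it hourly and capture its output for the logging step above.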

importing updated files into a database

I have files that are updated every 2 hours. I have to detect the files automatically and insert the extracted information from them into a database.
Our DBMS is Postgresql and programming language is Python. How would you suggest I do that?
I want to make use of a DAL (Database Abstraction Layer) to make the connection between the files and the database, and use PostgreSQL LISTEN/NOTIFY to detect the new files. If you agree with this approach, please tell me how I can use the LISTEN/NOTIFY functions to detect the files.
Thank you
What you need is to write a script that stays running as a dæmon, using a file system notify API to run a callback function when the files change. When the script is notified that the files change it should connect to PostgreSQL and do the required work, then go back to sleep waiting for the next change.
The only truly cross platform way to watch a directory for changes is to use a delay loop to poll os.listdir and os.stat to check for new files and updated modification times. This is a waste of power and disk I/O; it also gets slow for big sets of files. If your OS reliably changes the directory modification time whenever files within the directory change you can just os.stat the directory in a delay-loop, which helps.
It's much better to use an operating system specific notification API. Were you using Java I'd tell you to use the NIO2 watch service, which handles all the platform specifics for you. It looks like Watchdog may offer something similar for Python, but I haven't needed to do directory change notification in my Python coding so I haven't tested it. If it doesn't work out you can use platform-specific techniques like inotify/dnotify for Linux, and the various watcher APIs for Windows.
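If Watchdog does work out, the daemon's skeleton would look roughly like this (an untested sketch; the watched path and the database work are placeholders):

    # Rough skeleton of a directory-watching daemon using the watchdog package.
    # The watched path and the processing logic are placeholders.
    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    WATCH_DIR = "/data/incoming"  # placeholder

    class NewFileHandler(FileSystemEventHandler):
        def on_created(self, event):
            if event.is_directory:
                return
            # Connect to PostgreSQL and insert the parsed file contents here.
            print("new file:", event.src_path)

    if __name__ == "__main__":
        observer = Observer()
        observer.schedule(NewFileHandler(), WATCH_DIR, recursive=False)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()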
See also:
How do I watch a file for changes?
Python daemon to watch a folder and update a database
You can't use LISTEN/NOTIFY because that can only send messages from within the database and your files obviously aren't in there.
You'll want to have your Python script scan the directory the files are in and check their modification time (mtime). If they have been updated, you'll need to read in the files, parse the data and insert it into the db. Without knowing the format of the files, there's no way to be more specific.
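A bare-bones sketch of that polling approach, with the parsing left as a placeholder since the file format isn't known (the directory, connection string and table name are also placeholders):

    # Sketch: poll a directory, detect new/updated files by mtime, and load them
    # into PostgreSQL. Directory, connection string, table and parsing are placeholders.
    import os
    import time
    import psycopg2

    WATCH_DIR = "/data/incoming"                    # placeholder
    DSN = "dbname=mydb user=me password=secret"     # placeholder
    seen = {}                                       # path -> last processed mtime

    def load_file(conn, path):
        with conn, conn.cursor() as cur:            # commits on success
            with open(path) as fh:
                for line in fh:
                    # Replace with real parsing for your file format.
                    cur.execute("INSERT INTO readings (raw_line) VALUES (%s)",
                                (line.strip(),))

    conn = psycopg2.connect(DSN)
    while True:
        for name in os.listdir(WATCH_DIR):
            path = os.path.join(WATCH_DIR, name)
            mtime = os.stat(path).st_mtime
            if seen.get(path) != mtime:
                load_file(conn, path)
                seen[path] = mtime
        time.sleep(60)  # the files only change every couple of hours, so this is plenty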

How can I debug a Perl CGI script?

I inherited a legacy Perl script from an old server which is being removed. The script needs to be implemented on a new server. I've got it on the new server.
The script is pretty simple; it connects via expect & ssh to network devices and gathers data. For debugging purposes, I'm only working with the portion that gathers a list of the interfaces from the device.
The script on the new server always shows me a page within about 5 seconds of reloading it. Rarely, it includes the list of interfaces from the remote device. Most commonly, it contains all the HTML elements except the list of interfaces.
Now, on the old server, sometimes the script would take 20 seconds to output the data. That was fine.
Based on this, it seems that Apache on the new server is displaying the data before the Perl script has finished returning its data, though that could certainly be incorrect.
Additional Information:
Unfortunately I cannot post any code - work policy. However, I'm pretty sure it's not a problem with expect. The expect portions are written as expect() or die('error msg') and I do not see the error messages. However, if I set the expect timeout to 0, then I do see the error messages.
The expect timeout value used in the script normally is 20 seconds ... but as I mentioned above, apache displays the static content from the script after about 5 seconds, and 95% of the time does not display the content that should retrieved from expect. Additionally, the script writes the expect content to a file on the drive - even when the page does not display it.
I just added my Troubleshooting Perl CGI scripts guide to Stack Overflow. :)
You might try CGI::Inspect. I haven't needed to try it myself, but I saw it demonstrated at YAPC, and it looked awesome.

Detect a file in transit?

I'm writing an application that monitors a directory for new input files by polling the directory every few seconds. New files may often be several megabytes, and so take some time to fully arrive in the input directory (e.g. when copied from a remote share).
Is there a simple way to detect whether a file is currently in the process of being copied? Ideally any method would be platform and filesystem agnostic, but failing that specific strategies might be required for different platforms.
I've already considered taking two directory listings separated by a few seconds and comparing file sizes, but this introduces a time/reliability trade-off that my superiors aren't happy with unless there is no alternative.
For background, the application is being written as a set of Matlab M-files, so no JRE/CLR tricks I'm afraid...
Edit: files are arriving in the input directory via a straight move/copy operation, either from a network drive or from another location on a local filesystem. This copy operation will probably be initiated by a human user rather than another application.
As a result, it's pretty difficult to place any responsibility on the file provider to add control files or use an intermediate staging area...
Conclusion: it seems like there's no easy way to do this, so I've settled on a belt-and-braces approach (sketched below) - a file is ready for processing if:
its size doesn't change in a certain period of time, and
it's possible to open the file in read-only mode (some copying processes place a lock on the file).
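A rough sketch of that combined check (shown in Python rather than Matlab, purely to illustrate the logic; the path and the stability window are arbitrary):

    # Sketch of the belt-and-braces readiness check: a file counts as ready once
    # its size has been stable for a while AND it can be opened for reading.
    # The path and the stability window are arbitrary examples.
    import os
    import time

    def is_ready(path, stable_seconds=10):
        try:
            size_before = os.path.getsize(path)
            time.sleep(stable_seconds)
            if os.path.getsize(path) != size_before:
                return False              # still growing
            with open(path, "rb"):
                pass                      # fails while some copy processes hold a lock
            return True
        except OSError:
            return False

    print(is_ready("/incoming/report.dat"))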
Thanks to everyone for their responses!
The safest method is to have the application(s) that put files in the directory first put them in a different, temporary directory, and then move them to the real one (which should be an atomic operation even when using FTP or file shares). You could also use naming conventions to achieve the same result within one directory.
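On the producer side, that pattern is just a copy into a staging location on the same filesystem followed by a rename into the watched directory, e.g. (sketched in Python; paths are hypothetical):

    # Producer-side sketch: copy into a staging directory on the same filesystem,
    # then rename into the watched directory so the file appears all at once.
    # Paths are hypothetical.
    import os
    import shutil

    SOURCE = "/mnt/share/report.dat"         # slow source (network share, etc.)
    STAGING = "/data/staging/report.dat"     # same filesystem as the watched dir
    FINAL = "/data/incoming/report.dat"      # directory the consumer polls

    os.makedirs(os.path.dirname(STAGING), exist_ok=True)
    shutil.copy2(SOURCE, STAGING)            # the slow copy happens out of sight
    os.replace(STAGING, FINAL)               # atomic move on the same filesystem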
Edit:
It really depends on the filesystem, on whether its copy functionality even has the concept of a "completed file". I don't know the SMB protocol well, but if it has that concept, you could write an app that exposes an SMB interface (or patch Samba) and an API to get notified for completed file copies. Probably a lot of work though.
This is a middleware problem as old as the hills, and the short answer is: no.
The two 'solutions' put the onus on the file-uploader: (1) upload the file to a staging directory and then move it into the destination directory; (2) upload the file, and then create/upload a 'ready' file that indicates the state of the content file.
The first one is the better of the two, but both are inelegant. The truth is that better communication media exist than the filesystem. Consider using some IPC that involves only a push or a pull (and not both, as the filesystem does), such as an HTTP POST, a JMS or MSMQ queue, etc. Furthermore, this can also be synchronous, allowing the process receiving the file to acknowledge the content, even check it for worthiness, and hand the client a receipt - this is the righteous road to non-repudiation. Follow this, and you will never suffer arguments over whether a file was or was not delivered to your server for processing.
M.
One simple possibility would be to poll at a fairly large interval (2 to 5 minutes) and only acknowledge the new file the second time you see it.
I don't know of a way in any OS to determine whether a file is still being copied, other than maybe checking if the file is locked.
How are the files getting there? Can you set an attribute on them as they are written and then change the attribute when the write is complete? This would need to be done by the thing doing the writing ... which sounds like it isn't an option.
Otherwise, caching the listing and treating a file as new if it has the same file size for two consecutive listings is the best way I can think of.
Alternatively, you could use the modified time on the file - the file has to be new and have a modified time that is at least x in the past. But I think this will be about equivalent to caching the listing.
If you are polling the folder every few seconds, it's not much of a time penalty, is it? And it's platform agnostic.
Also, Linux only: http://www.linux.com/feature/144666
Like cron but for files. Not sure how it deals with your specific problem - but it may be of use?
What is your OS? On Unix you can use the "lsof" utility to determine whether a user has the file open for writing. Apparently somewhere in the MS Windows Process Explorer there is the same functionality.
Alternatively, you could just try an exclusive open on the file and bail out if this fails. But this can be a little unreliable, and it's easy to tread on your own toes.