I want to build an external editor for scripts stored in a CRM.
I have an API that exposes the script engine and returns a list of scripts containing the script id, script name, content, and so on.
The editor will be Visual Studio Code plus my extension. I managed to build the extension, get the data, store the scripts in a folder, and name them from the response. The problem is the update: to send data back to the server I need to use the id, not the name.
Question: is there any kind of container in the workspace where I can store all the data related to every script, such as its id?
Something like settings for every script containing scriptName: id, date, owner...
Maybe https://code.visualstudio.com/api/extension-capabilities/common-capabilities#data-storage can help? You can store data either locally, in the workspace, or globally, in the user's storage space. It can be simple key/value pairs, or your own format.
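For instance, here is a minimal sketch using `context.workspaceState` (the workspace-scoped Memento from that page). The `ScriptMeta` shape and the `scriptMeta:` key prefix are just my own naming, not anything from your API:

```typescript
import * as vscode from 'vscode';

// Hypothetical shape of the metadata returned by the CRM API.
interface ScriptMeta {
  id: string;
  name: string;
  owner?: string;
  modifiedOn?: string;
}

export function activate(context: vscode.ExtensionContext) {
  // workspaceState is a Memento scoped to the current workspace.
  const store = context.workspaceState;

  // After downloading a script, remember its name -> metadata mapping.
  const rememberScript = (meta: ScriptMeta) =>
    store.update(`scriptMeta:${meta.name}`, meta);

  // When pushing an update, look the id back up from the file name.
  const lookupScript = (name: string) =>
    store.get<ScriptMeta>(`scriptMeta:${name}`);

  // Example usage:
  //   rememberScript({ id: '42', name: 'OnAccountCreate' });
  //   lookupScript('OnAccountCreate')?.id;
}
```

For larger blobs you could also write your own files under `context.storageUri` (workspace) or `context.globalStorageUri` (user) instead of the Memento.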
I am trying to get a list of data sources that a Power BI file is using. I have seen solutions online where I can use the ReportingService module to get a list, but this only works when the Power BI report is published online. Is there a solution that would work for a local file?
Here is the situation.
A user gives me a Power BI file. To get a list of data sources, I currently have to open it and look at the sources manually. Ideally, I would like to use PowerShell to get this list.
There isn't an API that can access the desktop application. You would have to brute force it.
The PBIX file is basically a zip file which contains separate files with JSON information. You would have to follow these steps:
Use Expand-Archive to get the files out of the PBIX (not sure if you will need to change the file extension first).
Read the "Connections" file (which is JSON). It will have the various connection strings used by the model.
You can also do this manually by changing the file extension to .zip, opening the zip file directly, and looking at the Connections file in Notepad.
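If you would rather script it than click through, here is a rough Node/TypeScript sketch of the same idea. The adm-zip package and the BOM handling are my assumptions; the "Connections" entry name comes from the step above, and not every .pbix necessarily contains it:

```typescript
import AdmZip from 'adm-zip'; // npm install adm-zip (assumption: any zip library will do)

// A .pbix file is a zip archive; the "Connections" entry is JSON describing
// the data sources used by the report.
function listPbixDataSources(pbixPath: string): unknown {
  const zip = new AdmZip(pbixPath);
  const entry = zip.getEntries().find(e => e.entryName === 'Connections');
  if (!entry) {
    throw new Error('No Connections entry found in this .pbix file.');
  }
  // Strip a possible UTF-8 BOM before parsing the JSON.
  return JSON.parse(zip.readAsText(entry).replace(/^\uFEFF/, ''));
}

console.log(listPbixDataSources('report.pbix'));
```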
I often run scripts on remote machines and sometimes create custom HTML dashboards to monitor progress. I was wondering whether something like that would be better done by taking advantage of VS Code's SSH extension, which I already use to edit files remotely.
For example, I'd like to display a custom panel with information derived from a list of files on the remote server. My traditional approach would be to run a web server that serves the file contents in some custom JSON format and create a custom HTML client to display the data. Would it be possible to skip the web server part and instead create a custom VS Code extension that gets the data via some SSH API?
The LogTail extension might fit your case: use it to fetch your data and then proceed with your desired display method.
VSCode Marketplace for LogTail: https://marketplace.visualstudio.com/items?itemName=tiansin.logtail
I hope it helps :)
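If you go the custom-extension route instead: when you connect through Remote - SSH, a workspace extension runs on the remote extension host, so the normal file-system API already reads the remote files and no extra SSH plumbing is needed. A minimal sketch (the view type, title, and HTML are placeholders of mine):

```typescript
import * as vscode from 'vscode';

// Over Remote - SSH the extension host runs on the remote machine, so
// workspace.fs reads the remote files directly - no web server needed.
export async function showProgressPanel(dir: vscode.Uri): Promise<void> {
  const entries = await vscode.workspace.fs.readDirectory(dir);
  const rows = entries
    .filter(([, type]) => type === vscode.FileType.File)
    .map(([name]) => `<li>${name}</li>`)
    .join('');

  const panel = vscode.window.createWebviewPanel(
    'progressDashboard', // hypothetical view type
    'Remote progress',   // panel title (made up)
    vscode.ViewColumn.One,
    {}
  );
  panel.webview.html = `<html><body><ul>${rows}</ul></body></html>`;
}
```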
I am using CloverETL Designer for ETL operations and I want to load some CSV files from GCS into my Clover graph. I used a FlatFileReader and tried to get the file using a remote File URL, but it is not working. Can someone please detail the entire process here?
The path for the file in GCS is
https://storage.cloud.google.com/PATH/Write_to_a_file.csv
And I need to get this CSV file into the FlatFileReader in CloverETL Designer.
You should use the Google Cloud Storage API to GET the file; Clover's HTTPConnector component will allow you to pass in the appropriate parameters to make a GET request (you will presumably have to do an OAuth2 authentication first to get a token), and send the output to a local destination specified in "Output File URL." Then you can use a FlatFileReader to read from that local file.
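For reference, the request the HTTPConnector has to make looks roughly like this, sketched here in Node/TypeScript with fetch. The bucket name, object name, and the way the OAuth2 token is obtained are all placeholders:

```typescript
// Sketch of the GET the HTTPConnector needs to make against the GCS JSON API.
async function fetchCsvFromGcs(): Promise<string> {
  const bucket = 'my-bucket';
  const object = encodeURIComponent('PATH/Write_to_a_file.csv');
  const token = process.env.GCS_OAUTH_TOKEN; // e.g. output of `gcloud auth print-access-token`

  const response = await fetch(
    `https://storage.googleapis.com/storage/v1/b/${bucket}/o/${object}?alt=media`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  return response.text(); // raw CSV, ready to be written to the Output File URL target
}

fetchCsvFromGcs().then(csv => console.log('downloaded', csv.length, 'characters'));
```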
GCS has several different ways to download files from your buckets. You can use the console and the Cloud Storage browser. Steps: open the storage browser, navigate to the object you want to download, right click, and save to your chosen local folder. If you use Chrome the save appears as “Save Link As…”.
To use the gsutil command-line tool, run this command:
`gsutil cp gs://[BucketName]/[ObjectName] [ObjectDestination]`.
Or you can use the client libraries or the REST APIs to download files. With these last options you could work with a number of files or create a job to download them. Once they are in a location known to CloverETL, the process is straightforward.
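For example, with the Node.js client library a single-object download could look roughly like this. The bucket, object, and destination paths are placeholders, and authentication is assumed to come from Application Default Credentials:

```typescript
import { Storage } from '@google-cloud/storage'; // npm install @google-cloud/storage

// Minimal sketch: download one object to a local file that CloverETL can read.
async function downloadForClover(): Promise<void> {
  const storage = new Storage(); // uses Application Default Credentials
  await storage
    .bucket('my-bucket')
    .file('PATH/Write_to_a_file.csv')
    .download({ destination: './data-in/Write_to_a_file.csv' });
}

downloadForClover().catch(console.error);
```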
Within CloverETL Designer, in the navigation pane you can right-click a folder and choose Import; pick the folder in which you placed your GCS file. Once the file is imported, you can use data from it like any other data file in Clover. Since this is a .csv file, remember to edit your metadata (right-click the component, choose Extract Metadata, then edit data types, labels, and such in the Metadata Editor). Assign metadata to the edges of your components so they know what is coming into and going out of each step. Depending on your file, this process may be repeated many times.
Even with an ETL tool, getting the data and data types correct can be tricky. If you have questions about how to configure data types or your edges in an ETL project, a wiki may help. The web has additional resources that may help you get the end analysis you're looking for.
I'm hoping someone can help. I've started using the Community TFS Build Extensions, in particular the FTP activity. I followed the documentation here and got to grips with it pretty easily. I'm encountering one major problem though.
My Web app has a basic enough structure:
I start by creating the FindMatchingFile activity, which places the files in the drop location into an IEnumerable variable called FilesToFTP:
String.Format("{0}\**\*.*", BuildDetail.DropLocation)
When I iterate through the variable and print out the results, all seems correct:
G:\builds\Build.1203\CredentialManagement\bin\BusLogic.dll
G:\builds\Build.1203\CredentialManagement\css\style.css
G:\builds\Build.1203\CredentialManagement\AppError.aspx
......
G:\builds\Build.1203\CredentialManagement\Web.config
etc etc.
The problem is, when I pass that IEnumerable to the Ftp activity (converting it to a string array), it uploads all the files to the server over FTP, but it doesn't keep the directory structure of my web app. It just piles all the output (DLLs, .aspx files, etc.) into one directory. See the following two screenshots.
Is there any way I can use the FTP activity to upload all the output from the drop location recursively? I feel like I'm doing something simple wrong.
The FTP activity in TFS Build Extensions doesn't upload files recursively.
I think it would be a good addition to the activity. Please create a request for the project and we will add it in. For now, you can work around it by calling the Ftp activity recursively, once for each directory, and setting the RemoteDirectory accordingly.
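The real workaround lives in the build workflow itself, but the per-directory logic is just a tree walk. Here is an illustrative sketch in TypeScript, where ftpUpload stands in for one invocation of the Ftp activity and the paths are made up:

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Stand-in for one invocation of the Ftp activity (not a real TFS API):
// upload `files` into `remoteDirectory` on the server.
function ftpUpload(files: string[], remoteDirectory: string): void {
  console.log(`Would upload ${files.length} file(s) to ${remoteDirectory}`);
}

// Walk the drop location and call the "activity" once per directory,
// mirroring the local folder layout on the remote side.
function uploadRecursively(localDir: string, remoteDir: string): void {
  const entries = fs.readdirSync(localDir, { withFileTypes: true });

  const files = entries
    .filter(e => e.isFile())
    .map(e => path.join(localDir, e.name));
  if (files.length > 0) {
    ftpUpload(files, remoteDir);
  }

  for (const e of entries.filter(e => e.isDirectory())) {
    uploadRecursively(path.join(localDir, e.name), `${remoteDir}/${e.name}`);
  }
}

uploadRecursively('G:\\builds\\Build.1203\\CredentialManagement', '/site');
```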
I would like a better workflow for debugging uploaded SCOs. As things are, I must edit a file in the activity, repackage, upload, and test. Often, I just need to change a single line of code. It would be VERY nice to be able to edit that file, that line of code, on the server. So far, all I've found is that Moodle manages the files, so it seems impractical to locate and decipher the renamed files after upload.
Is there a way to configure Moodle so that it doesn't rename and relocate files in SCOs upon extraction? Actually, I'm open to any suggestions on the best, fastest workflow for debugging SCOs.
Problem background
Since Moodle 2.0, files are no longer stored on the server in the conventional /this/is/the/path/to/my.file way. Instead, files are rehashed and stored in Repositories (i.e. spread all over the moodledata folder as a collection of seemingly random data). This increases security and cross-OS compatibility but complicates things for people who would like to simply upload a SCORM zip package via FTP. Here's more information on file handling in Moodle 2.0.
Path to the solution
Let's locate the file you want to update, then update it.
Run phpMyAdmin, go to the mdl_files table, and find your file by name in the filename field (let's say it's portrait.jpg).
Look at the contenthash field; it'll look like abcde1234567890. This means your file is stored in the moodledata/filedir/ab/cd/ folder under the name abcde1234567890.
Rename the updated portrait.jpg to abcde1234567890, then upload and overwrite.
Go back to phpMyAdmin and update the filesize field in the record for portrait.jpg with the size of the updated file.
Obviously, this process can be automated. You'll have to write a script that allows you to upload a file, then it'll search for that file in mdl_files, save it to the correct folder and update all fields accordingly.
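As a rough illustration, such a script could look like this (Node/TypeScript with the mysql2 package; the connection details and the moodledata path are placeholders, and it keeps the old contenthash exactly as the manual steps above do):

```typescript
import * as fs from 'fs';
import * as path from 'path';
import { createConnection } from 'mysql2/promise'; // npm install mysql2 (assumption: any MySQL client works)

// Find the record in mdl_files, overwrite the hashed file on disk,
// and fix the filesize field - mirroring the manual procedure.
async function replaceScoFile(filename: string, updatedFile: string): Promise<void> {
  const db = await createConnection({
    host: 'localhost', user: 'moodle', password: 'secret', database: 'moodle', // placeholders
  });

  const [rows] = await db.execute(
    'SELECT id, contenthash FROM mdl_files WHERE filename = ? LIMIT 1',
    [filename]
  );
  const record = (rows as any[])[0] as { id: number; contenthash: string } | undefined;
  if (!record) throw new Error(`${filename} not found in mdl_files`);

  // Files live at moodledata/filedir/ab/cd/<full contenthash>.
  const hash = record.contenthash;
  const target = path.join('/var/moodledata/filedir', hash.slice(0, 2), hash.slice(2, 4), hash);
  fs.copyFileSync(updatedFile, target);

  await db.execute('UPDATE mdl_files SET filesize = ? WHERE id = ?',
    [fs.statSync(updatedFile).size, record.id]);
  await db.end();
}

replaceScoFile('portrait.jpg', './portrait.jpg').catch(console.error);
```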
Alternative idea
Enable the external package type (and also enable 'Update on every launch'). Go to Site administration / Plugins / Activities / SCORM and check the box down below. Now you'll be able to launch SCORM packages directly from another server, so Moodle won't mess with them. Of course, you can run into other (probably cross-domain related) problems.
Sergey's answer is very good, with one caveat:
In his example with the contenthash of abcde1234567890, the file is stored in the moodledata/filedir/ab/cd/ folder under the name abcde1234567890. Moodle uses the full contenthash to name the file.