How to run user code on a server in different languages?

I would like to make an application similar to CodeFights, in that a user is given a set of exercises for which he has to code solutions. I figured out how to take a user's JavaScript code and run it on a Node.js server using, for example, the Function constructor:
try {
  var solver = new Function('arg1', 'arg2', bodyAsString);
} catch (e) {
  console.log('Function cannot be parsed');
}
The good thing about this approach is that the solver(..., ...) function now has access only to the global scope, not the scope it was created in. Even better, one could also combine this with a sandbox module.
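For example, with Node's built-in vm module (a minimal sketch; note that vm by itself is not a hard security boundary, so untrusted code would still need OS-level isolation):
const vm = require('vm');

// userCode is assumed to hold the submitted source as a string
const sandbox = vm.createContext({ result: null });
try {
  vm.runInContext(userCode, sandbox, { timeout: 1000 }); // 1-second wall-clock limit
  console.log(sandbox.result);
} catch (e) {
  console.log('User code failed or timed out: ' + e.message);
}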
But what is the best approach if I want to support multiple languages, let's say JavaScript, Python, and C++? How do I go about solving this problem if I want to give the user a choice of language? Do I write the solution to a file and try to execute it via the command line, also in some kind of sandbox mode? Can this even be done if I use Node.js as the backend?
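What I have in mind for the command-line route is something like this (a sketch only: the temp-file naming, the python3 binary, and the five-second limit are assumptions, and real isolation would still need a container, chroot, or similar):
const { execFile } = require('child_process');
const fs = require('fs');
const os = require('os');
const path = require('path');

// write the submitted source to a temp file and run it in a child
// process with a hard time limit and bounded output
function runPythonSolution(source, callback) {
  const file = path.join(os.tmpdir(), 'solution-' + Date.now() + '.py');
  fs.writeFileSync(file, source);
  execFile('python3', [file], { timeout: 5000, maxBuffer: 1024 * 1024 },
    function (err, stdout, stderr) {
      fs.unlinkSync(file);
      callback(err, stdout, stderr);
    });
}
The same pattern would work for compiled languages such as C++, with an extra compile step (e.g. invoking g++) before the run.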

Visual Studio Code Extension Variable

Does anyone know if it's possible to write an extension in Visual Studio Code that can write the values in the Variables window to a file after each step while in debug mode? So, a step would happen, the variables would get written to a file, another step would happen, and the next set of variables would be written. Is there anything like this out there? Can it even be done?
Here's what I have so far:
1) I created an extension in VS Code, and will write my functionality in the following provided method:
function activate(context) {
  let disposable = vscode.commands.registerCommand('extension.helloWorld', function () {
    // Display a message box to the user
    vscode.window.showInformationMessage('Hello World!');
  });
  context.subscriptions.push(disposable);
}
2) There are some variables offered here that could be relevant (such as debugger), but they throw errors when I use them.
To explain more clearly, what I would like to do is the following: create an extension in VS Code that, when used, launches another instance of VS Code (as every extension does), and within this new instance, loads a user program, debugs it step by step, and writes the variables at each step of the user program to a file. The main reason for even making the extension is the last step, as VS Code doesn't have any built-in functionality to do anything like this that I know of. One issue is that I am not sure whether the 'debugger' variable in the extension (or any of the variables) refers to the new instance, or whether there is a way to do this.
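One hook that looks relevant is vscode.debug.registerDebugAdapterTrackerFactory, which lets an extension observe a debug session's Debug Adapter Protocol traffic, including the variable values sent back to the Variables window. A minimal, untested sketch (the output path is a placeholder):
const vscode = require('vscode');
const fs = require('fs');

function activate(context) {
  context.subscriptions.push(
    // '*' registers the tracker for every debug type
    vscode.debug.registerDebugAdapterTrackerFactory('*', {
      createDebugAdapterTracker(session) {
        return {
          // called for every message the debug adapter sends to VS Code
          onDidSendMessage(message) {
            if (message.type === 'response' && message.command === 'variables') {
              fs.appendFileSync('/tmp/vars.log', // placeholder path
                JSON.stringify(message.body.variables) + '\n');
            }
          }
        };
      }
    })
  );
}
exports.activate = activate;
A 'variables' response arrives whenever the Variables view refreshes, which happens on each step.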
If there is a better and smarter approach, I would love to hear it.
Thank you very much everyone in advance

#karate How to pass parameters to a feature file in a Gatling simulation class?

Let's consider a scenario: we have to run a performance test for a "create account" API, which takes an auth token as a header/path parameter and input data such as the user account name. To run the performance test for POST http://baseUrl/auth_param/create/input_data we have two feature files:
1. One feature file (e.g. generateAuth.feature) which generates the auth token.
2. A second feature file (createAccount.feature) which takes the auth token and the input data as parameters.
Here is my simulation class:
class <MyClass> extends Simulation {

  before {
    println("Simulation is about to start!")
  }

  val generateAuthTest = scenario("generateAuth").exec(karateFeature("classpath:path/generateAuth.feature"))
  val createAccountTest = scenario("test").exec(karateFeature("classpath:path/createAccount.feature"))

  setUp(
    createAccountTest.inject(rampUsers(1) over (10 seconds))
  ).maxDuration(1 minutes)

  after {
    println("Simulation is finished!")
  }
}
Here, can I read the auth token from generateAuth.feature, which is input for createAccount.feature, so that I can pass it as a parameter?
Please suggest how to pass parameters to createAccount.feature when calling it via the karateFeature method.
Let me put a requirement here: let's say we have some feature files for CRUD operations on particular data. For a functional scenario, I would create a new feature file and just use the CRUD files to test a SINGLE flow.
Now if I go for performance test cases on the individual operations, I feel there are two ways:
1. Create four new performance-test feature files (one for each CRUD method) and call the CRUD feature files from the respective test feature files. Finally, call the test feature files from the respective Gatling simulation classes. (In this case I end up creating more test feature files, as well as more simulation classes, for performance, which I want to avoid.)
2. Just call the CRUD feature files from the respective Gatling simulation classes and pass the required parameters to them. (In this case we need to create only four simulation classes and run them based on the operation: create, read, delete, and so on.)
I just wanted to know whether the second way is achievable in Karate, and if yes, please let me know how.
Summary: I think it's achievable using a third (extra) feature file for each individual use case, but I do not want to make an extra feature file per case, so that I can avoid maintenance work and can reuse the existing feature files from the functional tests in the performance tests.
Just use the normal Karate concepts such as karate-config.js.
You can easily switch environments by setting the karate.env system property.
For example:
mvn test -DargLine="-Dkarate.env=e2e"
EDIT: After you edited your question, it is clear you have a SINGLE flow you want to test. Please use a SINGLE feature. I suggest you move the generateAuth call into the Background of the feature. Also refer to the docs on callSingle() for advanced options; a sketch of that pattern follows.
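For example, in karate-config.js (a sketch: the classpath and the authToken variable name are assumptions about what generateAuth.feature defines; callSingle() runs the feature only once per run and caches the result):
function fn() {
  var config = { baseUrl: 'http://baseUrl' };
  // run the auth feature once and share its result with every scenario
  var result = karate.callSingle('classpath:path/generateAuth.feature', config);
  config.authToken = result.authToken;
  return config;
}
Every feature, including createAccount.feature, can then read authToken without any feature-to-feature wiring.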
If you are expecting two feature files to magically share data, that is not possible, and it is not needed if you structure your tests correctly.
If you really, really need this, please create a Java singleton and access it from each feature. I totally don't recommend this, though.
EDIT: From Karate 0.9.0 onwards, you can call a single Scenario within a feature if it has a tag:
classpath:animals/cats/create.feature#sometagname

UniData: list all available subroutines / all parameters

I am trying to wrap some UniData subroutines as a SOAP web service. I'm planning to use C# and the UODOTNET library (IBM U2 Data Management Interface for .NET). I'm also looking to create an engine that reads all the available subroutines from the data server, reads all their required parameters, and dynamically generates the code required for the web service.
My code would be something like this:
var session = UniObjects.OpenSession(
    "192.168.0.1",
    "user",
    "password",
    "account"
);
var cmd = session.CreateUniCommand();
cmd.Command = "LIST SUBURB.INDEX"; // ?????
cmd.Execute();
var res = cmd.Response;
Question 1: Is there any command that I can use to retrieve the list of all available subroutines?
Question 2: Is there any command that I can use to retrieve the list of all parameters for a specific subroutine?
Cheers
The short answer is no.
The longer answer is yes, but with a lot of work.
Since you are asking this question, I'm going to assume you are missing a lot of general knowledge about the platform. Hence, to be able to do this, you'll need to:
Learn about how VOC works, specifically how executable code can be catalogued here.
Learn about the CATALOG command and how globally, locally, and directly catalogued programs differ.
Understand how your system in particular is designed. Some places have everything directly catalogued in the VOC; others are a mix. If the former, your task will be easier.
Once you understand the above, you'll know how to get a list of all executable programs from VOC, local catalog and global catalog. For example, a simplified example for the VOC is the UniQuery command LIST VOC WITH F1="C".
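Using the UODOTNET pattern from the question, that query could be issued like so (a sketch: Response returns the raw LIST output, which you would still have to parse):
var cmd = session.CreateUniCommand();
cmd.Command = "LIST VOC WITH F1=\"C\""; // executable/catalogued VOC entries
cmd.Execute();
var catalogued = cmd.Response;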
The hard part is getting the parameter list, for which there is no system command. To do this you have two options:
Reverse engineer the byte code of every subroutine and tease out the number of parameters
If you have access to the related source code, parse it to generate the list.
Just to add a comment on this one: in UniData there is a MAKE.MAP.FILE command that will identify programs and subroutines (and the number of parameters) and put the information in the '_MAP_' file. This does not tell you what the parameters are used for, but it helps; see the sketch below.
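Driving that from UODOTNET could look like this (a sketch, reusing the session from the question; the exact LIST syntax for _MAP_ may need adjusting on your system):
var cmd = session.CreateUniCommand();
cmd.Command = "MAKE.MAP.FILE"; // (re)build the _MAP_ file
cmd.Execute();
cmd.Command = "LIST _MAP_"; // then read the program/subroutine map
cmd.Execute();
var map = cmd.Response;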

FileMaker: missing function

Set Variable [$Write; Value: <Function Missing>("filepath";$inputedText)]
I'm trying to determine what the missing function is. I'm trying to write data to an external file with this script, and this is one line of code from the script. I can't post the rest of the code for security reasons. Any direction as to what the missing function would be would be greatly appreciated.
The <Function Missing> message means that this code was written with the expectation that a now-missing plugin would be present. To resolve this, you'll need to determine which plugin it is and install it on your development machine (and likely on all machines that need to use this script, unless you choose to run it as a Perform Script on Server (PSoS) script on the server).
My best guess based on functionality and the arguments being passed is that the missing plugin may be the Monkeybread Plugin.
It's the Write To File function in ScriptMaster.

Problems with Perl XML-RPC in combination with Perl reflection?

I'm using Frontier::Daemon to build a test library server for the Robot Framework test automation framework. I got the test library server working for executing code locally, but when it runs over XML-RPC is when I run into problems. Part of the issue might also be that I'm using Perl reflection to execute test commands.
RPC::XML might be a better fit, but at the time I was developing the server, Frontier::Daemon seemed easier to start with.
The Perl reflection code was borrowed from threads posted on this site as well as Wikipedia's page on code reflection (Perl section).
The code is hosted at Google Code; you can browse the code or check it out for review. The issue is described in more detail at the project site.
I was hoping the Perl developer community could give me some pointers on the source of the problem and how to fix it.
Thanks,
Dave
There are a couple of things you are missing. First, Frontier::Daemon calls the "methods" you provide it as simple subroutine calls, but your two provided methods expect to be called as methods of your remote server object. Change your code to do this:
my $svr = Frontier::Daemon->new(
    methods => {
        get_keyword_names => sub { $self->get_keyword_names(@_) },
        run_keyword       => sub { $self->run_keyword(@_) },
    },
    ...
to call your methods as they seem to expect.
Second, your get_keyword_names tries to return an array, but the interface you are using seems to only allow a single return value and is calling the methods in scalar context, causing get_keyword_names to return the count of elements in the array. I think you want to be returning a reference to the array instead:
return \@methods;
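With that change, get_keyword_names might look something like this (a sketch: the package name MyTestLibrary is a placeholder, and the symbol-table walk assumes every sub in that package is a keyword):
sub get_keyword_names {
    my $self = shift;
    # walk the package's symbol table and keep names bound to subs
    no strict 'refs';
    my @methods = grep { defined &{"MyTestLibrary::$_"} } keys %{"MyTestLibrary::"};
    return \@methods;    # a single array reference, not a list
}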