I understand that direct Gremlin scripts are susceptible to injection attacks and that parameterizing them is the best option.
My question is whether creating a GraphTraversal object and running it through GroovyTranslator to arrive at the Gremlin script is also susceptible to injection.
Is something like the following safe from a Gremlin-injection point of view?
final String script = GroovyTranslator.of("g").translate(traversal.asAdmin().getBytecode());
client.submitAsync(script);
The Translator implementations don't do anything in particular to detect malicious injections, but I also can't quite imagine how a constructed traversal would end up in a state where the Gremlin string produced from it contained an injected payload. The Translator does not evaluate any of the arguments passed to any of the steps in the traversal. In other words, if you had:
gremlin> translator = GroovyTranslator.of('g')
==>translator[g:gremlin-groovy]
gremlin> x = "'bob');g.V().drop()"
==>'bob');g.V().drop()
gremlin> traversal = g.V().has('name',x)
gremlin> translator.translate(traversal)
==>g.V().has("name","'bob');g.V().drop()")
gremlin> x = "\");g.V().drop()"
==>");g.V().drop()
gremlin> traversal = g.V().has('name',x)
gremlin> translator.translate(traversal)
==>g.V().has("name","""\");g.V().drop()""")
As you can see, in the above example the Translator treats the value of x wholly as a String. The Translator will, however, evaluate Bytecode objects given as arguments, but that is expected, as some steps do accept a Traversal as an argument. So if you evaluated user input into Bytecode yourself, then I suppose someone could slip something in, though that seems unlikely. Of course, it would also mean that the input you were accepting from users effectively let them write their own Gremlin, so I don't really consider that an injection scenario.
Generally speaking, I think you run into injection-style attacks in much the same way as with SQL: when you manually construct a Gremlin string without checking the input. It remains a best practice to validate your input in any case and not to dynamically evaluate any input directly (unless you deliberately allow users to submit their own Gremlin queries, in which case you may need additional guards depending on your use case).
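For completeness, this is what the parameterized alternative looks like with the TinkerPop Java driver. A minimal sketch (cluster settings omitted): untrusted input travels as a binding and is never spliced into the Gremlin string itself.

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import java.util.Map;

public class ParameterizedSubmit {
    public static void main(String[] args) {
        Cluster cluster = Cluster.open();   // assumes a reachable Gremlin Server
        Client client = cluster.connect();

        // the "malicious" value from the transcript above is now harmless:
        // it is bound to the variable n, never concatenated into the script
        String userInput = "'bob');g.V().drop()";
        Map<String, Object> bindings = Map.of("n", userInput);
        client.submit("g.V().has('name', n)", bindings).all().join();

        cluster.close();
    }
}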
I wrote a small CDK construct that parses the logs in a CloudWatch log group via a Lambda and sends an email when a pattern is matched. This allows a developer to be notified via an SNS topic should an error appear in the logs.
The construct needs to know which log group to monitor and which pattern to look for. These are currently passed in as parameters to its constructor. The user of my small construct library is supposed to use this construct as part of their stack. However, one could also define them as parameters or, better yet given what the docs say, as values in the context, basically using this construct in a standalone app.
Would this be an appropriate use of the context? What else is it useful for?
It's hard to give a definitive answer, but I would recommend always passing properties to a construct explicitly via its constructor.
A) This creates consistency with the rest of the constructs.
B) Nothing is 'hidden' in your construct definition.
The only thing I've generally found context useful for is passing in parameters from the CLI, but even that is pretty rare and there are often better ways to do it.
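To make the recommendation concrete, here is a minimal sketch of the props-based approach in TypeScript (the interface and class names are made up for illustration; only the general CDK v2 pattern is assumed):

import { Construct } from 'constructs';
import * as logs from 'aws-cdk-lib/aws-logs';
import * as sns from 'aws-cdk-lib/aws-sns';

// Every dependency of the construct is declared here explicitly;
// nothing is read from context inside the construct itself.
export interface LogAlertProps {
  readonly logGroup: logs.ILogGroup; // which log group to monitor
  readonly pattern: string;          // which pattern to look for
  readonly topic: sns.ITopic;        // where to send the notification
}

export class LogAlert extends Construct {
  constructor(scope: Construct, id: string, props: LogAlertProps) {
    super(scope, id);
    // ... wire up the subscription filter, Lambda and SNS topic here ...
  }
}

// Usage inside a stack; all inputs are visible at the call site:
// new LogAlert(this, 'ErrorAlert', { logGroup, pattern: 'ERROR', topic });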
I am working with a legacy Scala codebase, and as is often the case, modifying the code is quite difficult without touching many different parts.
One of my new requirements is to make several decisions based on some input parameters. The problem is that these decisions have to be made at various points along the execution. One option is to encapsulate all those parameters in a case class instance and pass it along, but that means I would have to modify multiple method signatures, and I want to avoid this approach as much as possible.
Another approach would be to create a global object containing all those input parameters, accessible from different points in the execution. Is that a good approach in Scala?
No, using global mutable variables to pass “hidden” parameters is not a good idea, not in Scala and not in any other programming language. It makes the code hard to understand and modify, because a function's behaviour now depends on which functions were invoked earlier. And it's extremely fragile, because you might forget to set one of those global parameters before invoking the function, which means that it will use whatever value was stored there before. This is the kind of thing that can appear to work for years, and then break when you modify a completely unrelated part of the program.
I can't stress this enough: do not use global mutable variables, period. The solution is to man up and change those method signatures. Depending on the details, dependency injection may or may not help in your particular case.
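For illustration, one low-friction variant of the case class approach is to thread the parameters through as an implicit, so that only methods which actually consume the decisions need their signatures changed. A small self-contained sketch (all names invented):

// bundle the run-wide inputs in one immutable value
final case class RunConfig(featureEnabled: Boolean, threshold: Int)

object Pipeline {
  // only methods that actually need the decisions take the implicit
  def stepA()(implicit cfg: RunConfig): Int =
    if (cfg.featureEnabled) stepB() else 0

  def stepB()(implicit cfg: RunConfig): Int =
    cfg.threshold * 2
}

object Main extends App {
  implicit val cfg: RunConfig = RunConfig(featureEnabled = true, threshold = 21)
  println(Pipeline.stepA()) // prints 42
}

Unlike a global, the configuration is still an explicit, immutable argument; the implicit merely removes the syntactic noise at each call site.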
I have a program that can dynamically generate expressions in SMT-LIB format and I am trying to connect these expressions to CVC4 to test satisfiability and get the models. I am wondering if there is a convenient way to parse these strings through the CVC4 C++ API or if it would be best to just store the generated SMT-LIB code in a file and redirect input to the cvc4 executable.
A cursory look at their API doesn't reveal anything obvious, so I don't think they support this mode of operation. In general, loading such statements "on the fly" is tricky, since an expression by itself doesn't make much sense: you'd have to be in a context that has all the relevant sorts defined, along with all the definitions your expressions rely on, including the selection of the proper logic. That is, for instance, why the corresponding function in z3 takes extra arguments: https://z3prover.github.io/api/html/classz3_1_1context.html#af2b9bef14b4f338c7bdd79a1bb155a0f
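For comparison, this is roughly how the z3 function linked above is used: the SMT-LIB text is parsed inside an existing context, which supplies the sorts, declarations and logic (a sketch against the z3 C++ API, shown only to illustrate the point; CVC4 is not involved):

#include <iostream>
#include <z3++.h>

int main() {
    z3::context ctx;
    // the context owns every sort/declaration the parsed text refers to
    z3::expr_vector assertions =
        ctx.parse_string("(declare-const x Int) (assert (> x 0))");
    z3::solver s(ctx);
    for (unsigned i = 0; i < assertions.size(); ++i)
        s.add(assertions[i]);
    std::cout << s.check() << std::endl; // expected: sat
    return 0;
}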
Having said that, your best bet might be to ask directly at https://github.com/CVC4/CVC4/issues to see if they have something similar.
Let's say we have the following model:
Collector:
model Collector
Real collect_here;
annotation(defaultComponentPrefixes="inner");
end Collector;
and the following model potentially multiple times:
model Calculator
outer Collector collector;
Real calculatedVariable = 2*time;
equation
calculatedVariable = collector.collect_here;
end Calculator;
The code above works if Calculator is present only once in the system to be simulated. If the model exists more than once, I get a singular system. This is demonstrated by the Example below: changing the parameter works gives either a working or a failing system.
model Example
parameter Boolean works = true;
inner Collector collector;
Calculator calculator1;
Calculator calculator2 if not works;
end Example;
Using an array inside the Collector to pass multiple variables doesn't solve it either.
Another possible way to solve this is by using connectors, but I have only managed to make that work with a single Calculator.
Using multiple instances of Calculator does break the model, as the single variable collect_here will then have multiple equations trying to compute its value. Therefore Dymola complains that the system is structurally singular, which in this case means that there are more equations than variables in the resulting system of equations.
To give a bit more insight: actually, checking Collector by itself will already fail, since from Modelica 3.0 on every component has to be balanced (meaning it must have as many equations as unknowns), which is not the case for Collector, as it has one unknown but no equation. This strongly limits the possible applications of the inner/outer construct, as basically every variable has to be computed where it is defined.
In the given example this is compensated for in the overall system if exactly one Calculator is used, so this single combination will work. Although it works, it is something that should not be done, for the obvious reason that it is very error-prone (and all sub-models should pass the check).
Your question on how to solve this issue is actually missing a description of what the issue really is. There are some cases I can think of where your approach could be useful:
You want to plot multiple variables from a single point, which would be collector. For this purpose "variable selections" should be the most straightforward way to go: see Dymola Manual Vol. 1, Section "4.3.11 Matching and variable selections" on how to apply them.
You want to carry out some mathematical operation on those variables. Then it could be useful to have a vectorized input of variable size, which enables an arbitrary number of connections to that input. For an example of this, take a look at Modelica.Blocks.Math.MultiSum.
You want to route multiple signals between different models (which is unlikely judging from your description, but still): then expandable connectors would be a good possibility. To get an impression of what they do, take a look at Modelica.Blocks.Examples.BusUsage.
Hope this helps; otherwise, please specify more clearly what you actually want to achieve with your code.
I prepared a demonstrative library for such a scenario some days ago. You can access it at https://gist.github.com/beutlich/e630b2bf6cdf3efe96e5e9a637124fe1. If you read the documentation on Example2, you will see the link to an article by H. Elmqvist et al., which is the clue to your problem: you need a connector, and inherited connects from every Calculator to the one Collector.
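For reference, the core of such a connector-based pattern can be sketched as follows (this is an illustration of the idea, not the exact code from the gist): a flow variable makes the connection semantics sum an arbitrary number of contributions, so every sub-model stays balanced no matter how many Calculators are connected.

connector SumPort
  Real sum "Collected result, identical at every connection point";
  flow Real part "Individual contribution, summed by the connection";
end SumPort;

model Calculator
  SumPort port;
  Real calculatedVariable = 2*time;
equation
  port.part = -calculatedVariable; // contribution flows out of the model
end Calculator;

model Collector
  SumPort port;
  Real collect_here;
equation
  collect_here = port.part; // flow balance delivers the sum of all parts
  port.sum = collect_here;  // broadcast the result back to the port
end Collector;

model Example
  Collector collector;
  Calculator calculator1;
  Calculator calculator2;
equation
  connect(calculator1.port, collector.port);
  connect(calculator2.port, collector.port);
end Example;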
I have heard people describe dynamic patching as a bit of a hack or at risk of breaking in future releases of Pd. This is reasonable enough, but it seems to imply that there are alternatives when building abstractions.
Dynamic patching seems to be useful for both instantiating a variable number of objects and connecting up to a variable number (a number defined at creation time - I personally don't need it to change after the fact, at this stage) of inlets and outlets within an abstraction.
Now I understand that the [clone] object can solve the problem of creating objects. I can see, too, that looping through send and receive objects would, with careful planning, solve much of the connection issue. What I do not understand is how objects like [trigger], [route] and [select] can be adjusted or replaced in some way. I fail to see how you would avoid using dynamic patching to, for example, create a [trigger f f] when the creation arg of your abstraction is 2, and a [trigger f f f] when the creation arg is 3. The same goes for [route], [select] and similar objects.
EDIT: The original question was perceived as too vague. I later posed a follow-up question in the comments which should really be here instead. As it happens, the answer to the follow-up provided a good answer to the original question, in my opinion. So to summarise and hopefully clarify, I was after a few "tools" to use when building abstractions so that I could limit my use of dynamic patching, if possible. These tools turned out to be:
using send and receive instead of inlets and outlets (although [initbang] can be used for creating inlets and outlets at instantiation).
using [clone]
chaining trigger, route and select objects using send and receive - for example, using [t b b] - [t b b] instead of [t b b b]. This means that the number of arguments in these objects can be defined at creation time with the help of [clone] for example. This is discussed in the Pd mailing list.
using [initbang] as indicated in the answer below.
After having attempted to build a drum machine with presets and an arbitrary number of tracks with my limited knowledge of dynamic patching techniques, I realised that there must be many ways of avoiding the problems I had when doing this, which were several! Of course, some things have to be done with dynamic patching and that's fine. It's just about creating manageable code.
This is really an answer to the "follow-up question" in the comments¹, rather than to the original question (which I consider too broad to be answered):
Is there a way to define an abstraction that has an argument that defines how many outlets the abstraction exposes?
Sure, just use $1 for that.
E.g. [gates 10] could create 10 outlets...
Presumably it could dynamically patch itself, but that doesn't seem like a good idea.
well, if you want an abstraction to have a dynamic API (that is: a variable number of inlets/outlets), then there is no way around dynamic patching.
Is this a good case for building your own external?
depends on what you actually want the external to do.
the iemguts library (disclaimer: of which I am the author) has everything in place to allow you to dynamically patch what you need.
Most importantly, there is [initbang], which creates iolets before Pd tries to connect them (if you use [loadbang], the iolets will only be created after Pd has failed to connect to them).
It also includes a [canvasargs] object, which allows you to get all the arguments of the abstraction (this simplifies, for example, the task of having the number of outlets equal the number of arguments, as with [trigger] or [pack]).
if instead you want to wrap the entire functionality of your abstraction into an external, that's of course also possible (and pretty simple in the realm of C).
Also keep in mind that others might have already coded what you need.
¹ please don't abuse the comment field for follow-up questions. either update your original question (if the follow-up is a mere clarification of the original question) or post a new one.