I'm creating a jQuery Template system that recursively calls the same template as needed. Most of the examples of "recursive" templates are actually just templates calling other templates, even the ones that pass around the $item object to retain custom functions. In this case, I want to call the same template with a sub-portion of the original $data (AKA $item.data) while also passing around custom template functions passed in to the options of the original tmpl call.
// original template call
$("#someTemplate").tmpl(data, {
    someFunction1: function(itemToCheck) {
        return itemToCheck.p3;
    },
    someFunction2: function(itemToCheck) {
        return itemToCheck.p1 + ", " + itemToCheck.p2;
    }
}).appendTo(".results");
<!--in-template secondary call-->
{{tmpl($data.sub, $item) "#someTemplate"}}
Full code: jsFiddle
It seems that passing around $item as the options parameter for the recursive call results in the data parameter being ignored, probably because $item.data contains the original object and it simply overwrites the new data parameter. As a result, at each recursive level, I am still operating at the original caller level in the object and making no further progress in the object structure (hence the stack issue).
Is there some other property of $item I need to use to pass around only the custom functions without having $item.data override the template data being passed in?
Update
You definitely cannot just pass $item as the options parameter to {{tmpl}}. After looking at the code, it sets data for the new template call and then blows it away with the original $item.data through jQuery.extend a few lines later.
While I cannot seem to find a variable on $item that wraps all the custom functions, I was able to work around this issue by passing in each custom function individually from the original $item object. Not ideal, but it works.
{{tmpl($data.sub, { someFunction1: $item.someFunction1, someFunction2: $item.someFunction2 }) "#someTemplate"}}
I am writing a file manager for my Perl application. Information about each file is kept as an object. When I remove a file, I'd like to change the corresponding object's class to RemovedFile. For this class, a call to any valid method of "File" would produce a fatal error and a stack trace. This is to catch cases where a stale reference to the object is kept when it shouldn't be.
I thought about two ways I could implement this:
"RemovedFile" inherits from "File" and redefines all its methods with a call to fatal error. Downside to this is that if I add a new method to "File" I need to add it to "RemovedFile" as well.
Adding a call to some empty method to every method of "File". "RemovedFile" would redefine this one method to report fatal error. (See code below for an example of what I mean). Downside to this is that every method of "File" would have to be bothered with calling the "remove_guard" in the beginning which IMO is not very clean.
# Inside the File class:
sub any_method_of_file_class {
    my $self = shift;
    $self->_removed_guard();
    # rest of code
}

sub _removed_guard {
    # do nothing
}

# Inside the RemovedFile class, redefine only _removed_guard:
sub _removed_guard {
    my $self = shift;
    $self->{logger}->fatal_with_stack_trace();
}
I wanted to ask if there is any better way to implement this kind of behaviour in Perl?
For example, could I use some tricks to first list and then dynamically redefine all methods of a parent class without specifying their exact names?
You can define the RemovedFile class without any relation to the original File class. In RemovedFile, use AUTOLOAD to handle any method.
If you want to redefine (override) all the methods of the parent class, don't inherit from the parent class. You can use the same interface, but you don't need the connection to some module you are going to completely ignore.
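A minimal sketch of that approach, assuming RemovedFile only needs to remember a few fields; Carp's confess stands in for the fatal_with_stack_trace() logger call from the question:

package RemovedFile;
use strict;
use warnings;
use Carp qw(confess);

sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;
}

# Catch any method call on a removed file and die with a stack trace.
our $AUTOLOAD;
sub AUTOLOAD {
    my ($self) = @_;
    ( my $method = $AUTOLOAD ) =~ s/.*:://;   # strip the package name
    return if $method eq 'DESTROY';           # don't trap object destruction
    confess "Attempt to call '$method' on a removed file";
}

1;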
I think I'd probably have a factory method in File that returns a new object for RemovedFile:
my $removed_file_obj = $file->remove;
That new class only knows what it needs to know about removed files. The remove method can do whatever cleanup you require.
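A sketch of what that factory method might look like, assuming the RemovedFile constructor above and a path field (both are illustrative):

package File;

sub remove {
    my ($self) = @_;
    # ... close handles, unlink the file, whatever cleanup you need ...
    return RemovedFile->new( path => $self->{path} );
}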
Then, when you are dealing with lists of objects, some of which may be File and some of which may be RemovedFile, filter the ones you want. This is outside of the class definitions because the class only defines the behavior of the objects and not how we employ the objects.
Here's one way to check by the object type, maybe even with the new isa feature:
use v5.32;
use experimental qw(isa);
foreach my $file ( @files ) {
    # next if $file->isa( 'RemovedFile' );
    next if $file isa 'RemovedFile';
    ...
}
But you probably shouldn't check what something is; check what it can do. Since RemovedFile doesn't inherit a bunch of methods that do nothing, can returns false for those methods, so you can skip those objects:
foreach my $file ( @files ) {
    next unless $file->can( 'some_method_not_in_RemovedFile' );
    ...
}
I have read in various places that global variables are at best a code smell and best avoided. At the moment I am working on refactoring a big function-based PS script into classes, and thought to use a singleton. The use case is a large data structure that will need to be referenced from a lot of different classes and modules.
Then I found this, which seems to suggest that Singletons are a bad idea too.
So, what IS the right way (in PS 5.1) to create a single data structure that needs to be referenced by a lot of classes, and modified by some of them? Likely pertinent is the fact that I do NOT need this to be thread safe. By definition the queue will be processed in a very linear fashion.
FWIW, I got to the referenced link looking for information on singletons and inheritance, since my singleton is simply one of a number of classes with very similar behavior: I start with the singleton, which contains collections of the next class, which each contain collections of the next class, forming a hierarchical queue. I wanted to have a base class that handles all the common queue management and then extend it for the differing functionality of each class. That works great, other than having that first extended class be a singleton. That seems to be impossible, correct?
EDIT: Alternatively, with this approach of nesting classes in a generic list property, is it possible to identify the parent from within a child? This is how I handled it in the function-based version: a global [XML] variable formed the data structure, and I could step through that structure, using .SelectNode() to populate a variable to pass to the next function down, and using .Parent to get information from higher up, especially from the root of the data structure.
EDIT: Since I seem not to be able to paste code here right now, I have some code on GitHub. The example of where the singleton comes in is at line 121, where I need to verify whether there are any other instances of the same task that have not yet completed, so I can skip all but the last instance. This is a proof of concept for deleting common components of various Autodesk software, which is managed in a very ad hoc manner. I want to be able to install any mix of programs (packages) and uninstall on any schedule, and ensure that the last package that has a shared component is the one that uninstalls it, so as not to break other dependent programs before that last uninstall happens. Hopefully that makes sense. Autodesk installs are a fustercluck of misery. If you don't have to deal with them, consider yourself lucky. :)
To complement Mathias R. Jessen's helpful answer - which may well be the best solution to your problem - with an answer to your original question:
So, what IS the right way (in PS 5.1) to create a single data structure that needs to be referenced by a lot of classes, and modified by some of them [without concern for thread safety]?
The main reason that global variables are to be avoided is that they are session-global, meaning that code that executes after your own sees those variables too, which can have side effects.
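A quick sketch of what that means in practice (the variable name is made up):

# A "global" variable set inside your script...
& {
    $global:AppState = @{ Queue = @() }
}
# ...is still sitting in the session after your script has finished.
$AppState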
You cannot implement a true singleton in PowerShell, because PowerShell classes do not support access modifiers; notably, you cannot make a constructor private (non-public), you can only "hide" it with the hidden keyword, which merely makes it less discoverable while still being accessible.
You can approximate a singleton with the following technique, which itself emulates a static class (which PowerShell also doesn't support, because the static keyword is only supported on class members, not the class as a whole).
A simple example:
# NOT thread-safe
class AlmostAStaticClass {

    hidden AlmostAStaticClass() { Throw "Instantiation not supported; use only static members." }

    static [string] $Message # static property

    static [string] DoSomething() { return ([AlmostAStaticClass]::Message + '!') }
}
[AlmostAStaticClass]::<member> (e.g., [AlmostAStaticClass]::Message = 'hi') can now be used in the scope in which AlmostAStaticClass was defined and all descendant scopes (but it is not available globally, unless the defining scope happens to be the global one).
If you need access to the class across module boundaries, you can pass it as a parameter (as a type literal); note that you still need :: to access the (invariably static) members; e.g.,
& { param($staticClass) $staticClass::DoSomething() } ([AlmostAStaticClass])
Implementing a thread-safe quasi-singleton - perhaps for use with ForEach-Object -Parallel (v7+) or Start-ThreadJob (v6+, but installable on v5.1) - requires more work:
Note:
Methods are then required to get and set what are conceptually properties, because PowerShell doesn't support code-backed property getters and setters as of 7.0 (adding this ability is the subject of this GitHub feature request).
You still need an underlying property however, because PowerShell doesn't support fields; again the best you can do is to hide this property, but it is technically still accessible.
The following example uses System.Threading.Monitor (which C#'s lock statement is based on) to manage thread-safe access to a value; for managing concurrent adding and removing items from collections, use the thread-safe collection types from the System.Collections.Concurrent namespace.
# Thread-safe
class AlmostAStaticClass {

    static hidden [string] $_message = ''               # conceptually, a *field*
    static hidden [object] $_syncObj = [object]::new()  # sync object for [Threading.Monitor]

    hidden AlmostAStaticClass() { Throw "Instantiation not supported; use only static members." }

    static SetMessage([string] $text) {
        Write-Verbose -vb $text
        # Guard against concurrent access by multiple threads.
        [Threading.Monitor]::Enter([AlmostAStaticClass]::_syncObj)
        [AlmostAStaticClass]::_message = $text
        [Threading.Monitor]::Exit([AlmostAStaticClass]::_syncObj)
    }

    static [string] GetMessage() {
        # Guard against concurrent access by multiple threads.
        # NOTE: This only works with [string] values and instances of *value types*
        #       or returning an *element from a collection* that is only subject to
        #       concurrency in terms of *adding and removing* elements.
        #       For all other (reference) types - entire (non-concurrent) collections
        #       or individual objects whose properties are themselves subject to
        #       concurrent access - the *calling* code must perform the locking.
        [Threading.Monitor]::Enter([AlmostAStaticClass]::_syncObj)
        $msg = [AlmostAStaticClass]::_message
        [Threading.Monitor]::Exit([AlmostAStaticClass]::_syncObj)
        return $msg
    }

    static [string] DoSomething() { return ([AlmostAStaticClass]::GetMessage() + '!') }
}
Note that, similar to crossing module boundaries, using threads also requires passing the class as a type object to other threads, which, however, is more conveniently done with the $using: scope specifier; a simple (contrived) example:
# !! BROKEN AS OF v7.0
$class = [AlmostAStaticClass]
1..10 | ForEach-Object -Parallel { ($using:class)::SetMessage($_) }
Note: This cross-thread use is actually broken as of v7.0, due to classes currently being tied to the defining runspace - see this GitHub issue. It remains to be seen whether a solution will be provided.
As you can see, the limitations of PowerShell classes make implementing such scenarios cumbersome; using Add-Type with ad hoc-compiled C# code is worth considering as an alternative.
This GitHub meta issue is a compilation of various issues relating to PowerShell classes; while they may eventually get resolved, it is unlikely that PowerShell's classes will ever reach feature parity with C#; after all, OOP is not the focus of PowerShell's scripting language (except with respect to using preexisting objects).
As mentioned in the comments, nothing in the code you linked to requires a singleton.
If you want to retain a parent-child relationship between your ProcessQueue and related Task instance, that can be solved structurally.
Simply require injection of a ProcessQueue instance in the Task constructor:
class ProcessQueue
{
    hidden [System.Collections.Generic.List[object]]$Queue = [System.Collections.Generic.List[object]]::New()
}

class Task
{
    [ProcessQueue]$Parent
    [string]$Id

    Task([string]$id, [ProcessQueue]$parent)
    {
        $this.Parent = $parent
        $this.Id = $id
    }
}
When instantiating the object hierarchy:
$myQueue = [ProcessQueue]::new()
$myQueue.Add([Task]@{ Id = "id"; Parent = $myQueue })
... or refactor ProcessQueue.Add() to take care of constructing the task:
class ProcessQueue
{
    [Task] Add([string]$Id) {
        $newTask = [Task]::new($Id, $this)
        $this.Queue.Add($newTask)
        return $newTask
    }
}
At which point you just use ProcessQueue.Add() as a proxy for the [Task] constructor:
$newTask = $myQueue.Add($id)
$newTask.DisplayName = "Display name goes here"
Next time you need to search related tasks from a single Task instance, you just do:
$relatedTasks = $task.Parent.Find($whatever)
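Find() is not defined in the snippets above; a minimal sketch of what such a method on ProcessQueue could look like (the string-Id signature and the matching logic are assumptions):

class ProcessQueue
{
    hidden [System.Collections.Generic.List[object]]$Queue = [System.Collections.Generic.List[object]]::New()

    # Return every queued task whose Id matches the supplied value.
    [object[]] Find([string]$Id) {
        $found = [System.Collections.Generic.List[object]]::new()
        foreach ($task in $this.Queue) {
            if ($task.Id -eq $Id) { $found.Add($task) }
        }
        return $found.ToArray()
    }
}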
Good day,
I have several partials that have code. Looking at the Code tab, I noticed that the partials contain very similar-looking code. Here are examples:
Partial 1
function onStart()
{
    $x = MyModel1::where('myColumn', 'myValue')->first();
    // lots of stuff using $x functions
    $this['viewData'] = $x->getViewData();
}
Partial 2
function onStart()
{
    $x = MyModel2::where('myColumn', 'myValue')->first();
    // lots of stuff using $x functions
    $this['viewData'] = $x->getViewData();
}
MyModel1 and MyModel2 both implement the same interface, so they have the same functions.
My question is, where do I put the code that is similar? I can put it in a plugin but that doesn't feel correct. I can create a base class and have the partials call the parent method but won't that mean modifying the code in the vendor folder?
If you really need to share this code, you can create a component and put the code there, since components are easily attached to other pages (the downside is that you need to create a plugin). You can write your code inside the component's onRun method.
https://octobercms.com/docs/plugin/components#page-cycle
Instead of assigning variables directly to $this, you assign them like this:
$this->page['var'] = 'value';
and it will work the same way as what you are doing now.
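For example, here is a sketch of such a component; the plugin namespace, component name, and modelClass property are made up, while componentDetails(), defineProperties(), onRun(), $this->property(), and $this->page[] are the standard component API:

<?php namespace MyAuthor\MyPlugin\Components;

use Cms\Classes\ComponentBase;

// Hypothetical component consolidating the shared partial code.
class SharedViewData extends ComponentBase
{
    public function componentDetails()
    {
        return [
            'name'        => 'Shared View Data',
            'description' => 'Loads view data shared by several partials.',
        ];
    }

    // Let each page or partial pick which model to use.
    public function defineProperties()
    {
        return [
            'modelClass' => [
                'title'   => 'Model class',
                'default' => 'MyAuthor\MyPlugin\Models\MyModel1',
            ],
        ];
    }

    public function onRun()
    {
        $modelClass = $this->property('modelClass');
        $x = $modelClass::where('myColumn', 'myValue')->first();

        // lots of stuff using $x functions ...

        $this->page['viewData'] = $x ? $x->getViewData() : null;
    }
}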
I have a class:
class Hello {
    function doSomething(&$reference, $normalParameter) {
        // do stuff...
    }
}
Then I have a controller:
class myController {
    function goNowAction() {
        $hello = new Hello();
        $var = new stdClass();
        $var2 = new stdClass();
        $bla = $hello->doSomething($var, $var2);
    }
}
The "goNow" action I call using my tests like so:
$this->dispatch('/my/go-now');
I want to mock the "doSomething" method so it returns the word "GONOW!" as the result. How do I do that?
I've tried creating a mock
$mock = $this->getMock('Hello', array('doSomething'));
And then adding the return:
$mock->expects($this->any())
     ->method('doSomething')
     ->will($this->returnValue("GONOW!"));
But I'm stumped as to how to hook this up to the actual controller that I'm testing. What do I have to do to get it to actually call the mocked method?
You could create a mock for the reference or, since it is just a simple reference as your code shows, pass a plain variable. Then the mocked method can be called and tested:
$ReferenceVariable = 'empty';

$mock = $this->getMock('Hello', array('doSomething'));
$mock->expects($this->any())
     ->method('doSomething')
     ->will($this->returnValue("GONOW!"));

$this->assertEquals('GONOW!', $mock->doSomething($ReferenceVariable, 'TextParameter'));
Your example code does not explain your problem properly.
Your method allows two parameters, the first being passed as a reference. But you create two objects for the two parameters. Objects are ALWAYS passed as a reference, no matter what the declaration of the function says.
I would suggest not declaring a parameter to be passed by reference unless there is a valid reason to do so. If you expect a parameter to be a certain object, add a typehint. If it must not be an object, try to avoid passing it as a reference variable (this will lead to confusion anyway, especially if you explicitly pass an object by reference, because everybody will try to figure out why you did it).
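For instance, the signature from the question could become something like this (a sketch; stdClass is only a stand-in for whatever type the parameter really has):

class Hello {
    // Typehint the expected object and drop the by-reference marker;
    // objects are effectively passed by handle anyway.
    function doSomething(stdClass $reference, $normalParameter) {
        // do stuff...
    }
}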
But your real question is this:
But I'm stumped as to how to hook this up to the actual controller that I'm testing. What do I have to do to get it to actually call the mocked method?
And the answer is: Don't create the object directly in the controller with new Hello. You have to pass the object that should get used into that controller. And this object is either the real thing, or the mock object in the test.
The way to achieve this is called "dependency injection" or "inversion of control". Explanations of what this means can be found with any search engine.
In short: Pass the object to be used into another object instead of creating it inside. You could use the constructor to accept the object as a parameter, or the method could allow for one additional parameter itself. You could also write a setter function that (optionally) gets called and replaces the usual default object with the new instance.
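A minimal sketch of the constructor-injection variant, leaving aside how Zend Framework actually constructs action controllers (there you would more likely use a setter or pull the collaborator from the bootstrap); the optional-parameter fallback is illustrative only:

class myController {
    private $hello;

    // Accept the collaborator from outside; fall back to the real thing.
    function __construct(Hello $hello = null) {
        $this->hello = $hello ? $hello : new Hello();
    }

    function goNowAction() {
        $var = new stdClass();
        $var2 = new stdClass();
        $bla = $this->hello->doSomething($var, $var2);
    }
}

// In the test, inject the mock instead of the real Hello:
$mock = $this->getMock('Hello', array('doSomething'));
$mock->expects($this->any())
     ->method('doSomething')
     ->will($this->returnValue('GONOW!'));

$controller = new myController($mock);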
I couldn't find any reference on how to use a parent form element in a subclassed form. Maybe that's because it's obvious to everyone but me, but it's got me stumped. This is what I tried.
At first, within my form constructor I called
parent::__construct($options = null);
then accessed the parent elements like this
$type = parent::setName($this->type);
The problem was that ALL the parent form elements would display, whether explicitly called or not. Someone said, "don't use __construct(), use the init() function instead." So I changed the constructor to init(), commented out the parent constructor, and ran the form. It bombed, saying it couldn't pass an empty value to setName(). I commented out all the setName() calls and the form ran, but it only displayed the elements instantiated in the subclassed form.
My question is this: If I don't use the parent constructor, how do i get and use the parent's form elements?
Solved: since the constructor was switched to init(), the call to the parent also needed to be switched. Easy for someone with a PHP background; not so much for one without.
Use
parent::init();
You should learn OOP principles first. Obviously you have no understanding of it whatsoever. You need to call parent::init() in your Form_Class::init() method, as you wrote, but why? Because otherwise the parent method is not called; it is simply overridden by the Form_Class method.
The other thing is that when you have a parent class "SuperForm" with an input and a submit element, your "SuperForm_Subclass" will have the same elements assigned. There is no need to use "parent::*" to access an element (the only exception would be if you used a static SuperForm variable to store them, which makes no sense).
You can easily use $this->inputElement and $this->submitElement inside your SuperForm_Subclass like you would in the SuperForm class.
In your example you could have used __construct() just as well, under the same condition of calling the parent constructor. You would be able to access the elements generated there too...
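A minimal sketch of the pattern, assuming Zend Framework 1's Zend_Form; the form and element names are made up:

// Parent form: define the shared elements in init(), not in __construct().
class My_Form_Super extends Zend_Form
{
    public function init()
    {
        $this->addElement('text', 'name', array('label' => 'Name'));
        $this->addElement('submit', 'send', array('label' => 'Send'));
    }
}

// Subclass: call parent::init() first, then add or adjust elements.
class My_Form_Sub extends My_Form_Super
{
    public function init()
    {
        parent::init(); // brings in 'name' and 'send'
        $this->addElement('text', 'email', array('label' => 'Email'));
        $this->getElement('name')->setLabel('Full name');
    }
}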