How do I monkey-patch a Jekyll extension or plugin?

I'd like to override a gem method (a Jekyll extension) that looks like this:
File: lib/jekyll-amazon/amazon_tag.rb.
module Jekyll
  module Amazon
    class AmazonTag < Liquid::Tag
      def detail(item)
        ...
      end
    end
  end
end
Liquid::Template.register_tag('amazon', Jekyll::Amazon::AmazonTag)
I have placed code with the same structure in my project, in _plugins/presentation.rb. If I rename the method detail to a new name, it works, but I can't get it to override the name detail.
What have I done wrong?
(Note: In version 0.2.2 of the jekyll-amazon gem, the detail method is private; I have changed this locally so that the method is no longer private.)

You can use alias_method:
module Jekyll
  module Amazon
    class AmazonTag < Liquid::Tag
      alias_method :old_detail, :detail

      def detail(item)
        # do your stuff here
        # then delegate to the old method
        old_detail(item)
      end
    end
  end
end
Liquid::Template.register_tag('amazon', Jekyll::Amazon::AmazonTag)
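Since Ruby 2.0 you can also use Module#prepend, which avoids the rename-and-call dance of alias_method. Here is a minimal, self-contained sketch; the class below is a stand-in for the real Jekyll::Amazon::AmazonTag, and the module name DetailOverride is made up for illustration:

```ruby
# Stand-in for the gem class; in real code this would be
# Jekyll::Amazon::AmazonTag with its actual #detail implementation.
class AmazonTag
  def detail(item)
    "original: #{item}"
  end
end

# A prepended module sits *before* the class in the ancestor chain,
# so its #detail runs first and can reach the original via super.
module DetailOverride
  def detail(item)
    "patched: #{super}"
  end
end

AmazonTag.prepend(DetailOverride)

puts AmazonTag.new.detail("book")  # => patched: original: book
```

Because the override calls super instead of a renamed copy, the original method keeps its name and private/public status, which also sidesteps the privacy issue mentioned in the question.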

Related

Fatal error: Uncaught ArgumentCountError: Too few arguments to function TYPO3\CMS\Core\Imaging\IconFactory::__construct()

After following the Composer installation guide for TYPO3 v10, I pointed an Apache vhost at the public folder. When I navigate to the index.php location in the browser, I get this error:
Fatal error: Uncaught ArgumentCountError: Too few arguments to function
TYPO3\CMS\Core\Imaging\IconFactory::__construct()
0 passed in /home/user/projects/typo3/public/typo3/sysext/core/Classes/Utility/GeneralUtility.php
on line 3423
and exactly 2 expected in
/home/user/projects/typo3/public/typo3/sysext/core/Classes/Imaging/IconFactory.php:71
It looks like a dependency injection problem. Can anybody help with this error?
For me this issue occurred after moving an existing project from a server into DDEV (which is similar to changing the path/URL via a vhost config). My guess is that it has to do with changed paths/URLs in cached files. This is how I solved it:
A) Manually delete all cached files:
t3project$ rm -rf public/typo3temp/*
t3project$ rm -rf var/*
B) I also had to change the ownership of some autogenerated folders/files to my current user (sudo chown -R myuser:myuser t3project/); then I was able to use the "Fix folder structure" tool under "Environment > Directory Status", and everything was working fine again. The last step may not apply to you, as in my case certain folders/files simply had the wrong owner because they were copied.
I had the same problem today, and it occurred because I was XCLASSing one of the core classes and used GeneralUtility::makeInstance(IconFactory::class) in that code.
The fix is to use DI in this class, just as you suggested. Also flush all caches afterwards to rebuild the DI container.
From this:
class CTypeList extends AbstractList
{
    public function itemsProcFunc(&$params)
    {
        $fieldHelper = GeneralUtility::makeInstance(MASK\Mask\Helper\FieldHelper::class);
        $storageRepository = GeneralUtility::makeInstance(MASK\Mask\Domain\Repository\StorageRepository::class);
        ...
To this:
class CTypeList extends AbstractList
{
    protected StorageRepository $storageRepository;
    protected FieldHelper $fieldHelper;

    public function __construct(StorageRepository $storageRepository, FieldHelper $fieldHelper)
    {
        $this->storageRepository = $storageRepository;
        $this->fieldHelper = $fieldHelper;
    }

    public function itemsProcFunc(&$params)
    {
        $this->storageRepository->doStuff();
        $this->fieldHelper->doStuff();
        ...
For future reference for others:
This can also happen in own extensions when the Core uses GeneralUtility::makeInstance on your classes. (e.g. in AuthenticationServices).
The trick here is to make these DI services public like so:
(in extension_path/Configuration/Services.yaml)
services:
  _defaults:
    autowire: true
    autoconfigure: true
    public: false

  Vendor\ExtensionName\Service\FrontendOAuthService:
    public: true
Here's documentation for it:
https://docs.typo3.org/m/typo3/reference-coreapi/master/en-us/ApiOverview/DependencyInjection/Index.html#knowing-what-to-make-public
I had this error because I used a Services.yaml file in one of my extensions but did not configure it correctly.
More info about the file itself can be found here.
Since the file is responsible for dependency injection, small mistakes, e.g. in namespaces, lead to the above-mentioned error.
To locate the error you can uninstall extensions that ship a Services.yaml.
When you have found the file/extension, check that all namespaces in the Classes directory are correct.
This means:
All filenames match the class they contain.
All namespaces in the files are correct for path and filename.
The namespace is resolved via Composer, so the extension has to be installed via Composer or must have an entry in the autoload section of composer.json.
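As a sketch, such an autoload entry in composer.json typically uses a PSR-4 mapping from the extension's namespace prefix to its Classes directory; the vendor name, extension key, and path below are placeholders:

```json
{
    "autoload": {
        "psr-4": {
            "Vendor\\ExtensionName\\": "public/typo3conf/ext/extension_name/Classes/"
        }
    }
}
```

After editing this, run composer dumpautoload and flush the TYPO3 caches so the DI container is rebuilt.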

How to include files from same directory in a module using Cargo/Rust?

I have a Cargo project consisting of three files in the same directory: main.rs, mod1.rs and mod2.rs.
I want to import functions from mod2.rs to mod1.rs the same way I would import functions from mod1.rs to main.rs.
I've read about the required file structure, but I don't get it: naming all the imported files mod.rs would lead to minor confusion in the editor, and it just complicates the project hierarchy.
Is there a way to import/include files independently of directory structure as I would in Python or C++?
main.rs:
mod mod1; // Works

fn main() {
    println!("Hello, world!");
    mod1::mod1fn();
}
mod1.rs:
mod mod2; // Fails

pub fn mod1fn() {
    println!("1");
    mod2::mod2fn();
}
mod2.rs:
pub fn mod2fn() {
    println!("2");
}
Building results in:
error: cannot declare a new module at this location
--> src\mod1.rs:1:5
|
1 | mod mod2;
| ^^^^
|
note: maybe move this module `src` to its own directory via `src/mod.rs`
--> src\mod1.rs:1:5
|
1 | mod mod2;
| ^^^^
note: ... or maybe `use` the module `mod2` instead of possibly redeclaring it
--> src\mod1.rs:1:5
|
1 | mod mod2;
| ^^^^
I can't use it as it doesn't exist as a module anywhere, and I don't want to modify the directory structure.
All of your top level module declarations should go in main.rs, like so:
mod mod1;
mod mod2;

fn main() {
    println!("Hello, world!");
    mod1::mod1fn();
}
You can then use crate::mod2 inside mod1:
use crate::mod2;

pub fn mod1fn() {
    println!("1");
    mod2::mod2fn();
}
I'd recommend reading the chapter on modules in the new version of the Rust book if you haven't already - they can be a little confusing for people who are new to the language.
Every file is a module and cannot import another without creating a new nested module.
a. Define modules in module index file
As #giuliano-oliveira's answer recommends.
Add pub mod mod1; pub mod mod2; in src/lib.rs / src/main.rs / src/foo/mod.rs.
b. Use #[path]
main.rs
#[path = "./mod2.rs"]
mod mod2;

fn run() {
    mod2::mod2fn()
}
Why?
This is a common pitfall for new Rust devs and understandably so.
The reason for the confusion comes from an inconsistency in behavior of mod X for files in the same folder. You can use mod X in lib.rs which appears to import a file adjacent to it, but you can't do the same in mod1.rs or mod2.rs.
The code of every file belongs to a module. The full path of the file's module (e.g. foo::bar::baz) rather than the location of the file, determines how it resolves mod X. You can think of it as every module having a fixed spiritual home, but it may have members defined further up in the hierarchy (e.g. src/lib.rs might contain: mod foo { mod bar { pub fn hello() {} } } - although then you cannot use mod foo; alone in lib.rs).
In main.rs, you are in the top-level module crate.
mod mod1; creates a new module mod1 and adds the contents of ./mod1.rs to that module.
So all code inside ./mod1.rs is inside the crate::mod1 module.
When you write mod mod2; inside ./mod1.rs, the compiler sees that it is inside crate::mod1, whose spiritual home dir is src/mod1, and looks for either:
src/mod1/mod2.rs
src/mod1/mod2/mod.rs
The complexity comes from allowing modules to be directories and also files, instead of forcing each module to be its own directory (maybe trying to avoid the Java folder structure) which would have removed the ambiguity.
The key thing to remember is that lib.rs and mod.rs are special files that behave differently to other files in a directory.
They will always be in the module described by the parent folder path (e.g. src/foo/bar/mod.rs = foo::bar) while all other files belong to their own modules (src/foo/bar/baz.rs = foo::bar::baz).
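The fixed-home rule above can be seen in a single runnable file by collapsing the hierarchy into inline modules; crate::mod2 resolves from inside mod1 exactly as it would across separate files:

```rust
// Single-file sketch of the same hierarchy: both modules live in the
// crate root, so crate::mod1 and crate::mod2 are siblings, exactly as
// with src/mod1.rs and src/mod2.rs declared from main.rs.
mod mod1 {
    // From inside crate::mod1, reach the sibling module via the crate root.
    pub fn mod1fn() -> String {
        format!("1 then {}", crate::mod2::mod2fn())
    }
}

mod mod2 {
    pub fn mod2fn() -> &'static str {
        "2"
    }
}

fn main() {
    println!("{}", mod1::mod1fn()); // prints "1 then 2"
}
```

Splitting each inline module body out into its own file (and keeping the mod declarations at the crate root) is all that the multi-file layout adds.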
The Rustonic Way
Rust has some opinions on this.
Using mod.rs is no longer recommended, which is good, because it behaves differently from its sibling files. But lib.rs and main.rs are still special.
If you want to put tests alongside your code (foo.rs + foo_test.rs), the recommendation is that you don't. I don't like this, so I use the #[path] approach above, which I think is fine for tests because they are not exported anywhere. Having to declare tests in the module above feels wrong, and I don't like having to use foo/test.rs either.
If you don't want all your mod statements in your main file (e.g. main.rs won't use some public members of a module; in this example, mod2), you can do the following:
structure your src this way:
main.rs
my_module/
    mod.rs
    mod1.rs
    mod2.rs
then you can just declare mod my_module and use my_module::mod1, like so:
main.rs:
mod my_module;
use my_module::mod1;

fn main() {
    println!("Hello, world!");
    mod1::mod1fn();
}
my_module/mod.rs:
pub mod mod1;
pub mod mod2;
my_module/mod1.rs:
use super::mod2;

pub fn mod1fn() {
    println!("1");
    mod2::mod2fn();
}

pytest implementing a logfile per test method

I would like to create a separate log file for each test method, and I would like to do this in conftest.py and pass the logfile instance to the test method. That way, whatever a test method logs goes to its own log file, which is very easy to analyse.
I tried the following.
Inside conftest.py file i added this:
logs_dir = pkg_resources.resource_filename("test_results", "logs")

def pytest_runtest_setup(item):
    test_method_name = item.name
    testpath = item.parent.name.strip('.py')
    path = '%s/%s' % (logs_dir, testpath)
    if not os.path.exists(path):
        os.makedirs(path)
    # make_logger creates the logfile and returns the Python logging object.
    log = logger.make_logger(test_method_name, path)
The problem here is that pytest_runtest_setup cannot return anything to the test method; at least, I am not aware of a way.
So I thought of creating a fixture in conftest.py with scope="function" and calling it from the test methods. But the fixture does not know about the pytest Item object, whereas pytest_runtest_setup receives the item parameter, from which we can find out the test method's name and path.
Please help!
I found this solution by researching further from webh's answer. I tried to use pytest-logger, but its file structure is very rigid and it was not really useful for me. I found this code working without any plugin. It is based on set_log_path, which is an experimental feature.
Pytest 6.1.1 and Python 3.8.4
# conftest.py
# Required modules
import pytest
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    config = item.config
    logging_plugin = config.pluginmanager.get_plugin("logging-plugin")
    filename = Path('pytest-logs', item._request.node.name + ".log")
    logging_plugin.set_log_path(str(filename))
    yield
Note that Path can be substituted with os.path.join. Moreover, different tests can be set up in different folders, and a historical record of all test runs can be kept by adding a timestamp to the filename. One could use the following, for example:
# conftest.py
# Required modules
import pytest
import datetime
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    ...
    filename = Path(
        'pytest-logs',
        item._request.node.name,
        f"{datetime.datetime.now().strftime('%Y%m%dT%H%M%S')}.log"
    )
    ...
Additionally, if one would like to modify the log format, one can change it in pytest configuration file as described in the documentation.
# pytest.ini
[pytest]
log_file_level = INFO
log_file_format = %(name)s [%(levelname)s]: %(message)s
My first stackoverflow answer!
I found the answer I was looking for.
I was able to achieve it using a function-scoped fixture like this:
@pytest.fixture(scope="function")
def log(request):
    # strip('.py') removes characters, not the suffix; use splitext instead
    test_path = os.path.splitext(request.node.parent.name)[0]
    test_name = request.node.name
    node_id = request.node.nodeid
    log_file_path = '%s/%s' % (logs_dir, test_path)
    if not os.path.exists(log_file_path):
        os.makedirs(log_file_path)
    logger_obj = logger.make_logger(test_name, log_file_path, node_id)
    yield logger_obj
    # copy the handler list: removing while iterating would skip handlers
    for handler in list(logger_obj.handlers):
        handler.close()
        logger_obj.removeHandler(handler)
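The make_logger helper used above is not part of pytest and is not shown in the question; a minimal sketch of such a helper (the name, signature, and log format are assumptions) could look like this:

```python
import logging
import os


def make_logger(name, path, node_id=None):
    """Hypothetical helper: return a logger writing to <path>/<name>.log."""
    os.makedirs(path, exist_ok=True)
    # Use the node id (if given) as the logger name so parametrized
    # tests with the same method name get distinct loggers.
    logger = logging.getLogger(node_id or name)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(os.path.join(path, name + ".log"))
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    logger.addHandler(handler)
    return logger
```

The fixture's teardown (closing and removing the handlers) matters with a helper like this: without it, rerunning the same test in one session would attach a second FileHandler and duplicate every line.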
In newer pytest versions this can be achieved with set_log_path:
@pytest.fixture(autouse=True)
def manage_logs(request):
    """Set log file name same as test name"""
    request.config.pluginmanager.get_plugin("logging-plugin")\
        .set_log_path(os.path.join('log', request.node.name + '.log'))

error in creating new hook method in PostgreSQL

I have written a plugin module for PostgreSQL for my academic requirements (version: PostgreSQL 9.3.4). I am using hooks to influence planner behaviour in this plugin module. I can use join_search_hook and planner_hook successfully, but I want to create a new hook for a method that does not already have one.
I wanted to define a hook for
void set_baserel_size_estimates(PlannerInfo *root, RelOptInfo *rel)
method in costsize.c
I declared a hook in optimizer/paths.h
typedef void (*size_estimates_hook_type) (PlannerInfo *root, RelOptInfo *rels);
extern PGDLLIMPORT size_estimates_hook_type size_estimates_hook;
and initialized it at the top of allpaths.c:
size_estimates_hook_type size_estimates_hook = NULL;
I added this check in allpaths.c, in the relevant method, to decide whether to invoke the hooked method or not:
if (size_estimates_hook)
    (*size_estimates_hook) (root, rel);
else
    set_baserel_size_estimates(root, rel);
Now coming to plugin module code,
static join_search_hook_type prev_join_search = NULL;
static size_estimates_hook_type prev_size_estimates = NULL;
The first line compiles fine, but the second line gives the error
"error: unknown type name ‘size_estimates_hook_type’"
Am I missing some step in defining a new hook?
note: Plugin is compiled using a dedicated Makefile.
Using the following Makefile to compile the plugin module solved the issue:
MODULES = module_name
ifdef USE_PGXS
PG_CONFIG = <path to /backend/bin/pg_config>
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
else
subdir = contrib/module_name
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif

Is there any hook like "pre vagrant up"?

I'm trying to automate my development boxes with Vagrant. I need to share the Vagrant setup with other developers, so we need to be sure that some boundary conditions are fulfilled before the normal vagrant up process starts.
Is there any hook (like in git, pre-commit or other pre-* scripts) in vagrant? The provision scripts are much too late.
My current setup looks like this:
Vagrantfile
vagrant-templates/
vagrant-templates/apache.conf
vagrant-templates/...
sub-project1/
sub-project2/
I need to be sure that sub-project{1..n} exist, and if not, there should be an error message.
I would prefer a bash-like solution, but I'm open-minded about other solutions.
You could give a try to this Vagrant plugin I've written:
https://github.com/emyl/vagrant-triggers
Once installed, you could put in your Vagrantfile something like:
config.trigger.before :up, :execute => "..."
One option is to put the logic straight into Vagrantfile. Then it gets executed on all vagrant commands in the project. For example something like this:
def ensure_sub_project(name)
  if !File.exist?(File.expand_path("../#{name}", __FILE__))
    # you could raise, do other Ruby magic, or shell out (for a bash script)
    system('clone-the-project.sh', name)
  end
end

ensure_sub_project('some-project')
ensure_sub_project('other-project')

Vagrant.configure('2') do |config|
  # ...
end
It's possible to write your own plugin for vagrant and use action_hook on the machine_action_up, something like:
require 'vagrant-YOURPLUGINNAME/YOURACTIONCLASS'

module VagrantPlugins
  module YOURPLUGINNAME
    class Plugin < Vagrant.plugin('2')
      name 'YOURPLUGINNAME'
      description <<-DESC
        Some description of your plugin
      DESC

      config(:YOURPLUGINNAME) do
        require_relative 'config'
        Config
      end

      action_hook(:YOURPLUGINNAME, :machine_action_up) do |hook|
        hook.prepend(YOURACTIONCLASS.METHOD)
      end
    end
  end
end
Another plugin to check out is vagrant-host-shell, which is only run when provisioning the box. Just add it before other provisioners in the Vagrantfile:
config.vm.provision :host_shell do |shell|
  shell.inline = './clone-projects.sh'
  shell.abort_on_nonzero = true
end
Adding to tmatilai's answer, you can add something like this:
case ARGV[0]
when "provision", "up"
  system "./prepare.sh"
else
  # do nothing
end
into your Vagrantfile so that it will only be run on specific commands.
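Putting the pieces together, a minimal guard for the layout in the question might look like this; it is a sketch in plain Ruby (the helper name and the list of required directories are taken from the question's structure, the rest is illustrative), so it can sit at the top of the Vagrantfile before Vagrant.configure:

```ruby
require 'tmpdir'

# Sub-projects that must exist next to the Vagrantfile before `vagrant up`.
REQUIRED = ['sub-project1', 'sub-project2']

def check_sub_projects(base = File.dirname(__FILE__))
  # Collect every required directory that is missing under `base`.
  missing = REQUIRED.reject { |name| File.directory?(File.join(base, name)) }
  unless missing.empty?
    # abort prints to stderr and exits non-zero, stopping vagrant early.
    abort "Missing sub-projects: #{missing.join(', ')} - clone them first."
  end
end

# Only guard the commands that actually need the checkouts.
check_sub_projects if %w[up provision].include?(ARGV[0])
```

Because the check runs while the Vagrantfile is being evaluated, it fires well before provisioning, which is exactly the "pre vagrant up" point the question asks about.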