How can I make a Symfony 4 command be registered only in the dev environment and disabled in prod?

In my App I have a helper class App\Command\GenerateFixturesCommand that provides a command named my-nice-project:generate-fixtures.
This command consumes a service of my own project named App\Services\CatalogFixtureGenerator that generates 1000 random PDF documents for testing while developing the app.
To do so, this service uses the joshtronic\LoremIpsum class, a third-party library that is required in composer only in dev: I require it under composer's require-dev.
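For reference, the dev-only requirement in composer.json would look something like this (the package name below is my assumption of what ships the joshtronic\LoremIpsum class; use whichever package you actually installed):

{
    "require-dev": {
        "joshtronic/php-loremipsum": "^1.0"
    }
}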
So the injection is:
I run my GenerateFixturesCommand.
Before that, the system transparently locates my CatalogFixtureGenerator and injects it into the command.
Before that, the system transparently locates the LoremIpsum third-party service and injects it into my fixture generator service.
All is autowired.
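A rough sketch of that chain, as I read the description above (constructor shapes and property names are illustrative, not taken from the real project):

// src/Command/GenerateFixturesCommand.php (sketch)
namespace App\Command;

use App\Services\CatalogFixtureGenerator;
use Symfony\Component\Console\Command\Command;

class GenerateFixturesCommand extends Command
{
    private $fixtureGenerator;

    // Autowired: Symfony resolves CatalogFixtureGenerator for the command
    public function __construct(CatalogFixtureGenerator $fixtureGenerator)
    {
        parent::__construct();
        $this->fixtureGenerator = $fixtureGenerator;
    }
}

// src/Services/CatalogFixtureGenerator.php (sketch)
namespace App\Services;

use joshtronic\LoremIpsum;

class CatalogFixtureGenerator
{
    private $loremIpsum;

    // Autowired: joshtronic\LoremIpsum only exists when require-dev packages are installed
    public function __construct(LoremIpsum $loremIpsum)
    {
        $this->loremIpsum = $loremIpsum;
    }
}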
When I deploy to prod and run composer install --no-dev --optimize-autoloader, the LoremIpsum class is of course not installed.
But when I clear the cache with APP_ENV=prod php bin/console cache:clear, the framework finds the command and cannot inject the autowired dependencies:
[WARNING] Some commands could not be registered:
In CatalogsFixtureGenerator.php line 26:
Class 'joshtronic\LoremIpsum' not found
This my-nice-project:generate-fixtures command is never going to be used on the production server.
Question
How can I "disable" the command in prod?
I mean: How can I tell the framework that the class GenerateFixturesCommand should not be loaded nor its autowired dependencies, and neither of them should be autowired in prod?

Override the isEnabled() method of Command.
For example:
public function isEnabled(): bool
{
    // disable on prod
    if ($this->appKernel->getEnvironment() === 'prod') {
        return false;
    }

    return true;
}
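This assumes the kernel is injected into the command so that $this->appKernel is available; a minimal sketch of that wiring, relying on Symfony's default autowiring (property and argument names are illustrative):

// Sketch: inject the kernel so isEnabled() can inspect the environment
namespace App\Command;

use Symfony\Component\Console\Command\Command;
use Symfony\Component\HttpKernel\KernelInterface;

class GenerateFixturesCommand extends Command
{
    private $appKernel;

    public function __construct(KernelInterface $appKernel)
    {
        parent::__construct();
        $this->appKernel = $appKernel;
    }

    public function isEnabled(): bool
    {
        // Report the command as unavailable when running in prod
        return $this->appKernel->getEnvironment() !== 'prod';
    }
}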

In my last project, I needed some commands to work only in the dev environment. You can use the getenv function to achieve this:
# src/Command/SomeCommand.php
...
public function __construct()
{
    parent::__construct();

    if (getenv("APP_ENV") !== "dev") {
        exit('This command works only in the "dev" environment.');
    }
}
This will do the trick.
Code fun :)

The solution @gusDeCooL suggests doesn't work with lazy-loaded commands (at least not for me).
I ended up implementing the isEnabled() method anyway, but then I added a guard in execute():
<?php

namespace App\Command;

use Symfony\Component\Console\Attribute\AsCommand;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Console\Style\SymfonyStyle;

#[AsCommand(
    name: 'command:name',
    description: 'Some description',
)]
class CommandName extends Command
{
    public function isEnabled(): bool
    {
        return 'dev' === getenv('APP_ENV');
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $io = new SymfonyStyle($input, $output);

        if (!$this->isEnabled()) {
            $io->error('This command is only available in the `dev` environment.');
            exit(1);
        }

        // the rest

        return Command::SUCCESS;
    }
}

Related

CakePHP 4.x - bin/cake.php gets deleted after running a command

In a CakePHP app, I need to periodically retrieve data from an API. I also need the admin to be able to launch the update manually.
So I created a model accessing the data; this way I can use it in a controller and in the command/cronjob.
In the controller, no problem, it's running fine.
The problem is when I run the task from the shell: the bin/cake.php file gets deleted and I can't do anything afterwards (bake, run tasks, etc.).
Here's my code (a simplified version of what's failing):
// IN Table/ApisTable.php
namespace App\Model\Table;

use Cake\ORM\Table;
use Cake\ORM\TableRegistry;
use Cake\Http\Client;

class ApisTable extends Table
{
    public function getUrl()
    {
        $http = new Client();
        $response = $http->get('http://api.tvmaze.com/shows/1');

        return $response->getStringBody();
    }
}

// IN Entity/Api.php
namespace App\Model\Entity;

use Cake\ORM\Entity;

class Api extends Entity
{
}

// IN Command/TestCommand.php
namespace App\Command;

use Cake\Command\Command;
use Cake\Console\Arguments;
use Cake\Console\ConsoleIo;

class TestCommand extends Command
{
    public function execute(Arguments $a, ConsoleIo $io)
    {
        $this->loadModel('Apis');
        $data = $this->Apis->getUrl();
        $io->out($data);
    }
}
The file is deleted when I run bin/cake test.
Any ideas?
Solved:
After trying a few things on another computer, I solved it. I'll leave this here in case someone else needs it.
It was my antivirus that placed the file in quarantine when the command reached an external URL.

EF Core Migrations in Azure Function startup

According to https://learn.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection, the service provider should not be used until after the startup has completed running. Indeed, if I try to get a registered service, it fails.
Example:
[assembly: FunctionsStartup(typeof(Startup))]

namespace Fx {
    public sealed class Startup : FunctionsStartup {
        public override void Configure(IFunctionsHostBuilder builder) {
            var configurationBuilder = new ConfigurationBuilder();
            configurationBuilder.AddEnvironmentVariables();
            var configuration = configurationBuilder.Build();

            builder.Services.AddInfrastructure(configuration);
            builder.Services.AddApplication();

            var serviceProvider = builder.Services.BuildServiceProvider();
            DependencyInjection.AddDatabase(serviceProvider).GetAwaiter().GetResult();
        }
    }
}

public static class DependencyInjection {
    public static async Task AddDatabase(IServiceProvider services) {
        using var scope = services.CreateScope();
        var serviceProvider = scope.ServiceProvider;
        var context = serviceProvider.GetRequiredService<ApplicationDbContext>();
        // Error generated here
        if (context.Database.IsSqlServer()) {
            await context.Database.MigrateAsync();
        }
        await ApplicationDbContextSeed.SeedSamplePersonnelDataAsync(context);
    }

    public static IServiceCollection AddInfrastructure(
        this IServiceCollection services,
        IConfiguration configuration) {
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(configuration.GetConnectionString("DefaultConnection"),
                b => b.MigrationsAssembly(typeof(ApplicationDbContext).Assembly.FullName)));
        services.AddScoped<IApplicationDbContext>(provider => provider.GetService<ApplicationDbContext>());

        return services;
    }
}
This produces the following error
Microsoft.EntityFrameworkCore: No database provider has been configured for this DbContext. A provider can be configured by overriding the DbContext.OnConfiguring method or by using AddDbContext on the application service provider. If AddDbContext is used, then also ensure that your DbContext type accepts a DbContextOptions<TContext> object in its constructor and passes it to the base constructor for DbContext.
Is there a good option for migrating and seeding during startup?
The easiest way I found to run code after startup was to register a custom IWebJobsStartup using the WebJobsStartupAttribute (the FunctionsStartupAttribute actually also inherits from this attribute). In the WebJobsStartup class you'll need to register your extension using AddExtension, where you are able to use dependency injection and seed your database. My code:
[assembly: WebJobsStartup(typeof(DbInitializationService), "DbSeeder")]

namespace Our.Database.Seeder
{
    public class DbInitializationService : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            builder.AddExtension<DbSeedConfigProvider>();
        }
    }

    [Extension("DbSeed")]
    internal class DbSeedConfigProvider : IExtensionConfigProvider
    {
        private readonly IServiceScopeFactory _scopeFactory;

        public DbSeedConfigProvider(IServiceScopeFactory scopeFactory)
        {
            _scopeFactory = scopeFactory;
        }

        public void Initialize(ExtensionConfigContext context)
        {
            using var scope = _scopeFactory.CreateScope();
            var dbContext = scope.ServiceProvider.GetService<YourDbContext>();
            dbContext.Database.EnsureCreated();
            // Further DB seeding, etc.
        }
    }
}
According to your code, I assume that you're building something aligned with the CleanArchitecture repository on GitHub: https://github.com/jasontaylordev/CleanArchitecture
The main difference between this repo and your approach is that you're obviously not using ASP.NET, which is not a problem at all, but it requires a little bit more configuration work.
The article already mentioned (https://markheath.net/post/ef-core-di-azure-functions) refers to another blog post (https://dev.to/azure/using-entity-framework-with-azure-functions-50aa), which briefly explains that Entity Framework Core migrations are not capable of auto-discovering your migrations in an Azure Function. Therefore, you need to implement an IDesignTimeDbContextFactory. I also stumbled upon it in the Microsoft docs:
https://learn.microsoft.com/en-us/ef/core/cli/dbcontext-creation?tabs=dotnet-core-cli#from-a-design-time-factory
You could, for example, place it inside your Infrastructure\Persistence\Configurations folder. (Once again, I'm only assuming that you're following the CleanArchitecture repo structure.)
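A minimal sketch of such a factory (assuming your ApplicationDbContext has a constructor accepting DbContextOptions<ApplicationDbContext>; the CleanArchitecture template's context takes additional services, so adapt accordingly, and swap the connection-string lookup for whatever configuration source you use at design time):

using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

namespace Infrastructure.Persistence.Configurations
{
    // Used only by the EF Core tooling (dotnet ef / Add-Migration),
    // not by the Functions host at runtime.
    public class ApplicationDbContextFactory : IDesignTimeDbContextFactory<ApplicationDbContext>
    {
        public ApplicationDbContext CreateDbContext(string[] args)
        {
            var optionsBuilder = new DbContextOptionsBuilder<ApplicationDbContext>();

            // Assumption: the connection string is exposed as an environment variable at design time
            optionsBuilder.UseSqlServer(Environment.GetEnvironmentVariable("DefaultConnection"));

            return new ApplicationDbContext(optionsBuilder.Options);
        }
    }
}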
DI in Azure Functions
Caveats
A series of registration steps run before and after the runtime processes the startup class. Therefore, keep in mind the following items:
The startup class is meant for only setup and registration. Avoid using services registered at startup during the startup process. For instance, don't try to log a message in a logger that is being registered during startup. This point of the registration process is too early for your services to be available for use. After the Configure method is run, the Functions runtime continues to register additional dependencies, which can affect how your services operate.
The dependency injection container only holds explicitly registered types. The only services available as injectable types are what are set up in the Configure method. As a result, Functions-specific types like BindingContext and ExecutionContext aren't available during setup or as injectable types.

How to use an mvvmcross plugin such as the file plugin

I'm using MvvmCross version 6.4.1 to develop an app for iOS, Android, and WPF.
I've searched all over for how to use plugins. There seem to be no code examples. The documentation says to install the NuGet package in both my Core and UI application projects, which I did. Is there any special IoC registration, setup, or loading that needs to be done before I can use the plugin, and how do I go about using it? Does it get injected in the constructor, or do I have to manually pull it from the IoC container or new() it up?
I've installed the NuGet package for the File plugin into my WPF UI and Core projects. I added IMvxFileStore to one of my Core project's service constructors, thinking it automagically gets added to the DI container, but it doesn't seem to get injected.
namespace My.Core.Project.Services
{
    public class SomeService : ISomeService
    {
        private IMvxFileStore mvxFileStore;

        public SomeService(IMvxFileStore mvxFileStore)
        {
            this.mvxFileStore = mvxFileStore;
        }

        public string SomeMethod(string somePath)
        {
            mvxFileStore.TryReadTextFile(somePath, out string content);
            return content;
        }
    }
}
App.xaml.cs
using MvvmCross.Core;
using MvvmCross.Platforms.Wpf.Views;
...

public partial class App : MvxApplication
{
    protected override void RegisterSetup()
    {
        this.RegisterSetupType<Setup<Core.App>>();
    }
}
App.cs
using MvvmCross;
using MvvmCross.ViewModels;
using My.Core.Project.Services;

public class App : MvxApplication
{
    public override void Initialize()
    {
        Mvx.IoCProvider.RegisterType<ISomeService, SomeService>();
        RegisterCustomAppStart<AppStart>();
    }
}
AppStart.cs
using MvvmCross.Exceptions;
using MvvmCross.Navigation;
using MvvmCross.ViewModels;
using My.Core.Project.ViewModels;
using System;
using System.Threading.Tasks;
....

public class AppStart : MvxAppStart
{
    public AppStart(IMvxApplication application, IMvxNavigationService navigationService)
        : base(application, navigationService)
    {
    }

    public override Task NavigateToFirstViewModel(object hint = null)
    {
        try
        {
            return NavigationService.Navigate<FirstPageViewModel>();
        }
        catch (Exception e)
        {
            throw e.MvxWrap("Some error message {0}", typeof(FirstPageViewModel).Name);
        }
    }
}
Setup.cs in WPF project
using MvvmCross;
using MvvmCross.Base;
using MvvmCross.Platforms.Wpf.Core;
using MvvmCross.Plugin.File;
using MvvmCross.Plugin.Json;
using MvvmCross.ViewModels;
using My.Wpf.Project.Services;
...

public class Setup<T> : MvxWpfSetup
{
    public Setup() : base() { }

    protected override IMvxApplication CreateApp()
    {
        return new Core.App();
    }

    protected override void InitializeFirstChance()
    {
        base.InitializeFirstChance();
        Mvx.IoCProvider.RegisterType<ISomeWpfSpecificService>(() => new SomeWpfSpecificService());
    }

    protected override void InitializeLastChance()
    {
        base.InitializeLastChance();
    }
}
I'm expecting my service to load, but instead I get the error message:
MvxIoCResolveException: Failed to resolve parameter for parameter mvxJsonConverter of type IMvxJsonConverter
NOTE: I get the same error message for both the File and Json plugins; whichever plugin is listed first in the constructor triggers the error when the app tries to load.
Am I properly using or loading the plugin?
UPDATE: I manually registered the plugins in the UI project's Setup.cs and it is working, but I am not sure if this is the proper way to do it.
WPF UI project Setup.cs
using MvvmCross;
using MvvmCross.Base;
using MvvmCross.Platforms.Wpf.Core;
using MvvmCross.Plugin.File;
using MvvmCross.Plugin.Json;
using MvvmCross.ViewModels;
using My.Wpf.Project.Services;
...

public class Setup<T> : MvxWpfSetup
{
    public Setup() : base() { }

    protected override IMvxApplication CreateApp()
    {
        return new Core.App();
    }

    protected override void InitializeFirstChance()
    {
        base.InitializeFirstChance();
        Mvx.IoCProvider.RegisterType<ISomeWpfSpecificService>(() => new SomeWpfSpecificService());
        Mvx.IoCProvider.RegisterType<IMvxFileStore, MvxFileStoreBase>();
        Mvx.IoCProvider.RegisterType<IMvxJsonConverter, MvxJsonConverter>();
    }

    protected override void InitializeLastChance()
    {
        base.InitializeLastChance();
    }
}
Yes, you are using the plugin properly, and I think that for now your solution of manually registering the plugins is viable.
The root of the problem is located in the MvxSetup class. This class contains the method LoadPlugins, which is responsible for loading the MvvmCross plugins that are referenced by your UI project. This is how LoadPlugins determines which plugins to load:
Get all assemblies that have been loaded into the execution context of the application domain.
Find types within these assemblies which contain the MvxPluginAttribute.
Now the problem occurs in step 1. In a .NET Framework project, by default, your referenced assemblies won't be loaded into the execution context until you actually use them in your code. So if you don't use anything from your MvvmCross.Plugin.File reference in your UI project, it won't be loaded into your execution context, it won't be found in step 1, and thus it won't be registered by LoadPlugins. (Good read: when does a .NET assembly dependency get loaded.)
One way I have tested this is the following:
protected override void InitializeFirstChance()
{
    // Because a type from the MvvmCross.Plugin.File.Platforms.Wpf reference is
    // used here, the assembly will now get loaded in the execution context
    var throwaway = typeof(Plugin);

    base.InitializeFirstChance();
}
With the above code you don't have to manually register the Plugin.
There has been a pull request to fix this in the MvvmCross framework, but it was later reverted since it caused problems on other platforms.
On other platforms the plugin assemblies get loaded into the execution context without any tricks, so I would say that updating the MvvmCross documentation to state that you have to register your plugins manually for WPF would be useful for other developers in the future.

Nunit3 how to change testcase name based on parameters passed from TestFixtureSource

I'm using NUnit 3.0 and TestFixtureSource to run test cases inside a fixture multiple times with different parameters/configurations (I do want to do this at TestFixture level). Simple example:
[TestFixtureSource(typeof(ConfigurationProvider))]
public class Fixture
{
    public Fixture(Configuration configuration)
    {
        _configuration = configuration;
    }

    private Configuration _configuration;

    [Test]
    public void Test()
    {
        // do something with _configuration
        Assert.Fail();
    }
}
Let's say Test() fails for one of the configurations and succeeds for another. In the run report file and in Visual Studio's Test Explorer the name for both the failed and the succeeded runs will be displayed as just Test(), which doesn't tell me anything about which setup caused issues.
Is there a way to affect the test cases names in this situation (i.e. prefix its name per fixture run/configuration)? As a workaround I'm currently printing to the results output before each test case fires but I would rather avoid doing that.
Since NUnit 3.0 is in beta and this feature is fairly new I wasn't able to find anything in the docs. I found TestCaseData but I don't think it's tailored to be used with fixtures just yet (it's designed for test cases).
I can't find a way to change the test name, but it should not be necessary, because NUnit 3 constructs the test name by including a description of the test fixture.
The example class Fixture from the question can be used unchanged if Configuration and ConfigurationProvider have implementations like this:
public class Configuration
{
    public string Description { get; }

    public Configuration(string description)
    {
        Description = description;
    }

    public override string ToString()
    {
        return Description;
    }
}

public class ConfigurationProvider : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        yield return new Configuration("Foo");
        yield return new Configuration("Bar");
        yield return new Configuration("Baz");
    }
}
The 'trick' is to make sure the constructor parameter of the fixture is a string or has a ToString method that gives a sensible description of the fixture.
If you are using the NUnit 3 Test Adapter in Visual Studio, the test fixtures will be displayed as Fixture(Foo), Fixture(Bar) and Fixture(Baz), so you can easily distinguish between their tests. The XML output from nunit3-console.exe also uses descriptive names, e.g. fullname=MyTests.Fixture(Bar).Test:
<test-case id="0-1003" name="Test" fullname="MyTests.Fixture(Bar).Test" methodname="Test" classname="MyTests.Fixture" runstate="Runnable" result="Failed" ... >
  <failure>
    <message><![CDATA[]]></message>
    <stack-trace><![CDATA[at MyTests.Fixture.Test() in ... ]]></stack-trace>
  </failure>
  ...
</test-case>
One way to perform such actions is to put find-and-replace tokens in the source code and dynamically build the test libraries before execution using command-line MSBuild. The high-level steps are:
Define test case names as sometest_TOKEN in the source, then use a command-line tool like fnr.exe to replace _TOKEN with whatever you like, for example sometest_build2145.
Compile the DLL using MSBuild, for example msbuild /t:Rebuild mytestproj.sln. Thereafter, execute all test cases in mytestproj.dll.

How to work with dependency injection within SOA?

I'm currently using SOA; I have a bunch of services (ArticleService, CommentService, UserService, etc.).
I also have a ConfigurationService which is filled from an XML configuration file.
I'm using Zend Framework.
This configuration service is needed in some of my services, and I'm using dependency injection. Is it good practice to add the ConfigurationService to the constructor of most of my services to be able to fetch the global configuration?
Thank you for your feedback.
I would say, no, don't pass the config container - neither as a service nor as an array nor a Zend_Config instance - in the constructor of your other services. I would keep the injection (whether by constructor or by setter) for those services focused on the actual objects/collaborators/data they actually need.
So, for example, an ArticleService might depend upon an ArticleRepository interface/object or on an ArticleMapper or on a db adapter. Let the constructor/setter signatures for the ArticleService reflect what it truly needs.
Instead, what I would do is during Bootstrap, create some kind of factory object - perhaps as an application resource - that accepts in its constructor your config data/object/service (or even better, the bootstrap instance itself, from which you could get, not just your config data, but also any application resources, like a db adapter, that were created during the bootstrap process). Then write methods on your factory object that create/deliver the other services you need. Internally, the factory maintains a registry of already created services so that it can lazy-create instances where required.
A snippet of what I have in mind might be as follows:
Bootstrap snippet:
class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
{
    protected function _initFactory()
    {
        $factory = new My_Factory($this);

        return $factory;
    }
}
Then the factory:
class My_Factory
{
    protected $_registry;
    protected $_bootstrap;

    public function __construct($bootstrap)
    {
        $this->_bootstrap = $bootstrap;
    }

    public function getDbAdapter()
    {
        if (!isset($this->_registry['dbAdapter'])) {
            $this->_bootstrap->bootstrap('db'); // probably using app resource
            $this->_registry['dbAdapter'] = $this->_bootstrap->getResource('db');
        }

        return $this->_registry['dbAdapter'];
    }

    public function getArticleService()
    {
        if (!isset($this->_registry['articleService'])) {
            $dbAdapter = $this->getDbAdapter();
            $this->_registry['articleService'] = new My_ArticleService($dbAdapter);
        }

        return $this->_registry['articleService'];
    }

    public function getTwitterService()
    {
        if (!isset($this->_registry['twitterService'])) {
            $options = $this->_bootstrap->getOptions();
            $user = $options['twitter']['user'];
            $pass = $options['twitter']['pass'];
            $this->_registry['twitterService'] = new My_TwitterService($user, $pass);
        }

        return $this->_registry['twitterService'];
    }
}
Then in a controller, you could grab an ArticleService instance:
class SomeController extends Zend_Controller_Action
{
    protected $_factory;

    public function init()
    {
        $this->_factory = $this->getInvokeArg('bootstrap')->getResource('factory');
    }

    public function someAction()
    {
        $articleService = $this->_factory->getArticleService();
        $this->view->articles = $articleService->getRecentArticles(5); // for example
    }
}
The upshot here is that each service explicitly identifies the collaborators it needs and the factory is a single place that takes care of creating/injecting all those collaborators.
Finally, I confess that I am just spitballing here. To me, this is essentially a rudimentary dependency injection container; in that sense, using a fully-featured DIC - perhaps the Symfony DIC or the new Zend\Di package in ZF2 - might be better. But after many months of struggling with all the best-practice recommendations to inject your dependencies, this is what I have come up with. If it's goofy or just plain wrong, please (please!) straighten me out. ;-)