What's the best way to modularise reusable functionality across Flutter apps?

TL;DR: Under clean architecture, when should a reusable piece of functionality be shared across different apps via a module vs a template, and how does one decide on the interface of a module?
Background
I'm currently writing some packages (for personal use when freelancing) for common functionality that can be reused across multiple Flutter apps, and I'm wondering how best to organise them. In my apps I follow the clean architecture guidelines, splitting an app by feature, with each feature consisting of data, domain and presentation layers:
|--> lib/
|
|--> feature_a/
| |
| |--> data/
| | |
| | |--> data_sources/
| | |
| | |--> repository_implementations/
| | |
| |--> domain/
| | |
| | |--> repository_contracts/
| | |
| | |--> entities/
| | |
| | |--> use_cases/
| | |
| |--> presentation/
| | |
| | |--> blocs/
| | |
| | |--> screens/
| | |
| | |--> widgets/
| | |
|--> feature_b
| |
| |--> ...
Example
If we take the user authentication feature, for example, I know that:
The entire domain layer, as well as the bloc, will be the same across most apps (email and password validation, authentication/login blocs, etc.)
The data layer will change depending on the backend/database (different providers = different calls)
The screens/widgets will change with different UIs (different apps will have different login and onboarding pages)
Current Approach
My thinking is to write something like a single backend-agnostic "core_auth_kit" package, which contains the domain and bloc, and one package for each backend service I might use, e.g. "firebase_auth_kit", "mongodb_auth_kit", etc. Each backend-specific package will use the "core_auth_kit" as the outward-facing interface.
Here's how I plan on using this. If I'm writing a simple Firebase Flutter app, I will simply import the "firebase_auth_kit" package, and instantiate its auth_bloc at the root of the app inside a MultiBlocProvider, showing the login page if the state is "unauthenticated" and the home page if it's "authenticated".
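To make that concrete, here is a minimal sketch of the wiring I have in mind (the package, bloc, state and page names are placeholders of my own, not a published API):

// Hypothetical wiring: core_auth_kit would define AuthBloc/AuthState,
// firebase_auth_kit the Firebase-backed repository. Names are illustrative.
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:firebase_auth_kit/firebase_auth_kit.dart';

void main() {
  runApp(
    MultiBlocProvider(
      providers: [
        BlocProvider<AuthBloc>(
          // The Firebase package supplies the data layer behind the
          // repository contract declared in core_auth_kit's domain layer.
          create: (_) => AuthBloc(repository: FirebaseAuthRepository()),
        ),
      ],
      child: MaterialApp(
        home: BlocBuilder<AuthBloc, AuthState>(
          builder: (context, state) => state is Authenticated
              ? const HomePage() // app-specific presentation
              : const LoginPage(), // app-specific presentation
        ),
      ),
    ),
  );
}

The idea is that only the data-layer import changes when swapping backends; the domain layer and bloc stay untouched.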
Questions
What is the standard practice for deciding on the boundary of a module? I.e., is this approach of using the "highest common layer" (the bloc, in the authentication example) the way to go?
When should a reusable piece of functionality be extracted as a template vs a module (is my example a good candidate for a module, or should it be a template instead)?

Related

IntelliJ Scala: import works in test folder but not in main folder

I have an IntelliJ project in scala with the following directory structure (I've renamed files/directories for simplicity):
project
|
+--src
| |
| +--main
| | |
| | +--scala
| | |
| | +--'X'
| | |
| | +--'Y.scala'
| +--test
| |
| +--scala
| |
| +--'X'
| |
| +--'YSuite.scala'
|
+--build.sbt
The issue I'm having is that I'm able to import things in the YSuite.scala file that I'm not able to import in Y.scala - specifically, the scala.collection.parallel packages. I have no idea how or why I can import them in the test file but not in the main application file. I need them in the main file for the implementation. Can someone point me in the right direction?
Screenshots are of the Y.scala file, the YSuite.scala file, and the build.sbt file, if they help at all.
As can be seen in the screenshots, the red text indicates that I wasn't able to import it in Y.scala - when I hover over it with my mouse, it simply says cannot resolve symbol parallel. However, I've run the test file with some implementation of the parallel package, and it runs with no problems.
A solution that seems to have worked for me:
Step 1: File -> Invalidate Caches / Restart
Step 2: build again / spin up sbt
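If invalidating caches doesn't fix it, one other thing worth checking (an assumption on my part, since I can't see your build.sbt): on Scala 2.13+ the parallel collections live in a separate module, and a dependency scoped to Test would resolve under src/test but not src/main. A build.sbt sketch (the version number is illustrative):

// make sure the module is on the Compile classpath, not just Test
libraryDependencies += "org.scala-lang.modules" %% "scala-parallel-collections" % "1.0.4"
// a Test-scoped entry like the following would explain test-only imports:
// libraryDependencies += "org.scala-lang.modules" %% "scala-parallel-collections" % "1.0.4" % Test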

How to configure kubernetes for gateway aggregation pattern?

I would like to configure a gateway to aggregate multiple individual requests into a single request, as in this link. However, my use case allows users to create additional services.
A user submits a request:
POST http://gateway_ip/cards/foo
The diagram is as follows:
+------------------+      +-----------------+      +-----------------+
|                  |      |                 |      |                 |
|   transactions   |      |    User info    |      |  dynamic info   |
|                  |      |                 |      |                 |
+------------------+      +-----------------+      +-----------------+
         |                         |                        |
         +--------------+       +--+                        |
                        |       |                           |
                        |       |                           |
                     +--v-------v----+                      |
                     |               |                      |
                     |  /cards/foo   <----------------------+
                     |               |
                     +---------------+
                             |
                             |
                             |
                             +
                            User
Users can start/stop dynamic info on demand. The gateway merges the JSON responses from the various services. For example:
transactions:
{"amount": 4000}
user info:
{ "name": "foo" }
dynamic info:
{ "wifeName": "bar" }
The gateway response is:
{
  "amount": 4000,
  "name": "foo",
  "wifeName": "bar"
}
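For illustration, the merge step I have in mind could look something like this in an OpenResty-based gateway (the location names and fan-out list are hypothetical; assumes lua-cjson is available):

location = /cards/foo {
    content_by_lua_block {
        local cjson = require "cjson.safe"
        -- fan out to the backend services in parallel
        local tx, user, dyn = ngx.location.capture_multi{
            { "/internal/transactions" },
            { "/internal/userinfo" },
            { "/internal/dynamicinfo" },
        }
        -- shallow-merge the JSON bodies into a single object
        local merged = {}
        for _, resp in ipairs({ tx, user, dyn }) do
            local body = resp.status == 200 and cjson.decode(resp.body) or nil
            if type(body) == "table" then
                for k, v in pairs(body) do merged[k] = v end
            end
        end
        ngx.header.content_type = "application/json"
        ngx.say(cjson.encode(merged))
    }
}

The open question is how to make the fan-out list dynamic as services come and go.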
As far as I know:
The sample solution on the Microsoft website defines a fixed backend.
Kubernetes ingress only allows routing of incoming requests.
Is there any solution for gateway aggregation with dynamic back-ends?
Edited
Workaround 1
Referring to the NVIDIA configuration for nginx auto-reload, we can take advantage of a Kubernetes ConfigMap; the steps are as follows:
Create a backend.json configuration that is loaded by Lua on the init_by_lua* event (* is block or file)
Map backend.json in via a ConfigMap and use inotify to monitor ConfigMap changes (see the sketch below)
Provide an API that sends requests to the Kubernetes ConfigMap API so users can change the configuration; the nginx gateway will then auto-reload
However, this link claims that inotify will not work when the shared storage is a FUSE filesystem.
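For reference, the ConfigMap in step 2 might look like this (the name and the backend.json schema are invented for illustration); it would be mounted as a volume into the gateway pod so the Lua code can re-read it on change:

# hypothetical ConfigMap holding the gateway's backend list
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-backends
data:
  backend.json: |
    { "cards/foo": ["transactions", "userinfo", "dynamicinfo"] }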

Module outside application folder in Zend

I have directory structure like this
Application
Config
application.ini
Controllers
modules
default
admin
Bootstrap.php
Install
Controllers
views
Bootstrap.php
index.php
I want Install/Bootstrap.php to run first.
How and where do I define such a configuration?
How do I define a route for the Install module?
I created an installer for one of my reusable ZF sites. I think you are going about it wrong.
This is how I accomplished it:
I actually have 2 different Zend applications. One is strictly for installation; the other is my application, which needs one-time setup.
They both share one library directory.
The .htaccess in the webroot by default points the user to install.php (which calls up the bootstrap for the installation application).
The last step of the installation application is to modify the .htaccess to send all future requests to index.php (the actual application), and to deny all access to the install.php file.
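For illustration, the pre-installation .htaccess might look something like this (a sketch; the exact rules depend on the application):

# webroot/.htaccess before installation: send all requests to install.php
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
RewriteRule ^.*$ install.php [NC,L]

# after setup, the installer rewrites the final rule to target index.php
# and blocks direct access to the installer:
# <Files install.php>
#     Order deny,allow
#     Deny from all
# </Files>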
My Directory Structure
|-application
| |-modules
| | |-default
| | |-admin
| |-config
| |-Bootstrap.php
|-public (webroot)
| |-index.php
| |-install.php
| |-.htaccess
|-private
| |-installer
| | |-application
| | | |-modules
| | | | |-default
| | | |-config
| | | |-Bootstrap.php
|-library
| |-Zend

Emacs source code navigation features

I am working on a large C++ project, and I have been working with Emacs for the last six months.
I have tried to configure CEDET so as to be able to navigate easily, but I have run into some problems.
1. Sometimes semantic does not find certain symbols and sometimes it does... I don't know exactly which files semantic is indexing.
I have tried to use EDE (following the instructions in this paper: http://alexott.net/en/writings/emacs-devenv/EmacsCedet.html), but I have found some problems there too...
I have multiple versions (releases) of the same project, each one in its own folder. How can I tell Emacs which project I am working with?
How can I tell EDE where to look for my header files? Can I specify just a root directory and have semantic search for header files in all its subdirectories?
2. I was working with vim+cscope some time ago, and I remember there was a way to navigate back in the stack of symbols (Ctrl-t). Is there anything like this in Emacs?
P.S. Some data to make the question clearer:
I have multiple releases of the same project.
Each one has its own root directory.
Each project has multiple modules, each inside its own subdirectory.
There are header files in each module.
/home/user/
|
\Release-001
| |
| \makefile
| \ Module-001
| | |
| | \makefile
| | \subdir-001
| | | \header-001.h
| | | \header-002.h
| | \subdir-002
| | | \header-003.h
| \ Module-002
| | |
| | \makefile
| | \subdir-003
| | | \header-004.h
| | | \header-005.h
| | \subdir-004
| | | \header-006.h
|
\Release-002
| |
| \makefile
| \ Module-001
| | |
| | \makefile
| | \subdir-001
| | | \header-001.h
| | | \header-002.h
| | \subdir-002
| | | \header-003.h
| \ Module-002
| | |
| | \makefile
| | \subdir-003
| | | \header-004.h
| | | \header-005.h
| | \subdir-004
| | | \header-006.h
This is the EDE configuration I have in my .emacs:
;; CEDET load commands
(add-to-list 'load-path "~/emacs-dir/cedet/cedet")
(load-file "~/emacs-dir/cedet/cedet/common/cedet.el")
;; EDE: activating mode.
(global-ede-mode t)
;; Project definitions
(ede-cpp-root-project "Release-001"
                      :name "Release-001"
                      :file "~/Release-001/makefile"
                      :include-path '("/")
                      :system-include-path '("~/exp/include")
                      :spp-table '(("SUSE9" . "")))
(ede-cpp-root-project "Release-002"
                      :name "Release-002"
                      :file "~/Release-002/makefile"
                      :include-path '("/")
                      :system-include-path '("~/exp/include")
                      :spp-table '(("SUSE9" . "")))
Just to let you know... I am working with the console version (-nw) of Emacs.
Your configuration is basically correct, except for the :include-path for your projects.
If a given source file says:
#include "Module-001/subdir-002/header-003.h"
then it is ok. If the include says:
#include "subdir-002/header-003.h"
then your :include-path should have
:include-path '("/Module-001" )
in it.
As for what semantic indexes: it will index your current file and all the includes it can find. Use semantic-decoration-mode to see which headers EDE has found for you, to determine whether your configuration is accurate.
It will also index all files in the same directory as the one you are editing, but only in idle time, so if you don't let Emacs be idle, it won't get around to it.
You can speed the indexing operations up if you use CScope as Bozhidar suggests. You can then enable the CScope support in both EDE and the Semantic database. The inclusion of CScope support in Semantic DB is recent, however, so you would need the CVS version of CEDET. That would make sure the whole thing was indexed.
To navigate backward, investigate the help for semantic-mru-bookmark-mode. This tracks your progress through your files on a named location basis that is quite handy and always works.
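If it helps, turning that on globally is just a line or two in your .emacs (assuming your CEDET build ships the global mode):

;; track recently-visited tags so you can navigate back (CEDET)
(global-semantic-mru-bookmark-mode 1)
;; then jump back with M-x semantic-mrub-switch-tags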
I used the Emacs Code Browser in the past when working on C++ projects and found it very satisfactory - in addition to great file and code structure navigation, you get excellent VCS integration (different icons according to the current state of a file in the project). In conjunction with ECB I used cscope for Emacs; since you mentioned it for vim, you'll probably want to use it in Emacs as well.
Alternatively, if you want a simpler solution, you might have a look at Emacs Nav. It supports some fancy stuff as well and has no dependency on semantic or speedbar - you'll only have to use etags/ctags to index your project.

Best Practices for Project Feature Sub-Modules with Mercurial and Eclipse?

I have a couple of ANT projects for several different clients; the directory structure I have for my projects looks like this:
L___standard_workspace
L___.hg
L___validation_commons-sub-proj <- JS Library/Module
| L___java
| | L___jar
| L___old_stuff
| L___src
| | L___css
| | L___js
| | L___validation_commons
| L___src-test
| L___js
L___v_file_attachment-sub-proj <- JS Library/Module
| L___java
| | L___jar
| L___src
| | L___css
| | L___js
| L___src-test
| L___js
L___z_business_logic-sub-proj <- JS Library/Module
| L___java
| | L___jar
| L___src
| L___css
| L___js
L____master-proj <- Master web-deployment module where js libraries are compiled to.
L___docs
L___java
| L___jar
| L___src
| L___AntTasks
| L___build
| | L___classes
| | L___com
| | L___company
| L___dist
| L___nbproject
| | L___private
| L___src
| L___com
| L___company
L___remoteConfig
L___src
| L___css
| | L___blueprint
| | | L___plugins
| | | | L___buttons
| | | | | L___icons
| | | | L___fancy-type
| | | | L___link-icons
| | | | | L___icons
| | | | L___rtl
| | | L___src
| | L___jsmvc
| L___img
| | L___background-shadows
| | L___banners
| | L___menu
| L___js
| | L___approve
| | L___cart
| | L___confirm
| | L___history
| | L___jsmvc
| | L___mixed
| | L___office
| L___stylesheets
| L___swf
L___src-standard
Within the working copy, the modules compile each sub-project into a single JavaScript file that is placed in the JavaScript directory of the master project.
For example, the directories:
validation_commons-sub-proj
v_file_attachment-sub-proj
z_business_logic-sub-proj
...all are combined and minified (sort of like compiled) into a different JavaScript file in the _master-proj/js directory; in the final step, _master-proj is compiled to be deployed to the server.
Now, in regards to the way I'd like to set this up with hg: what I'd like to be able to do is clone the master project and its sub-projects from their own baseline repositories into a client's working copy, so that modules can be added (using hg) to a particular customer's working copy.
Additionally, when I make changes to or fix bugs in one customer's working copy, I would like the option to push the changes/bug fixes back to the master project's or sub-project's baseline repository, for the purpose of eventually pulling the changes/fixes into other customers' working copies that might contain the same bugs.
In this way I will be able to use the same bug fixes across different clients.
However...I am uncertain of the best way to do this using hg and Eclipse.
I read here that you can use hg's Convert Extension to split a sub-directory into a separate project using the --filemap option.
However, I'm still a little confused as to whether it would be better to use the Convert extension or to just house each of the modules in its own repository and check them out into a single workspace for each client.
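For reference, a --filemap split of one module might look like this (the file and repository names are hypothetical):

# filemap.txt: keep only the sub-project and make it the new repository root
include validation_commons-sub-proj
rename validation_commons-sub-proj .

# then convert:
$ hg convert --filemap filemap.txt standard_workspace validation_commons-repo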
Yep, it looks like subrepos are what you are looking for, but I think maybe that is the right answer to the wrong question, and I strongly suspect you'll run into issues similar to those that occur when using svn:externals.
Instead, I would recommend that you "publish" your combined and minified JS files to an artefact repository and use a dependency manager such as Ivy to pull specific versions of your artefacts into your master project. This approach gives you far greater control over the sub-project versions your master project uses.
If you need to make bug fixes to a sub-project for a particular client, you can just make the fixes on the mainline for that sub-project, publish a new version (ideally via an automated build pipeline) and update their master project to use the new version. Oh, you wanted to test the new version with their master project before publishing? In that case, before you push your fix, combine and minify your sub-project locally, publish it to a local repository and have the client's master project pick up that version for your testing.
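To make that concrete, the master project's ivy.xml might declare the sub-projects along these lines (the organisation, module names and revisions are invented for illustration):

<!-- ivy.xml fragment in the master project -->
<dependencies>
    <dependency org="com.mycompany" name="validation_commons" rev="1.2.+" conf="default"/>
    <dependency org="com.mycompany" name="v_file_attachment" rev="1.0.+" conf="default"/>
    <dependency org="com.mycompany" name="z_business_logic" rev="2.1.+" conf="default"/>
</dependencies>

Bumping a rev in one client's master project then picks up a published fix without touching the other clients.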