I would like to configure a gateway to aggregate multiple individual requests into a single request, as described in this link. However, my use case allows users to create additional services.
A user submits a request:
POST http://gateway_ip/cards/foo
The diagram is as follows:
+------------------+ +-----------------+ +-----------------+
| | | | | |
| transactions | | User info | | dynamic info |
| | | | | |
+------------------+ +-----------------+ +-----------------+
| | |
+----------+ +--------+ |
| | |
| | |
+----v-----v---+ |
| | |
| /cards/foo <----------------------------+
| |
+--------------+
|
|
|
+
User
Users can start/stop the dynamic info service on demand. The gateway merges the JSON responses from the various services. For example:
transactions:
{"amount": 4000}
user info:
{ "name": "foo" }
dynamic info:
{ "wifeName": "bar" }
The gateway's response is:
{
"amount": 4000,
"name": "foo",
"wifeName": "bar"
}
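To make the desired merge concrete, here is a minimal sketch of the fan-out-and-merge step (Python with httpx is used purely for illustration; the backend URLs are placeholder assumptions, and the real gateway would read the backend list dynamically):
# Minimal sketch of the fan-out-and-merge behaviour described above.
# The backend base URLs are illustrative placeholders, not real services.
import asyncio
import httpx

BACKENDS = {
    "transactions": "http://transactions",
    "user_info": "http://user-info",
    "dynamic_info": "http://dynamic-info",  # registered/removed by users at runtime
}

async def aggregate(card_id: str) -> dict:
    async with httpx.AsyncClient() as client:
        responses = await asyncio.gather(
            *(client.get(f"{base}/cards/{card_id}") for base in BACKENDS.values())
        )
    merged: dict = {}
    for response in responses:
        merged.update(response.json())  # shallow merge of the JSON bodies
    return merged

if __name__ == "__main__":
    print(asyncio.run(aggregate("foo")))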
As far as I know:
The sample solution on the Microsoft website defines a fixed backend.
Kubernetes ingress only allows routing of incoming requests.
Is there any solution for gateway aggregation with a dynamic back-end?
Edit:
Workaround 1
Referring to the NVIDIA configuration for nginx auto-reload, we can take advantage of a Kubernetes ConfigMap. The steps are as follows:
Create a backend.json configuration which is loaded by Lua in the init_by_lua* phase (where * is block or file).
Mount backend.json from a ConfigMap and use inotify to monitor ConfigMap changes.
Provide an API which sends requests to the Kubernetes ConfigMap API so that users can change the configuration; the nginx gateway will then auto-reload.
However, this link claims that inotify will not work because the shared storage is a FUSE filesystem, so a polling-based approach (sketched below) may be needed instead.
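A minimal sketch of such a polling-based watch, assuming the ConfigMap is mounted at a hypothetical path /etc/gateway/backend.json and leaving the reload hook as a stub:
# Sketch of watching a mounted ConfigMap by polling its content hash instead of
# relying on inotify (which reportedly fails on the FUSE-backed mount).
# The mount path and the reload hook are assumptions for illustration.
import hashlib
import json
import time

CONFIG_PATH = "/etc/gateway/backend.json"  # hypothetical ConfigMap mount path

def content_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def reload_backends(config: dict) -> None:
    # Placeholder: here the nginx gateway would be reloaded (e.g. `nginx -s reload`)
    # or a Lua shared dict updated with the new backend list.
    print("reloading gateway with backends:", list(config))

def watch(interval_seconds: float = 5.0) -> None:
    last_hash = None
    while True:
        current_hash = content_hash(CONFIG_PATH)
        if current_hash != last_hash:
            with open(CONFIG_PATH) as f:
                reload_backends(json.load(f))
            last_hash = current_hash
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watch()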
I was trying to run the following migration through Flyway:
create index concurrently if not exists api_client_system_role_idx2 on profile.api_client_system_role (api_client_id);
create index concurrently if not exists api_client_system_role_idx3 on profile.api_client_system_role (role_type_id);
create index concurrently if not exists api_key_idx2 on profile.api_key (api_client_id);
However, the Flyway sessions were blocking each other and the script is stuck in the "Pending" state.
+-----------+---------+------------------------------+------+---------------------+---------+
| Category  | Version | Description                  | Type | Installed On        | State   |
+-----------+---------+------------------------------+------+---------------------+---------+
| Versioned | 20.1    | add email verification table | SQL  | 2021-11-01 21:55:52 | Success |
| Versioned | 21.1    | create role for doc api      | SQL  | 2021-11-01 21:55:52 | Success |
| Versioned | 22      | create indexes for profile   | SQL  | 2022-10-21 10:23:41 | Success |
| Versioned | 23      | test flyway                  | SQL  |                     | Pending |
+-----------+---------+------------------------------+------+---------------------+---------+
Flyway: Flyway Community Edition 9.3.1 by Redgate
Database: PostgreSQL 14.4
Can you please advise how to properly create indexes concurrently in PostgreSQL?
I've tried simply killing the blocking session and letting the script continue; however, the migration then failed and the scripts stayed in the "Pending" status.
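For context: CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and it waits for other open transactions on the database before it completes. Purely to illustrate that constraint outside of Flyway, here is a minimal sketch using psycopg2 (the connection string is a placeholder; this is not a Flyway configuration):
# Illustration only: CREATE INDEX CONCURRENTLY must run outside a transaction
# block, so the connection is switched to autocommit mode first.
# The DSN below is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=profile user=postgres password=secret host=localhost")
conn.autocommit = True  # CONCURRENTLY refuses to run inside a transaction block
with conn.cursor() as cur:
    cur.execute(
        "create index concurrently if not exists api_client_system_role_idx2 "
        "on profile.api_client_system_role (api_client_id)"
    )
conn.close()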
TL;DR: When following clean architecture, when should a reusable piece of functionality be shared across different apps via a module vs. a template, and how does one decide on the interface of a module?
Background
I'm currently writing some packages (for personal use when freelancing) for common functionality that can be reused across multiple Flutter apps, and I'm wondering what's a good way to organise them. With my apps I follow the clean architecture guidelines, splitting an app by features, with each feature consisting of data, domain and presentation layers:
|--> lib/
|
|--> feature_a/
| |
| |--> data/
| | |
| | |--> data_sources/
| | |
| | |--> repository_implementations/
| | |
| |--> domain/
| | |
| | |--> repository_contracts/
| | |
| | |--> entities/
| | |
| | |--> use_cases/
| | |
| |--> presentation/
| | |
| | |--> blocs/
| | |
| | |--> screens/
| | |
| | |--> widgets/
| | |
|--> feature_b
| |
| |--> ...
Example
If we take the user authentication feature, for example, I know that:
The entire domain layer, as well as the bloc, will be the same across most apps (email and password validation, authentication/login blocs, etc.)
The data layer will change depending on the backend/database (different providers = different calls)
The screens/widgets will change with different UI's (different apps will have different login and onboarding pages)
Current Approach
My thinking is to write something like a single backend-agnostic "core_auth_kit" package, which contains the domain and bloc, and one package for each backend service I might use, e.g. "firebase_auth_kit", "mongodb_auth_kit", etc. Each backend-specific package will use the "core_auth_kit" as the outward-facing interface.
Here's how I plan on using this. If I'm writing a simple Firebase Flutter app, I will simply import the "firebase_auth_kit" package, and instantiate its auth_bloc at the root of the app inside a MultiBlocProvider, showing the login page if the state is "unauthenticated" and the home page if it's "authenticated".
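Roughly, the package boundary described above has the following shape (a minimal sketch written in Python rather than Dart purely for brevity; names like AuthRepository and FirebaseAuthRepository are illustrative):
# Sketch of the "core package + backend-specific package" split described above.
# Shown in Python purely for brevity; the real packages would be Dart/Flutter.
# All names are illustrative.
from abc import ABC, abstractmethod

class AuthRepository(ABC):
    """Contract that would live in the backend-agnostic core_auth_kit package."""

    @abstractmethod
    def sign_in(self, email: str, password: str) -> str:
        """Returns a user id on success."""

class FirebaseAuthRepository(AuthRepository):
    """Implementation that would live in the firebase_auth_kit package."""

    def sign_in(self, email: str, password: str) -> str:
        # The Firebase SDK call would go here; stubbed out in this sketch.
        return "firebase-user-id"

def login_use_case(repository: AuthRepository, email: str, password: str) -> str:
    """Use case from the core package: it depends only on the contract."""
    return repository.sign_in(email, password)

if __name__ == "__main__":
    print(login_use_case(FirebaseAuthRepository(), "foo@example.com", "secret"))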
Questions
What is the standard practice for deciding on the boundary of a module? I.e., is this approach of using the "highest common layer" (the bloc in the authentication example) the way to go?
When should a reusable piece of functionality be extracted as a template vs a module (is my example a good candidate for a module, or should it be a template instead)?
I am building an app that applies a data science model to a SQL database of sensor metrics. For this purpose I chose PipelineDB (based on Postgres), which enables me to build a continuous view on my metrics and apply the model to each new row.
For now, I just want to observe the metrics I collect from the sensor on a dashboard. The table "metrics" looks like this:
+---------------------+--------+---------+------+-----+
| timestamp           | T (°C) | P (bar) | n    | ... |
+---------------------+--------+---------+------+-----+
| 2015-12-12 20:00:00 | 20     | 1.13    | 0.9  |     |
| 2015-12-13 20:00:00 | 20     | 1.132   | 0.9  |     |
| 2015-12-14 20:00:00 | 40     | 1.131   | 0.96 |     |
+---------------------+--------+---------+------+-----+
I'd like to build a dashboard in which I can see all my metrics evolving through time, and even be able to choose which columns to display.
So I found a few tools that could match my needs, such as Grafana or Chronograf for InfluxDB.
But neither of them enables me to plug directly into Postgres and query my table to generate the metric-formatted data required by these tools.
Do you have any advice on what I should do to use such dashboards with such data?
A bit late here, but Grafana now supports PostgreSQL data sources directly: https://grafana.com/docs/features/datasources/postgres. I've used it in several projects and it has been really easy to set up and use.
I am thinking of migrating my website to Google Cloud SQL, and I signed up for a free account (D32).
Upon testing on a table with 23k records, the performance was very poor, so I read that if I moved from the free account to a full paid account I would have access to a faster CPU and HDD... so I did.
Performance is still VERY POOR.
I have been running my own MySQL server for years now, upgrading as needed to handle more and more connections and to gain raw speed (needed because of a legacy application). I heavily optimize tables and configuration, and make heavy use of the query cache, etc.
A few pages of our legacy system have over 1.5k queries per page. Currently I am able to push the MySQL query time (execution and pulling of the data) down to 3.6 seconds for all those queries, meaning that MySQL takes about 0.0024 seconds to execute each query and return the values; not the greatest, but acceptable for those pages.
I uploaded a table involved in many of those queries to Google Cloud SQL. I noticed that an INSERT already takes SECONDS to execute instead of milliseconds, but I thought it might be the sync vs. async setting. I changed it to async and the execution time for the insert doesn't seem to change. Not a big problem for now; I am only testing queries.
I ran a simple SELECT * FROM <table> and noticed that it took over 6 seconds. I thought that maybe the query cache needed to build, so I tried again, and this time it took 4 seconds (excluding network traffic). I ran the same query on my backup server after a restart and with no connections at all, and it took less than 1 second; running it again, 0.06 seconds.
Maybe the problem is the cache being too big... let's try a smaller subset:
select * from <table> limit 5;
to my server: 0.00 seconds
GCS: 0.04 seconds
So I decided to try a dumb SELECT on an empty table with no records at all, just created with only 1 field:
to my server: 0.00 seconds
GCS: 0.03 seconds
Profiling doesn't give any insights, except that the query cache is not running on Google Cloud SQL and that the query execution seems faster but... is not.
My Server:
mysql> show profile;
+--------------------------------+----------+
| Status | Duration |
+--------------------------------+----------+
| starting | 0.000225 |
| Waiting for query cache lock | 0.000116 |
| init | 0.000115 |
| checking query cache for query | 0.000131 |
| checking permissions | 0.000117 |
| Opening tables | 0.000124 |
| init | 0.000129 |
| System lock | 0.000124 |
| Waiting for query cache lock | 0.000114 |
| System lock | 0.000126 |
| optimizing | 0.000117 |
| statistics | 0.000127 |
| executing | 0.000129 |
| end | 0.000117 |
| query end | 0.000116 |
| closing tables | 0.000120 |
| freeing items | 0.000120 |
| Waiting for query cache lock | 0.000140 |
| freeing items | 0.000228 |
| Waiting for query cache lock | 0.000120 |
| freeing items | 0.000121 |
| storing result in query cache | 0.000116 |
| cleaning up | 0.000124 |
+--------------------------------+----------+
23 rows in set, 1 warning (0.00 sec)
Google Cloud SQL:
mysql> show profile;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000061 |
| checking permissions | 0.000012 |
| Opening tables | 0.000115 |
| System lock | 0.000019 |
| init | 0.000023 |
| optimizing | 0.000008 |
| statistics | 0.000012 |
| preparing | 0.000005 |
| executing | 0.000021 |
| end | 0.000024 |
| query end | 0.000007 |
| closing tables | 0.000030 |
| freeing items | 0.000018 |
| logging slow query | 0.000006 |
| cleaning up | 0.000005 |
+----------------------+----------+
15 rows in set (0.03 sec)
Keep in mind that I connect to both servers remotely from a machine located in VA, and my server is located in Texas (even if that should not matter much).
What am I doing wrong? Why do simple queries take this long? Am I missing or not understanding something here?
As of right now I won't be able to use Google Cloud SQL, because a page with 1500 queries would take way too long (circa 45 seconds).
I know this question is old but....
Cloud SQL has poor support for MyISAM tables; it's recommended to use InnoDB.
We had poor performance when migrating a legacy app; after reading through the docs and contacting paid support, we had to migrate the tables to InnoDB. The lack of a query cache was also a killer.
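For reference, a minimal sketch of how one might find MyISAM tables and convert them, assuming mysql-connector-python and placeholder connection details:
# Sketch: list MyISAM tables in the current schema and convert them to InnoDB.
# Connection details are placeholders for illustration.
import mysql.connector

conn = mysql.connector.connect(
    host="cloud-sql-host", user="app", password="secret", database="mydb"
)
cur = conn.cursor()
cur.execute(
    "SELECT TABLE_NAME FROM information_schema.TABLES "
    "WHERE TABLE_SCHEMA = DATABASE() AND ENGINE = 'MyISAM'"
)
for (table_name,) in cur.fetchall():
    # ALTER TABLE rebuilds the table, so run this in a maintenance window.
    cur.execute(f"ALTER TABLE `{table_name}` ENGINE=InnoDB")
conn.close()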
You may also find later on that you'll need to tweak the MySQL conf via the 'flags' in the Google console. One example: 'wait_timeout' is set too high by default (IMO).
Hope this helps someone :)
The query cache is not yet a feature of Cloud SQL; this may explain the results. However, I recommend closing this question, as it is quite broad and doesn't fit the format of a neat and tidy Q&A. There are just too many variables not mentioned here, and it isn't clear what a decisive "answer" would look like for such a general optimization question with so many variables at play.
I have a couple of ANT projects for several different clients; the directory structure I have for my projects looks like this:
L___standard_workspace
L___.hg
L___validation_commons-sub-proj <- JS Library/Module
| L___java
| | L___jar
| L___old_stuff
| L___src
| | L___css
| | L___js
| | L___validation_commons
| L___src-test
| L___js
L___v_file_attachment-sub-proj <- JS Library/Module
| L___java
| | L___jar
| L___src
| | L___css
| | L___js
| L___src-test
| L___js
L___z_business_logic-sub-proj <- JS Library/Module
| L___java
| | L___jar
| L___src
| L___css
| L___js
L____master-proj <- Master web-deployment module where js libraries are compiled to.
L___docs
L___java
| L___jar
| L___src
| L___AntTasks
| L___build
| | L___classes
| | L___com
| | L___company
| L___dist
| L___nbproject
| | L___private
| L___src
| L___com
| L___company
L___remoteConfig
L___src
| L___css
| | L___blueprint
| | | L___plugins
| | | | L___buttons
| | | | | L___icons
| | | | L___fancy-type
| | | | L___link-icons
| | | | | L___icons
| | | | L___rtl
| | | L___src
| | L___jsmvc
| L___img
| | L___background-shadows
| | L___banners
| | L___menu
| L___js
| | L___approve
| | L___cart
| | L___confirm
| | L___history
| | L___jsmvc
| | L___mixed
| | L___office
| L___stylesheets
| L___swf
L___src-standard
Within the working copy, each sub-project module is compiled into a single JavaScript file that is placed in the JavaScript directory of the master project.
For example, the directories:
validation_commons-sub-proj
v_file_attachment-sub-proj
z_business_logic-sub-proj
...are each combined and minified (sort of like compiled) into a separate JavaScript file in the _master-proj/js directory; in the final step the _master-proj is compiled to be deployed to the server.
Now, in regards to the way I'd like to set this up with hg: I'd like to be able to clone the master project and its sub-projects from their own baseline repositories into a client's working copy, so that modules can be added (using hg) to a particular customer's working copy.
Additionally, when I make changes to or fix bugs in one customer's working copy, I would like to be able to optionally push the changes/bug fixes back to the master project's or sub-project's baseline repository, so that the changes/fixes can eventually be pulled into other customers' working copies that might contain the same bugs.
In this way I will be able to utilize the same bug fixes across different clients.
However...I am uncertain of the best way to do this using hg and Eclipse.
I read here that you can use hg's Convert Extension to split a sub-directory into a separate project using the --filemap option.
However, I'm still a little confused as to whether it would be better to use the Convert extension or to just house each of the modules in its own repository and check them out into a single workspace for each client.
Yep, it looks like subrepos are what you are looking for, but I think that may be the right answer to the wrong question, and I strongly suspect you'll run into issues similar to those that occur when using svn:externals.
Instead, I would recommend that you "publish" your combined and minified JS files to an artefact repository and use a dependency manager such as Ivy to pull specific versions of your artefacts into your master project. This approach gives you far greater control over the sub-project versions your master project uses.
If you need to make bug fixes to a sub-project for a particular client, you can just make the fixes on the mainline for that sub-project, publish a new version (ideally via an automated build pipeline) and update their master project to use the new version. Oh, you wanted to test the new version with their master project before publishing? In that case, before you push your fix, combine and minify your sub-project locally, publish it to a local repository and have the client's master project pick up that version for your testing.