Deleting (dangling) forwarding-rules in gcloud

We have a script that spawns clusters, IP addresses and DNS records for test environments in Google Cloud, using gcloud container clusters create etc. Then we deploy to Kubernetes and add a load balancer in front of the proxy.
There is also a script that later deletes the cluster again, together with the IP and DNS records.
This month I noticed my bill was double the usual, and after some research I found out I paid a large amount for "Network Load Balancing: Forwarding Rule Additional Service Charge".
I went to Networking > Load Balancing > Forwarding rules, and there I see tons of forwarding rules pointing to clusters that don't even exist anymore. These forwarding rules (which I never created explicitly, only indirectly through kubectl, I suppose) are still there.
The problem is that some of them are actually in use and should not be removed. Is there any way I can filter on dangling forwarding rules? I tried, but couldn't find a way to do this manually.
I tried:
gcloud compute forwarding-rules list
gcloud compute forwarding-rules list --filter=dangling #returns nothing
gcloud compute target-pools list
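None of these exposes a ready-made "dangling" filter. One way to at least inspect each rule together with its target is to dump them as JSON; a rough Node sketch of that (assuming gcloud is installed and authenticated):
const { execFile } = require('child_process');

// Print every forwarding rule with its target, so rules pointing at
// deleted clusters can be spotted by hand.
execFile('gcloud', ['compute', 'forwarding-rules', 'list', '--format=json'],
  (err, stdout) => {
    if (err) throw err;
    JSON.parse(stdout).forEach(rule => console.log(rule.name, '->', rule.target));
  });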

To make sure we no longer pay for leftovers that never get cleaned up, I made a simple Node script that removes dangling artifacts from the Compute Engine side. I'm not sure it's watertight, because I take a shortcut when matching forwarding-rule and target-pool references.
Missing parts in this script, which you'd have to implement yourself (rough sketches follow after the script):
const Promise = require('bluebird')
an extension method lastItem() on Array.prototype
cmd.get(), which wraps child_process.execFile() in a Promise
this._generateFlags(), which turns an options object into gcloud flags
script:
/**
 * Remove leftover Compute Engine items indirectly created by Kubernetes
 * but not removed automatically.
 */
deleteDanglingComputeItems() {
  return this.getComputeInstances()
    .then(instances => {
      var dict = {};
      instances.forEach(x => dict[x.name] = true);
      // A target pool is dangling when none of its instances still exist.
      var killPoolsPromise = this.getComputeTargetpools()
        .filter(pool => !pool.instances.some(inst => dict[inst.split("/").lastItem()] === true))
        .then(pools => {
          var poolDict = {};
          pools.forEach(x => poolDict[x.name] = true);
          // Delete forwarding rules that point at a dangling pool first;
          // a pool cannot be deleted while a rule still references it.
          return this.getComputeForwardingRules()
            .filter(rule => poolDict[rule.target.split('/').lastItem()] === true)
            .map(rule => this.deleteComputeForwardingRule(rule.name, rule.region))
            .then(() => pools);
        })
        .map(pool => this.deleteComputeTargetpool(pool.name, pool.region));
      // A firewall rule is dangling when none of its target tags match a live instance.
      var firewallPromise = this.getComputeFirewallRules()
        .filter(rule => rule.targetTags && !rule.targetTags.some(target => dict[target] === true))
        .then(rules => this.deleteFirewallRules(rules.map(x => x.name)));
      return Promise.all([killPoolsPromise, firewallPromise]);
    });
}
getComputeInstances() {
  return this._execJson(['compute', 'instances', 'list']);
}
getComputeTargetpools() {
  return this._execJson(['compute', 'target-pools', 'list']);
}
getComputeForwardingRules() {
  return this._execJson(['compute', 'forwarding-rules', 'list']);
}
getComputeFirewallRules() {
  return this._execJson(['compute', 'firewall-rules', 'list']);
}
deleteComputeTargetpool(pool, region) {
  return this._execJson(['compute', 'target-pools', 'delete', pool, '--region', region]);
}
deleteComputeTargetpools(pools = []) {
  if (pools.length === 0)
    return Promise.resolve();
  return this._execJson(['compute', 'target-pools', 'delete', ...pools]);
}
deleteComputeForwardingRule(rule, region) {
  return this._execJson(['compute', 'forwarding-rules', 'delete', rule, '--region', region]);
}
deleteFirewallRules(rules = []) {
  if (rules.length === 0)
    return Promise.resolve();
  return this._execJson(['compute', 'firewall-rules', 'delete', ...rules]);
}
/**
 * @param {string[]} parameters
 * @param {ExecFileOptions} options
 */
_exec(parameters, options) {
  return cmd.get('gcloud', parameters, options);
}
/**
 * @returns {Promise} with result data.
 */
_execJson(parameters, optionFlags = {}, cmdOptions) {
  return this._exec(parameters.concat(this._generateFlags(optionFlags)), cmdOptions)
    .promise
    .tap(c => { log.debug(c.stderr); log.debug(c.stdout); })
    .then(x => JSON.parse(x.stdout));
}

Related

where is the real quasar ssr express server?

I am building a Quasar and Vue.js app and I want to add a MongoDB API with an Express server. There is that src-ssr/ dir, where there is an index.js file with the basic Express app routing:
/*
* This file runs in a Node context (it's NOT transpiled by Babel), so use only
* the ES6 features that are supported by your Node version. https://node.green/
*
* WARNING!
* If you import anything from node_modules, then make sure that the package is specified
* in package.json > dependencies and NOT in devDependencies
*
* Note: This file is used only for PRODUCTION. It is not picked up while in dev mode.
* If you are looking to add common DEV & PROD logic to the express app, then use
* "src-ssr/extension.js"
*/
console.log("got here!") // I added
const express = require("express"),
  compression = require("compression");
const ssr = require("quasar-ssr"),
  extension = require("./extension"),
  app = express(),
  port = process.env.PORT || 3000;

const serve = (path, cache) =>
  express.static(ssr.resolveWWW(path), {
    maxAge: cache ? 1000 * 60 * 60 * 24 * 30 : 0
  });

// gzip
app.use(compression({ threshold: 0 }));

// serve this with no cache, if built with PWA:
if (ssr.settings.pwa) {
  app.use("/service-worker.js", serve("service-worker.js"));
}

// serve "www" folder
app.use("/", serve(".", true));

// we extend the custom common dev & prod parts here
extension.extendApp({ app, ssr });

// this should be last get(), rendering with SSR
app.get("*", (req, res) => {
  res.setHeader("Content-Type", "text/html");

  // SECURITY HEADERS
  // read more about headers here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers
  // the following headers help protect your site from common XSS attacks in browsers that respect headers
  // you will probably want to use .env variables to drop in appropriate URLs below,
  // and potentially look here for inspiration:
  // https://ponyfoo.com/articles/content-security-policy-in-express-apps
  // https://developer.mozilla.org/en-us/docs/Web/HTTP/Headers/X-Frame-Options
  // res.setHeader('X-frame-options', 'SAMEORIGIN') // one of DENY | SAMEORIGIN | ALLOW-FROM https://example.com
  // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
  // res.setHeader('X-XSS-Protection', 1)
  // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options
  // res.setHeader('X-Content-Type-Options', 'nosniff')
  // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin
  // res.setHeader('Access-Control-Allow-Origin', '*') // one of '*', '<origin>' where origin is one SINGLE origin
  // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-DNS-Prefetch-Control
  // res.setHeader('X-DNS-Prefetch-Control', 'off') // may be slower, but stops some leaks
  // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
  // res.setHeader('Content-Security-Policy', 'default-src https:')
  // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/sandbox
  // res.setHeader('Content-Security-Policy', 'sandbox') // this will lockdown your server!!!
  // here are a few that you might like to consider adding to your CSP
  // object-src, media-src, script-src, frame-src, unsafe-inline

  ssr.renderToString({ req, res }, (err, html) => {
    if (err) {
      if (err.url) {
        res.redirect(err.url);
      } else if (err.code === 404) {
        console.log(404, '!!');
        res.status(404).send("404 | Page Not Found foo bar"); // I added foo bar
      } else {
        // Render Error Page or Redirect
        res.status(500).send("500 | Internal Server Error");
        if (ssr.settings.debug) {
          console.error(`500 on ${req.url}`);
          console.error(err);
          console.error(err.stack);
        }
      }
    } else {
      res.send(html);
    }
  });
});

app.listen(port, () => {
  console.log(`Server listening at port ${port}`);
});
but none of my logs or changes show up when I run $ quasar dev -m ssr,
and the "Server listening at port ${port}" message is not showing either.
Need your help!
quasar version 1.0.7
debian 10
src-ssr/index.js
From the comment notes in src-ssr/index.js, it seems that file is for production only and is not picked up in dev mode:
"Note: This file is used only for PRODUCTION"
You may want to use src-ssr/extension.js instead.
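For example, a minimal src-ssr/extension.js could look like this (a sketch; the /api/ping route is just a hypothetical placeholder for your MongoDB-backed endpoints):
module.exports.extendApp = function ({ app, ssr }) {
  // This hook runs in both dev and prod, so these logs show up
  // with "quasar dev -m ssr" as well.
  console.log("got here!");

  // hypothetical API route; replace with your real MongoDB endpoints
  app.get("/api/ping", (req, res) => {
    res.json({ ok: true });
  });
};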

How to set HTTP headers with axios.interceptors?

I copied the following code from Amazon Cognito Vuex Module examples to my Vue.js app:
axios.interceptors.request.use(async config => {
  const response = await store.dispatch('getUserSession');
  if (response && response.accessToken && response.accessToken.jwtToken) {
    config.headers.awsToken = response.accessToken.jwtToken;
  }
  return config;
});
and expected to see something like
awsToken: AzWDF....
in the request headers, but instead 'awstoken' only shows up under 'Access-Control-Request-Headers', without any value.
Why does 'awstoken' go to 'Access-Control-Request-Headers', and why does it not have a value?
I also tried
config.headers.common['awsToken'] = response.accessToken.jwtToken;
but with the same result.
It is not a problem with AWS, because response.accessToken.jwtToken has a valid, non-empty value.
EDIT1: and even this example does not work in my app and gives the same result:
axios.interceptors.request.use(
  config => {
    config.headers['Authorization'] = 'Bearer XYZ';
    return Promise.resolve(config);
  },
  (error) => {
    return Promise.reject(error);
  }
);
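For debugging, logging the config right before it is returned shows what axios will actually hand to the browser (a small sketch):
axios.interceptors.request.use(config => {
  config.headers['Authorization'] = 'Bearer XYZ';
  // Log what axios is about to send. Note: the Access-Control-Request-Headers
  // entry seen in devtools belongs to the browser's CORS preflight (OPTIONS)
  // request, which lists only the names of custom headers, never their values.
  console.log('outgoing headers:', config.headers);
  return Promise.resolve(config);
});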
EDIT2: I found a similar post.

AWS Lambda timing out on MongoDB Atlas connection

I'm just trying to write a simple Lambda function to insert data into my MongoDB Atlas cluster. I've set the cluster to accept all incoming traffic (0.0.0.0/0) and confirmed that I can connect locally.
For AWS Lambda, I set up a VPC using the VPC wizard, and I gave my Lambda function a security role with full admin access. I set the timeout to 12 seconds, but I'm still getting the following error:
Response:
{
"errorMessage": "2018-11-19T15:17:23.200Z 3048e1fd-ec0e-11e8-a03d-fb79584484c5 Task timed out after 11.01 seconds"
}
Request ID:
"3048e1fd-ec0e-11e8-a03d-fb79584484c5"
Function Logs:
START RequestId: 3048e1fd-ec0e-11e8-a03d-fb79584484c5 Version: $LATEST
2018-11-19T15:17:12.191Z 3048e1fd-ec0e-11e8-a03d-fb79584484c5 Calling MongoDB Atlas from AWS Lambda with event: {"address":{"street":"2 Avenue","zipcode":"10075","building":"1480","coord":[-73.9557413,40.7720266]},"borough":"Manhattan","cuisine":"Italian","grades":[{"date":"2014-10-01T00:00:00Z","grade":"A","score":11},{"date":"2014-01-16T00:00:00Z","grade":"B","score":17}],"name":"Vella","restaurant_id":"41704620"}
2018-11-19T15:17:12.208Z 3048e1fd-ec0e-11e8-a03d-fb79584484c5 => connecting to database
2018-11-19T15:17:12.248Z 3048e1fd-ec0e-11e8-a03d-fb79584484c5 (node:1) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
END RequestId: 3048e1fd-ec0e-11e8-a03d-fb79584484c5
REPORT RequestId: 3048e1fd-ec0e-11e8-a03d-fb79584484c5 Duration: 11011.08 ms Billed Duration: 11000 ms Memory Size: 128 MB Max Memory Used: 29 MB
2018-11-19T15:17:23.200Z 3048e1fd-ec0e-11e8-a03d-fb79584484c5 Task timed out after 11.01 seconds
The relevant part of my code for connecting is (with user and pass being the appropriate values):
const MongoClient = require('mongodb').MongoClient;

let atlas_connection_uri = "mongodb+srv://<user>:<pass>@restaurantcluster-2ylyf.gcp.mongodb.net/testdb";
let cachedDb = null;

exports.handler = (event, context, callback) => {
  var uri = atlas_connection_uri;
  // note: both branches currently do the same thing
  if (atlas_connection_uri != null) {
    processEvent(event, context, callback);
  }
  else {
    atlas_connection_uri = uri;
    console.log('the Atlas connection string is ' + atlas_connection_uri);
    processEvent(event, context, callback);
  }
};

function processEvent(event, context, callback) {
  console.log('Calling MongoDB Atlas from AWS Lambda with event: ' + JSON.stringify(event));
  var jsonContents = JSON.parse(JSON.stringify(event));

  // date conversion for grades array
  if (jsonContents.grades != null) {
    for (var i = 0, len = jsonContents.grades.length; i < len; i++) {
      jsonContents.grades[i].date = new Date();
    }
  }

  // don't wait for open sockets (the cached connection) before returning
  context.callbackWaitsForEmptyEventLoop = false;

  try {
    if (cachedDb == null) {
      console.log('=> connecting to database');
      MongoClient.connect(atlas_connection_uri, function (err, client) {
        // note: err is never checked here
        cachedDb = client.db('testdb');
        return createDoc(cachedDb, jsonContents, callback);
      });
    }
    else {
      createDoc(cachedDb, jsonContents, callback);
    }
  }
  catch (err) {
    console.error('an error occurred', err);
  }
}
I suspect that something is going on with my VPC firewall/permissions/security group, considering that I can connect from my local machine, but I have no idea how that could be the case when I'm granting full admin privileges in my security role and I've routed all outgoing VPC traffic to my public subnet.
I would appreciate any advice/help in solving this!
Edit to provide more info:
The function console.logs '=> connecting to database' and then immediately times out at MongoClient.connect (confirmed by attempting to console.log directly after that call).
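For what it's worth, with the default settings the driver keeps retrying until Lambda kills the function; passing explicit timeouts makes the underlying network error visible instead (a sketch, assuming the 3.x Node.js driver):
MongoClient.connect(atlas_connection_uri, {
  useNewUrlParser: true,          // also silences the deprecation warning in the logs
  connectTimeoutMS: 5000,         // give up on the initial connection after 5s
  serverSelectionTimeoutMS: 5000  // surface "server selection timed out" errors
}, function (err, client) {
  if (err) {
    console.error('connection failed:', err);
    return;
  }
  cachedDb = client.db('testdb');
});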

jupyter-js-services - how to save notebook

I'm trying to use Jupyter as a backend for my system, and for now I'm playing with the examples from the jupyter-js-services API docs.
Using IKernel and INotebookSession I managed to execute simple code and get the response from the kernel.
But I can't figure out how to extract the notebook itself; there's nothing like "saveNotebook()" in the API. I tried session.renameNotebook(): it completes successfully, but no files appear in the filesystem (I tried different paths like "/tmp/trynote.ipynb", "trynote.ipynb" and so on...).
Here's the code; it is a slightly edited example from the http://jupyter.org/jupyter-js-services/ page:
#!/usr/bin/env node
var jpt = require("jupyter-js-services");
var xr = require("xmlhttprequest");
var ws = require("ws");

global.XMLHttpRequest = xr.XMLHttpRequest;
global.WebSocket = ws;

// start a new session
var options = {
  baseUrl: 'http://localhost:8889',
  wsUrl: 'ws://localhost:8889',
  kernelName: 'python',
  notebookPath: 'trynote.ipynb'
};

jpt.startNewSession(options).then((session) => {
  // execute and handle replies on the kernel
  var future = session.kernel.execute({ code: 'print(5 * 5);' });
  future.onDone = (msg) => {
    console.log('Future is fulfilled: ');
    console.log(msg);
  };
  future.onIOPub = (msg) => {
    console.log("Message in IOPub: ");
    console.log(msg);
  };

  // rename the notebook
  session.renameNotebook('trynote2.ipynb').then(() => {
    console.log('Notebook renamed to', session.notebookPath);
  });

  // register a callback for when the session dies
  session.sessionDied.connect(() => {
    console.log('session died');
  });

  // kill the session
  session.shutdown().then(() => {
    console.log('session closed');
  });
});
Looking at the ContentsManager API, it seems to work with already existing files, or to create new ones, but it's unclear how it is bound to sessions.
What's more, even the simplest attempt to use the "newUntitled" function gives a 404 response...
var contents = new jpt.ContentsManager('http://localhost:8889');

// create a new python file
contents.newUntitled("foo", { type: "file", ext: "py" }).then(
  (model) => {
    console.log(model.path);
  }
);
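From the docs, ContentsManager also has a save() method, which looks like the closest thing to an explicit "saveNotebook()"; here is a sketch of what I would expect to work (the nbformat skeleton is hand-written, so treat it as an assumption):
var contents = new jpt.ContentsManager('http://localhost:8889');

// minimal empty notebook in nbformat 4; the session itself does not
// write files, the contents API does
var model = {
  type: 'notebook',
  format: 'json',
  content: {
    cells: [],
    metadata: {},
    nbformat: 4,
    nbformat_minor: 0
  }
};

contents.save('trynote.ipynb', model).then(function (saved) {
  console.log('saved to', saved.path);
});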
I feel a bit disoriented with all this and would appreciate any explanations.
Thanks..

What is the difference between backend and frontend cache in Zend Framework?

I am implementing caching for my website, which uses Zend Framework.
I looked into the source code and saw that
Zend_Cache::factory()
always needs two configurations, one for the backend and one for the frontend.
My issue is: I don't know why the backend is set inside the frontend, and what the difference between them is:
$frontendObject->setBackend($backendObject);
return $frontendObject;
Here is the original source code:
public static function factory($frontend, $backend, $frontendOptions = array(), $backendOptions = array(),
                               $customFrontendNaming = false, $customBackendNaming = false, $autoload = false)
{
    if (is_string($backend)) {
        $backendObject = self::_makeBackend($backend, $backendOptions, $customBackendNaming, $autoload);
    } else {
        if ((is_object($backend)) && (in_array('Zend_Cache_Backend_Interface', class_implements($backend)))) {
            $backendObject = $backend;
        } else {
            self::throwException('backend must be a backend name (string) or an object which implements Zend_Cache_Backend_Interface');
        }
    }
    if (is_string($frontend)) {
        $frontendObject = self::_makeFrontend($frontend, $frontendOptions, $customFrontendNaming, $autoload);
    } else {
        if (is_object($frontend)) {
            $frontendObject = $frontend;
        } else {
            self::throwException('frontend must be a frontend name (string) or an object');
        }
    }
    $frontendObject->setBackend($backendObject);
    return $frontendObject;
}
The cache backend is the "cache engine": it decides where the cached data is physically stored (files, memcached, etc.).
The cache frontend specifies what kind of data will be stored in the cache and how it is handled, e.g. raw data with Core, whole pages with Page, or function call results with Function (see http://framework.zend.com/manual/1.12/en/zend.cache.frontends.html).
That is why the factory injects the backend into the frontend: the frontend decides what to cache, then delegates the actual storage to whichever backend you configured, e.g. Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions).