Google Cloud Storage - Download EMEA - google-cloud-storage

I'm a new user of GCP Storage. I have been charged for large downloads to the EMEA region, but my service (API) connects only to the Americas.
Question:
Would anyone please advise on how to set up Stackdriver or another tool to properly monitor GCP storage egress?
Thanks a lot,
Adriano

Stackdriver isn't currently the best tool for this, I believe. You should set up a budget and alerts. Just go to Billing in the menu, then Budgets & alerts. The UI should be self-explanatory.
That's the most important part! Only the budget set there can definitely save you from exorbitant bills if someone is trying to ruin you, send spam from your instances, etc.
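If you prefer the command line, a budget can also be created with gcloud. The snippet below is only a sketch; the billing account ID, display name, and amount are placeholders:
# Placeholder billing account ID, name, and amount; add threshold rules in the console afterwards.
gcloud billing budgets create \
    --billing-account=0X0X0X-0X0X0X-0X0X0X \
    --display-name="monthly-cap" \
    --budget-amount=100.00USD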
Stackdriver's integration with the billing system is currently rather weak, possibly because it's an outside technology that was acquired. Here's what you can do:
If you create a chart and set the resource type to Pub/Sub Topic, you can choose Costs of Operations, which shows your costs live (but you can't create alerts on it).
The GCE metrics include outbound and inbound traffic, so create a chart for that as well.
In the alerts section, you can add an alert to notify you when your hourly egress crosses a threshold you define, or when it suddenly increases in a way it usually doesn't. Note that it can be hard to avoid false positives. Try to find out the maximum throughput of your instances, calculate how fast you need to be alerted, and set the alerts based on that value.
If you see a sudden, sustained spike in traffic, check the logs. Depending on which ports your firewall configuration has open, you may find the cause in the webserver or sshd logs. (Oh yeah: go to the network settings and disable all ports you don't need.)
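For the firewall part, a sketch with gcloud (the rule names and the admin IP range below are just examples):
# Review what is open, remove rules you don't need, and restrict SSH to a known range.
# "default-allow-http" and 203.0.113.0/24 are example values.
gcloud compute firewall-rules list
gcloud compute firewall-rules delete default-allow-http
gcloud compute firewall-rules create allow-ssh-admin \
    --allow=tcp:22 \
    --source-ranges=203.0.113.0/24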
If that doesn't help, you'll have to leave the Google console and SSH into the machine(s). A tool I've used before and which is quite easy to use is nethogs.
$ sudo apt-get install nethogs
[...]
The following NEW packages will be installed:
nethogs
[...]
Setting up nethogs (0.8.1-0.3) ...
$ sudo nethogs
NetHogs version 0.8.1

  PID USER  PROGRAM                                 DEV   SENT   RECEIVED
 1975 root  /opt/google-fluentd/embedded/bin/ruby   ens4  0.480  0.999 KB/sec
23054 root  /usr/bin/python                         ens4  0.021  0.412 KB/sec
[...]
TOTAL                                                     2.873  1.829 KB/sec
That will show you the process and then, well – it depends on what that is.
Note that one possibility is that you have been hacked and the server(s) are being used for spam or porn distribution or whatever. In that case it's possible (or actually to be expected) that the tools on the server have been patched/replaced to hide the intrusion. Search for rootkit detection if there's a mismatch between the server's internal data and the Google tools. If you cannot exclude this possibility with certainty, do not attempt to remove the rootkit. Power down the server(s), create new ones from scratch and, if unavoidable, mount the old disks as read-only partitions in another clean instance to extract the data with utmost care.
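For that last step, a rough sketch of mounting a suspect disk safely on a clean instance (the device name and mount point are examples):
# Attach the old disk to a fresh instance first, then mount it read-only,
# with no execution and no setuid honored. /dev/sdb1 is an example device.
sudo mkdir -p /mnt/forensics
sudo mount -o ro,noexec,nosuid /dev/sdb1 /mnt/forensics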

Related

How to secure a Coral Dev Board in a public place?

I want to deploy a small object detection app in a lobby, but I would like to prevent unauthorized physical access. The device logs in automatically on boot, so anyone can access it with a keyboard. How could I prevent that? Thank you!
In the end, I opted to disable the login for the mendel user and also lock it. Instead of using /bin/false, I placed my own script at /usr/bin/guard.sh that creates an .UNAUTHORISED_LOGIN file in mendel's home directory in case someone tries to open a terminal on the device. Basically, I ran the following commands:
chmod +x guard.sh
sudo cp guard.sh /usr/bin
sudo chsh -s /usr/bin/guard.sh mendel
sudo usermod -L mendel
guard.sh contents:
#!/bin/bash
# Flag the attempted login; this script is installed as the mendel user's shell.
touch /home/mendel/.UNAUTHORISED_LOGIN
Maybe you can try blacklisting the usb-storage driver?
Create this file:
sudo vim /etc/modprobe.d/blacklist.conf
Write this line into the file:
blacklist usb-storage
Save, close, and reboot.
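To go a bit further (an untested sketch): a blacklist entry only stops the module from loading automatically, while an "install" line also blocks manual loading.
# "blacklist" alone does not prevent an explicit "modprobe usb-storage";
# mapping the module to /bin/false does. Then confirm it isn't loaded.
echo "install usb-storage /bin/false" | sudo tee -a /etc/modprobe.d/blacklist.conf
lsmod | grep usb_storage || echo "usb-storage is not loaded"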
Nam's suggestion is good. It locks out usb-storage, but still allows the usb camera to work. You could lock out a USB keyboard that way too. With effort, you can plug lots of potential attack points, including login passwords for MDT and serial access. Perhaps you will superglue the USB camera in place, or secure the whole assembly in a locked box.
Coral development is primarily focused on embedded ML inference on the Edge TPU, not on the security tradeoffs of deployment. What follows are some untested suggestions, not documented recommendations.
Electronic tampering is important to address on any internet-connected device. We do not recommend deploying Mendel for end applications; it is for development only. Use a Yocto build to include only what is necessary for your application, and be sure to include all the latest security patches.
Protecting against physical tampering could be an infinite challenge. First, determine the level of attack to be expected, and go no further. Some businesses have armed security. Most businesses have unarmed security. My home has no security guards.
Do you need a locked box with tamper switches? ATMs and point-of-sale terminals have published standards to keep them secure enough. Perhaps a locked box is sufficient: an attacker could cut the cables and take the box if it's not bolted down, but could not quickly compromise the device.
Once you have a security plan, it's important to get an outside review. They can help you decide: Does this plan protect against the expected attack vectors? Are there any other attack vectors that must be addressed for this level of security? Are there elements of the plan that are too much for this level of security? Depending upon the application, it might be reasonable to hire penetration testers to get a realistic evaluation when it is ready.
To disable the automatic login on the HDMI console, I found that sudo systemctl set-default multi-user.target will do the trick.

Deploy code to multiple production servers under the load balancer without continuous deployments

I am the only (full-stack) developer in my company, and right now I have too much other work to automate the deployments. In the future, we may hire a DevOps engineer for that.
Problem: We have 3 servers under a load balancer. I don't want to take the 2nd and 3rd servers out of rotation until the 1st server is updated, and then repeat the same with the 2nd and 3rd, because one server might initially receive huge traffic and fail at some point before the other servers go live.
                                Server 1
Users ----> Load Balancer ----> Server 2 ----> Database
                                Server 3
Personal opinion: Is there a way to pull the code by writing scripts on the load balancer? I could replace the standard DigitalOcean load balancer with an Nginx server, making it a reverse proxy.
NOTE: I know there are plenty of other questions asked on Stack Overflow about this, but none of them solves my problem.
Solutions I know
Git hooks - I know a bit about Git hooks, but I don't want to use them, because if I commit to the master branch by mistake it would get synced to production and create havoc on the live server for live users.
Open multiple tabs to the servers and do it manually (current scenario). Believe me, it's a pain in the ass :)
Any suggestions or redirects to the solutions will be really helpful for me. Thanks in advance.
One of the solutions is to write an Ansible playbook for this. With Ansible, you can specify that it run against one host at a time, and as the last step you can include a verification check that confirms your application responds with response code 200, or that queries some endpoint indicating the status of your application. If the check fails, Ansible stops the execution. For example, in your case, server 1 deploys fine, but on server 2 it fails: the playbook will stop, and you will have servers 1 and 3 still running.
I have done it myself. It works fine in environments without continuous deployments.
Here is one example
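A minimal sketch of such a playbook, assuming an inventory group called webservers; the repository URL, deployment path, and health-check endpoint are placeholders, not a ready-made solution:
# deploy.yml - rolling deployment sketch (group name, repo, path, and URL are placeholders)
- hosts: webservers
  serial: 1                       # update one server at a time behind the load balancer
  tasks:
    - name: Pull the latest code
      ansible.builtin.git:
        repo: "git@example.com:myorg/myapp.git"
        dest: /var/www/myapp
        version: master

    - name: Verify the application answers with HTTP 200 before moving on
      ansible.builtin.uri:
        url: "http://{{ inventory_hostname }}/health"
        status_code: 200
      register: health
      retries: 5
      delay: 10
      until: health.status == 200
Run it with ansible-playbook -i inventory.ini deploy.yml; because each batch is a single host, a failing health check stops the run before the remaining servers are touched.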

Sophos UTM VPN not accessible

I used the Sophos UTM 9.510 ha_standalone CloudFormation template (https://github.com/sophos-iaas/aws-cf-templates/blob/master/utm/9.510/standalone.template) and used defaults where possible. I did not use an existing Elastic IP, so it created its own at (scrubbed) 50.12.12.123.
I gave a hostname of (for example) vpn.example.com and, after creation, I created an A record for vpn.example.com pointing to 50.12.12.123.
I don't have a license and just pay hourly for the AMI.
I understand that I should be able to hit https://vpn.example.com:4444 or https://50.12.12.123:4444 to see the admin panel. However, it times out and doesn't load anything.
When I deployed the stack, I got an email at the admin email I provided and it said REST daemon not running - restarted. I assume it restarted fine, since I have received no new emails, and the EC2 instance is running.
Has anyone else experienced this? Is there a step I'm missing? Aside from creating the Route 53 record, I thought the CloudFormation template should just work right out of the box.
The default security groups blocked traffic. I modified one of them to accept all traffic and the dashboard became accessible. I will now refine access further.
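For the refinement step, a sketch with the AWS CLI; the security group ID and admin IP below are placeholders:
# Allow the WebAdmin port (4444) only from a known admin address instead of all traffic.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 4444 \
    --cidr 198.51.100.10/32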

How to MFDeploy a configuration file

Colleagues and users testing various features in a program use MFDeploy to install for example "MyApp.exe" onto their Netduino +2. This method works great. Is there a way to also MFDeploy a "MyApp.config" text file so they can set their specific network criteria (like Port#) or other program preferences? Obviously, more robust preferences can be set from desktop software or web app AFTER the connection is established.
After several days of researching, I could not find a viable means of transferring a config file via MFDeploy, so I decided to add a "/install" command-line option to the desktop app:
cncBuddyUI.exe [/help|/?] [/reset] [/discover] [/install:[axisA=X|Y],port=9999]
/help|/? Show this help/usage information
/reset Create new default software configuration
/discover Listen for cncBuddyCAM broadcasting IPAddress & Port (timeout 30 secs)
/install Install hardware specific settings on Netduino+2 SDCard.
port Network port number (default=80)
axisA Slave axisA motor signals to X or Y axis
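For example, one plausible invocation based on the switches above (the axis and port values are made up):
rem Hypothetical example values; adjust axisA and port to your hardware.
cncBuddyUI.exe /discover
cncBuddyUI.exe /install:axisA=X,port=9123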
During "/install" mode, once cncBuddyCAM (Netduino app) network connects to cncBuddyUI (desktop app), the configuration parameters are transmitted and written onto the SDCard (\SD\config.txt).
Every warm boot now reads \SD\config.txt at startup and loads the configuration parameters into the appropriate application variables.
After several weeks of usage, I find this method preferable and easier to customize. Check out cncBuddy on GitHub.

what's the purpose of the '--delete-after' option of wget?

I came across the "--delete-after" option when I was reading the manpage of wget.
What's the purpose of providing such an option? Is it just for testing that a page downloads OK? Or maybe there are other situations where this option is useful; I hope you can give me some hints.
With reference to your comments above, I'm providing some examples of how we use it. We have a few websites running on Rackspace Cloud Sites, which is a managed cloud hosting solution. We don't have access to regular cron.
We had an issue with runaway usage on a site using WordPress, because WP kept calling wp-cron.php. To give you a sense of the runaway usage: it used up in one day the CPU cycles allotted for a month. Anyway, what I did was disable wp-cron.php being called within the WordPress system and call it manually through wget. I'm not interested in the output of the process, so if I don't use --delete-after with wget (wget ... > /dev/null 2>&1 works well too), the folder where wget runs gets filled with hundreds of useless logs and the output of each time the script was called.
We also have SugarCRM installed, and that system requires its cron script to be called to handle system maintenance. We use wget silently for that as well. Basically, a lot of these kinds of web-based systems have cron scripts. If you can't call your scripts directly, say using php on the machine, then the other option is calling them silently with wget.
The command to call these cron scripts is quite basic:
wget --delete-after http://example.com/cron.php?parameters=if+needed
I'm using wget (with cron) to automate commands to a web application, so I have no interest in the contents of the pages. --delete-after is ideal for this.
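As an illustration, a crontab entry for that kind of job might look like this (the schedule and URL are examples):
# Hit the application's cron endpoint every 15 minutes; --delete-after keeps nothing on disk.
*/15 * * * * wget --quiet --delete-after "http://example.com/cron.php"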
You can use it for testing if a page is downloading ok, but usually it's used to force proxy servers to cache their contents.
If you're sitting on a connection where there's a network appliance caching content between the site and your endpoint, and you have a site that's popular among users on that network, then what you may want to do as a sysadmin is use a machine just behind the proxy to script a recursive ("-r") or mirror ("-m") wget operation.
The proxy appliance will see this and pre-cache the site and its assets, thus making site access a bit faster for users behind said proxy.
You'd then want to specify "--delete-after" to free up the disk space used, unless you want to keep a local copy of every site you force into the cache.
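Putting that together, the pre-caching run could be as simple as this sketch (the URL and recursion depth are examples):
# Walk the site two levels deep through the caching proxy, discarding everything locally.
wget -r -l 2 --delete-after http://intranet.example.com/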
Sometimes you only need to visit a website to set an IP address - say, if you are rolling your own dynamic DNS service.
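For instance (the update URL here is entirely hypothetical):
# Hypothetical dynamic-DNS update endpoint; only the request matters, not the saved response.
wget --delete-after "https://dyn.example.com/nic/update?hostname=home.example.com"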