AppFog claims I have used both of my service slots when I have none in use

AppFog claims that I am using both of my available services, when in reality I have deleted the services that were in use. (I had MySQL databases attached to a couple of apps; when I deleted the apps, I also deleted the services, but the slots were never freed for some reason.)
Does anyone have suggestions on how I might reclaim those lost service slots? It's hard to run apps without services, and the CLI shows nothing to unbind or delete in order to free them up.
-Thanks
C:\Sites>af info
AppFog Free Your Cloud Edition
For support visit http://support.appfog.com
Target: https://api.appfog.com (v0.999)
Client: v0.3.18.12
User: j****g@gmail.com
Usage: Memory (0B of 512.0M total)
Services (2 of 2 total)
Apps (0 of 2 total)
C:\Sites>af services
============== System Services ==============
+------------+---------+-------------------------------+
| Service    | Version | Description                   |
+------------+---------+-------------------------------+
| mongodb    | 1.8     | MongoDB NoSQL store           |
| mongodb2   | 2.4.8   | MongoDB2 NoSQL store          |
| mysql      | 5.1     | MySQL database service        |
| postgresql | 9.1     | PostgreSQL database service   |
| rabbitmq   | 2.4     | RabbitMQ message queue        |
| redis      | 2.2     | Redis key-value store service |
+------------+---------+-------------------------------+
=========== Provisioned Services ============

Probably easiest to email support@appfog.com and get them to look into it.
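If an orphaned service instance did still show up under Provisioned Services, the usual cleanup would be to unbind it and then delete it. A sketch (the service and app names below are placeholders, not from the transcript above):

```shell
# Detach the service from any app it is still bound to, then delete it
# to free the slot.
af unbind-service mysql-example myapp
af delete-service mysql-example
```

Since nothing is listed here at all, support is likely the only way to fix the stuck count.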


Google Cloud SQL Storage Expanding Issue

My PostgreSQL database storage is expanding far more than my actual database size would suggest. I am assuming it is writing logs for every action against the database. If so, how do I turn that off?
All tables' storage size in the database:
   table_name    | pg_size_pretty
-----------------+----------------
 matches         | 3442 MB
 temp_matches    | 3016 MB
 rankings        | 262 MB
 atp_matches     | 41 MB
 players         | 11 MB
 injuries        | 4648 kB
 tournaments     | 1936 kB
 temp_prematches | 112 kB
 locations       | 104 kB
 countries       | 16 kB
(10 rows)
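For reference, a per-table listing like the one above can be produced with a query along these lines (`mydb` is a placeholder; `pg_total_relation_size` includes indexes and TOAST, which is why it can exceed the raw row data):

```shell
# Show each user table's total on-disk size, largest first.
psql -d mydb -c "
  SELECT relname AS table_name,
         pg_size_pretty(pg_total_relation_size(relid)) AS pg_size_pretty
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC;"
```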
My storage usage should be around 10GB.
Your PostgreSQL instance may have Point-in-time recovery (PITR) enabled.
To add some explanation: PITR relies on write-ahead logs (WAL), and the WAL files must be archived for any instance that has it enabled. This archiving happens automatically on the backend and consumes storage space (even when the instance is idle), so using this feature results in increased storage usage on your DB instance.
Here's a similar issue: Google Cloud SQL - Postgresql storage keeps growing
You can stop the storage increase by disabling Point-in-time recovery: https://cloud.google.com/sql/docs/postgres/backup-recovery/pitr#disablingpitr
First, I recommend checking whether you have the "Enable automatic storage increases" setting turned on, since with it your instance's storage will keep growing and your bill along with it.
Keep in mind that you can increase storage size but you cannot decrease it; storage increases are permanent for the life of the instance. With this setting enabled, a spike in storage requirements can permanently (and incrementally) increase storage costs for your instance.
On the other hand, if you do have Point-in-time recovery (PITR) enabled, I recommend disabling it so the accumulated logs are deleted. If that doesn't apply, it would be necessary to contact the GCP support team so they can inspect your instance carefully.
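Disabling PITR can be done from the Cloud Console or with gcloud; a sketch (INSTANCE_NAME is a placeholder):

```shell
# Turn off point-in-time recovery for the instance. This stops WAL archiving,
# and the archived logs that were consuming storage are removed.
gcloud sql instances patch INSTANCE_NAME --no-enable-point-in-time-recovery
```

Note that any automatic storage increases that already happened remain permanent even after the logs are gone.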

ReadWriteMany volumes on kubernetes with terabytes of data

We want to deploy a k8s cluster which will run ~100 IO-heavy pods at the same time. They should all be able to access the same volume.
What we tried so far:
CephFS
was very complicated to set up. Hard to troubleshoot. In the end, it crashed a lot and the cause was not entirely clear.
Helm NFS Server Provisioner
runs pretty well, but when IO peaks a single replica is not enough. We could not get multiple replicas to work at all.
MinIO
is a great tool for creating storage buckets in k8s, but our workload requires filesystem mounting. That is theoretically possible with s3fs, but since we run ~100 pods, we would need ~100 additional s3fs sidecars. That seems like a bad idea.
There has to be some way to get 2TB of data mounted in a GKE cluster with relatively high availability?
Cloud Filestore seems to work, but it's an order of magnitude more expensive than the other solutions, and with a lot of IO operations it quickly becomes infeasible.
I contemplated creating this question on server fault, but the k8s community is a lot smaller than SO's.
I think I have a definitive answer as of Jan 2020, at least for our use case:
| Solution | Complexity | Performance | Cost |
|-----------------|------------|-------------|----------------|
| NFS | Low | Low | Low |
| Cloud Filestore | Low | Mediocre? | Per Read/Write |
| CephFS | High* | High | Low |
* You need to add an additional step for GKE: Change the base image to ubuntu
I haven't benchmarked Filestore myself, but I'll just go with stringy05's response: others have trouble getting really good throughput from it
Ceph could be a lot easier if it were supported by Helm.
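For the NFS option, the consuming side is just a ReadWriteMany PersistentVolumeClaim against the storage class that the nfs-server-provisioner chart creates (`nfs` is an assumed class name; check your Helm values):

```shell
# Claim shared storage that many pods can mount read-write simultaneously.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs   # class from the nfs-server-provisioner chart (assumed name)
  resources:
    requests:
      storage: 2Ti
EOF
```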

ECS Service Discovery is updated too late after task is stopped

Hi,
I'm running 2 AWS ECS services (A and B) within the same cluster, using the Fargate launch type.
Service A should be able to connect to service B. This is possible using Service Discovery.
I created a service discovery backend.local with the TTL of 15 seconds. The tasks in service B are added to a target-group which has a de-registration of 30 seconds.
+--------------+     +-------------+     +--------------+
| Application  +-----> ECS: A      +-----> ECS: B       |
| Load         |     +-------------+     +--------------+
| Balancer     |     | Task 1      |     | Task 1       |
+--------------+     | Task 2      |     | Task .       |
                     +-------------+     | Task n       |
                                         +--------------+
This works perfectly: from service A, I can make requests to http://backend.local, which are routed to one of the tasks in service B.
However, after a rolling deploy of service B, the service discovery DNS records aren't updated in time, so nslookup backend.local still returns IP addresses of old tasks that are no longer available.
The lifecycle of tasks during deployment is:
New task: Pending -> Activating -> Running
Old task: Running -> Deactivating --> Stopped
I would expect new tasks to become discoverable AFTER they reach 'Running', and old tasks to stop being discoverable when the target group's de-registration delay kicks in.
How can I make sure that the Service Discovery doesn't make old tasks discoverable?
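For debugging, you can watch the records and their remaining TTL from inside a task in service A (`backend.local` matches the namespace above):

```shell
# Query the private DNS namespace that ECS Service Discovery maintains.
# The second column of each answer line is the remaining TTL in seconds;
# stale task IPs will keep appearing until their records expire.
dig +noall +answer backend.local
```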

Can a GPU support multiple jobs without delay?

I am running a PyTorch deep learning job on a GPU, but the job is pretty light.
My GPU has 8 GB of memory, but the job only uses 2 GB, and GPU utilization is close to 0%.
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 00000000:01:00.0  On |                  N/A |
|  0%   36C    P2    45W / 210W |   1155MiB /  8116MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
Based on GPU-Util and memory usage, I might be able to fit three more jobs on the card.
However, I am not sure whether that would affect the overall runtime.
If I run multiple jobs on the same GPU, does that affect the overall runtime of each?
I tried it once, and I think there was some delay.
Yes you can. One option is to use NVIDIA's Multi-Process Service (MPS) to run four copies of your model on the same card.
This is the best description I have found of how to do it: How do I use Nvidia Multi-process Service (MPS) to run multiple non-MPI CUDA applications?
If you are using your card for inference only, then you can host several models (either copies, or different models) on the same card using NVIDIA's TensorRT Inference Server.
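Starting MPS is roughly the following (this assumes the default pipe/log directories; adjust for your setup):

```shell
# Start the NVIDIA MPS control daemon on GPU 0. CUDA processes launched
# afterwards share a single server context instead of time-slicing.
export CUDA_VISIBLE_DEVICES=0
nvidia-cuda-mps-control -d

# ...launch the PyTorch jobs concurrently here...

# Shut the daemon down when finished.
echo quit | nvidia-cuda-mps-control
```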

All possible system keyspaces in Cassandra

I am trying to find a list of all the possible 'System' keyspaces that MAY exist in a DSC Cassandra database (System keyspaces are those which are not created by a user).
My experience thus far is I have found
[cqlsh 3.1.8 | Cassandra 1.2.15 | CQL spec 3.0.0 | Thrift protocol 19.36.2]
system system_traces OpsCenter
Are these the only available System Keyspaces or are there others? Does it depend on the version(1.2/2.0) and distribution(Apache/Datastax)?
I tried to search the documentation but no luck. Could anyone help me out here?
Only system, system_auth and system_traces are strictly "System" keyspaces, especially the first one.
The OpsCenter keyspace is created by/for DataStax OpsCenter, so it only appears on clusters that OpsCenter manages.
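To see exactly which keyspaces exist on a given node, you can query the schema tables (on 1.2/2.0 the metadata lives in `system.schema_keyspaces`; newer cqlsh versions accept `-e`, on older ones run the statement from the cqlsh prompt):

```shell
# List every keyspace this node knows about; anything you did not create
# (system, system_auth, system_traces, OpsCenter) is system- or tooling-owned.
cqlsh -e "SELECT keyspace_name FROM system.schema_keyspaces;"
```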