Delete "views" in Cloudant to make space - ibm-cloud

I am currently using the Lite version of Cloudant and I have reached the 1GB limit that is offered.
I tried to delete some data but as you can see in the picture below, the actual data in my database is not very heavy.
Most of the space seems to be taken up by views. Does anyone know what this represents and how we can get rid of them such that I can make some space in the database?

Views are secondary indexes generated by the map and reduce functions in your design documents. They may have been created by a developer directly, or behind the scenes if you are using an application such as Node-RED. If you delete a design document, the associated index is removed, but this may of course affect the functionality of whatever is using your Cloudant database.
Removing views WILL break any application expecting to find them there. Think carefully about whether this is really what you want to do, and consider backing up your data first (https://github.com/cloudant/couchbackup).
Views are stored in design documents, which are documents whose id starts with _design/. You can list design documents using curl:
% curl 'https://USER:PASS@USER.cloudant.com/DATABASE/_all_docs?startkey="_design/"&endkey="_design0"'
{"total_rows":8747,"offset":5352,"rows":[
{"id":"_design/names","key":"_design/names","value":{"rev":"1-4b72567e275bec45a1e37562a707e363"}},
{"id":"_design/queries","key":"_design/queries","value":{"rev":"7-7e128fa652e9a1942fb8a01f07ec497c"}},
{"id":"_design/routeid","key":"_design/routeid","value":{"rev":"1-a04ab1fc814ac1eaa0b445aece032945"}},
{"id":"_design/setters","key":"_design/setters","value":{"rev":"1-7bf0fc0255244248de4f89a20ff730f4"}}
]}
You can then delete those with a curl -XDELETE ... -- or you can do it via the Cloudant dashboard.
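As a sketch of that deletion step (the account, credentials, and database name below are placeholders), the listing above can be turned into DELETE URLs mechanically; Cloudant requires each document's current rev when deleting it:

```python
import json

# A trimmed copy of the _all_docs listing above.
listing = json.loads('''{"total_rows":8747,"offset":5352,"rows":[
 {"id":"_design/names","key":"_design/names","value":{"rev":"1-4b72567e275bec45a1e37562a707e363"}},
 {"id":"_design/queries","key":"_design/queries","value":{"rev":"7-7e128fa652e9a1942fb8a01f07ec497c"}}
]}''')

BASE = "https://USER:PASS@USER.cloudant.com/DATABASE"  # placeholder account

# Each DELETE URL carries the rev reported by the listing as a query parameter.
delete_urls = [f"{BASE}/{row['id']}?rev={row['value']['rev']}" for row in listing["rows"]]

for url in delete_urls:
    print(url)  # each of these would be issued as: curl -X DELETE '<url>'
```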

Related

UI5: Retrieve and display thousands of items in sap.m.Table

There is a relational database (MySQL 8) with tens of thousands of items in a table, which need to be displayed in sap.m.Table. The straightforward approach is to retrieve all the items with an SQL query and deliver them to the client side as JSON, asynchronously. The key drawback of this approach is performance and memory consumption on the client side. The whole table needs to be available on the client side to give the user the ability to conduct fast searches. This is crucial for the app.
Currently, there are two options:
Fetch the top 100 records and push them into the table, so the user can search the latest 100 records immediately. At the same time, run an additional query in a web worker (which will take about 2…5 seconds) to get all records except those 100, then merge the two JSON result sets.
Keep the JSON on the application server as a cached variable and update it when the user adds or deletes a record. Fetching this JSON should be much faster than querying the database.
How can I show in OpenUI5's sap.m.Table thousands of items?
My opinion:
You need to create an OData backend for your tables. Users can then filter or search records with OData's capabilities. You don't need to push all the data to the client; sap.m.Table automatically requests the rest of the data via the OData protocol as the user scrolls the table.
Quick answer: you can't.
Use sap.ui.table, or provide a proper OData service with top/skip support as shown here under 4.3 and 4.4.
Depending on your backend code (Java, ABAP, Node), there are libraries to help you.
The SAP recommendation is a maximum of 100 datasets for sap.m.Table. In practice, I would advise following that recommendation; even on a fast PC the rendering will be slowed down.
If you want to test with more than 100 datasets, you need to raise the size limit on your oModel, e.g. oModel.setSizeLimit(1000);
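The top/skip paging described in these answers can be sketched independently of any OData library; this is an illustration of what the backend does with $top and $skip, not UI5 or OData API code:

```python
def page(records, top=100, skip=0):
    """Return one page of records, mirroring OData's $top/$skip semantics."""
    return records[skip:skip + top]

rows = [{"id": i} for i in range(1000)]

first_page = page(rows, top=100, skip=0)    # what the table requests initially
next_page = page(rows, top=100, skip=100)   # requested as the user scrolls
```

Only one page at a time ever reaches the client, which is what keeps memory consumption flat regardless of table size.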

When testing POST (create mongo entries), how to delete entries in DB w/ Jmeter after testing, if you don't have DELETE endpoints?

I'm sure I can write an easy script that simply drops the entire collection from the database, but that seems very clumsy as a long-term solution.
Currently, we don't have delete endpoints that actually DELETE. We have PUT endpoints that mark an entry as "DONT SHOW/REMOVED" and another "undelete" endpoint that restores the viewing, since we technically don't want to delete any data in our implementation of this medical database, for liability purposes.
Does JMeter have a way for me to make it talk to Mongo and delete? I know there is a deprecated way to talk to Mongo via JMeter, but I'm not sure about any modern solutions.
Since I can't add unused code to the repo, does this mean the only solution is for me to make an "extra endpoint" outside of the repo that JMeter can access to delete each entry?
That seems like a viable solution; I'm just not sure if it's the only way to go about it, or if I'm missing something.
MongoDB Test Elements were deprecated due to low interest: keeping the MongoDB driver shipped with JMeter up to date would require extra effort, and the number of users of the MongoDB Test Elements was not that high.
Mailing List Message
Associated JMeter issue
However, given that you don't test MongoDB per se and plan to use the JMeter MongoDB elements only for setup/teardown actions, I believe you can go ahead.
You can get the MongoDB test elements back by adding the following line to your user.properties file:
not_in_menu=
This will "unhide" the MongoDB Source Config and MongoDB Script elements, which you will be able to use for cleaning up the DB. See How to Load Test MongoDB with JMeter for more information, sample queries, tips and tricks.

SonarQube DB lacking values

I connected my SonarQube server to my Postgres DB; however, when I view the "metrics" table, it lacks the actual value of the metric.
Those are all the columns I get, which are not particularly helpful. How can I get the actual values of the metrics?
My end goal is to obtain metrics such as duplicate code, function size, complexity etc. on my projects. I understand I could also use the REST API to do this; however, another application I am using will need a DB to extract data from.
As far as I know, connecting to the DB just helps to store data, not to display it.
You can check the stored data in SonarQube's GUI:
Click on project
Click on Activity
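Since the question mentions the REST API, the metric values are exposed through SonarQube's web API (api/measures/component). A minimal sketch of building such a request; the server URL and project key below are placeholders:

```python
from urllib.parse import urlencode

BASE = "https://sonarqube.example.com"  # placeholder server

def measures_url(component, metric_keys):
    # api/measures/component takes the project key and a comma-separated
    # list of metric keys; urlencode percent-encodes the commas.
    query = urlencode({"component": component, "metricKeys": ",".join(metric_keys)})
    return f"{BASE}/api/measures/component?{query}"

url = measures_url("my-project", ["complexity", "duplicated_lines_density", "functions"])
# A real call would fetch this URL with an authentication token, e.g. via
# urllib.request, and read the values out of the JSON "measures" array.
```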

Meteor app as front end to externally updated mongo database

I'm trying to set up an app that will act as a front end to an externally updated mongo database. The data will be pushed into the database by another process.
So far I have the app connecting to the external Mongo instance and pulling data out with no issues, but it's not reactive (I'm not seeing any of the new data going into the Mongo database).
I've done some digging, and so far I can only find that I might need to set up replica sets and use the oplog. Is there a way to do this without going to replica sets (or is that the best way anyway)?
The code so far is really simple: a single collection, a single publication (pulling out the last 10 records from the database), and a single template just displaying that data.
No deps that I've written (not sure if that's what I'm missing).
Thanks.
Any reason not to use the oplog? From what I've read it is the recommended approach even if your DB isn't updated by an external process, and a must if it is.
Nevertheless, without the oplog your app should see the changes made to the DB by the external process anyway. It will take longer (up to 10 seconds, since Meteor falls back to polling the database), but it should update.
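Enabling oplog tailing is a matter of pointing Meteor at the replica set's local database when starting the app. This config fragment is a sketch, assuming the external Mongo already runs as a replica set; the hostname and database name are placeholders:

```shell
# Point Meteor at the external MongoDB, and at its oplog (kept in the
# replica set's "local" database). mongo.example.com and mydb are placeholders.
export MONGO_URL='mongodb://mongo.example.com:27017/mydb'
export MONGO_OPLOG_URL='mongodb://mongo.example.com:27017/local'
meteor
```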

Is Memcached for me?

I am the SysAdmin for a couple of large online shops and I'm researching Memcached as a possible caching solution.
The most accessed queries are the ones that make up the dynamic product pages, so it would make sense to cache these. Staff regularly use an update program to update the tables with new prices. As I understand it, if I used Memcached the changes would only become apparent after the cache expires, not after my program has updated the data.
In the docs I can see Memcache::flush, which flushes ALL existing items, but is there a way to flush an individual object?
You can see in the docs that there is a delete command that removes one item, and a set command to add or replace one item.
The most important part is to have a solid naming scheme for your keys. Presumably you have a CMS-type page to update/insert rows in your database (MySQL?). Just ensure that you delete the Memcached record whenever you do an update in MySQL and you'll be fine.
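The delete-on-update pattern this answer describes can be sketched as follows; the dicts stand in for a real Memcached client (anything exposing get/set/delete) and for the MySQL table, and the key scheme and function names are illustrative:

```python
cache = {}       # stand-in for a Memcached client
db = {42: 9.99}  # stand-in for the MySQL product table

def product_key(product_id):
    # A consistent key scheme lets you invalidate one product precisely.
    return f"product:{product_id}"

def get_price(product_id):
    key = product_key(product_id)
    if key not in cache:
        cache[key] = db[product_id]  # cache miss: query the database
    return cache[key]

def update_price(product_id, new_price):
    db[product_id] = new_price
    # Invalidate just this product instead of flushing the whole cache;
    # the next get_price() call repopulates it with the fresh value.
    cache.pop(product_key(product_id), None)
```

With this in place, the staff's price updates become visible on the next page load rather than only after the cache entry expires.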