Setting up Cygnus-PostgreSQL for historical data persistence - postgresql

Now that I have a real device (LWM2M, using the Wakaama implementation) running and sending data to Orion (I can confirm from the server log: 'Observers created successfully'), I want to proceed with historical data storage.
I am not sure how to start. I am using a docker-compose file to start all the services. I already have the postgres image pulled and running, and would like to use it for persistence.
I guess I need to create the database schema to use for storage; any link to Cygnus-PostgreSQL installation/persistence documentation would be appreciated.

Should anyone be looking at how to approach a similar situation to the one described above: I found the explanations here, particularly the customisations section, very helpful; please read it.
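For anyone wiring this up in docker-compose, a rough sketch of what the Cygnus service can look like is below. Everything in it is an assumption to adapt: the service names, credentials and exposed ports are placeholders, and the CYGNUS_POSTGRESQL_* environment variable names should be checked against the documentation of the fiware/cygnus-ngsi image version you actually run.

    # Fragment of a docker-compose.yml -- a minimal, unverified sketch.
    services:
      postgres:
        image: postgres
        environment:
          - POSTGRES_PASSWORD=example          # placeholder credentials

      cygnus:
        image: fiware/cygnus-ngsi
        depends_on:
          - postgres
        expose:
          - "5055"                             # notification port (assumed)
          - "5080"                             # Cygnus admin API (assumed)
        environment:
          - CYGNUS_POSTGRESQL_HOST=postgres    # the compose service name above
          - CYGNUS_POSTGRESQL_PORT=5432
          - CYGNUS_POSTGRESQL_USER=postgres    # placeholder credentials
          - CYGNUS_POSTGRESQL_PASS=example
          - CYGNUS_POSTGRESQL_ENABLE_CACHE=true
          - CYGNUS_SERVICE_PORT=5055
          - CYGNUS_API_PORT=5080
          - CYGNUS_LOG_LEVEL=DEBUG

As far as I understand, with the default row-style persistence the PostgreSQL sink creates the schema (named after the FIWARE service) and the tables itself when the first notification arrives, so you normally don't have to create them by hand; you do, however, need an Orion subscription whose notification URL points at the Cygnus notification port so that updates actually reach the sink.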

Related

Google Cloud SQL Migration Job stuck on Running

I've got a database on Google SQL that is used by our application running on kubernetes in GKE.
The MySQL instance is running on 5.6, and I need to update it to 5.7, so I tried using the new migration jobs.
I've set up the connection profile and all the required permissions for the source DB, then followed the instructions to make a continuous migration.
The job says it's running, migrating the ~450 GB database. After about a day, it's still running, the storage used seems to have stopped growing, and the replication delay is at 0. The source database is not currently in use (that's why I'm using it to try this out before doing the same with a more important db).
According to this, if the dump phase is done, I should be able to promote the instance, but the promote button remains greyed out, and there's no way to check the running state (it only says "running", and I don't see any way to check whether it's dumping, doing CDC, or anything else).
The documentation seems a bit lacking, and I couldn't find anything by googling around. Has anyone been using this?
In short, my questions are:
Why can't I promote the instance?
And how can I check what phase the migration is in?
Here's a screencap of my job (posted as a link because SO doesn't let me embed images yet).
Thanks.
P.S.: the tag that the documentation says should be used on Stack Overflow is google-cloud-database-migration-service, which is too long and Stack Overflow doesn't allow it, so I used google-cloud-sql instead :/
I am seeing an issue like this, but possibly more frustrating: after a week migrating a 2 TB database, the storage resets to near zero and the full dump restarts, without any errors or indication of what happened.
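In case it helps anyone stuck at the same point: one way to get at least the raw job state outside the console is the Database Migration Service command group in gcloud, assuming it is available in your gcloud version. The job name and region below are placeholders.

    # Placeholder job name and region -- adjust to your project.
    gcloud database-migration migration-jobs describe my-migration-job \
        --region=us-central1

    # The "state" field (and any error detail) in the output is, as far as I
    # can tell, the closest thing to a phase indicator exposed outside the UI.

It does not break the run down into dump vs. CDC either, so this is more of a sanity check than a real answer to the second question.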

Data is not being saved in mongodb

Hi, I am using MongoDB and deploying it on AWS, but the data is not being saved properly on the server.
I created many collections, but the data is not present inside the collections.
Do I need any other settings? Please let me know.
The database named READ_ME_TO_RECOVER_YOUR_DATA suggests that you created the mongod server without authentication, and some hackers were able to steal/delete all of your data, and are probably now expecting you to pay some bitcoin to get it back.
I doubt they actually made a backup of your data before deleting it, since they don't actually care about you or your data.
There was a blog post from the MongoDB folks a couple of years ago about how to avoid this: https://www.mongodb.com/blog/post/update-how-to-avoid-a-malicious-attack-that-ransoms-your-data
The #1 recommendation is to enable authentication.
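To make that recommendation concrete, the usual sequence is roughly the following: create an administrative user while still connected without auth, then turn authorization on and restart mongod. The user name, password and config path below are placeholders; check the exact option names against the docs for your MongoDB version.

    // In the mongo shell, connected locally before auth is enabled.
    // User name and password are placeholders -- pick your own.
    use admin
    db.createUser({
      user: "admin",
      pwd: "choose-a-strong-password",
      roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
    })

and then in /etc/mongod.conf (the path may differ on your system):

    security:
      authorization: enabled
    net:
      bindIp: 127.0.0.1   # or your private/VPC address only, not 0.0.0.0

After restarting mongod, every client has to authenticate, which is what blocks the drive-by deletions described above.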

High Availability AEM Author

I’ve been working with AEM for over a year now and lately I’ve been trying to move into a high availability setup for author.
My problem is that whenever I spin up a server, add sites, and then spin up another server, the data doesn't persist to the new instance. I know why this doesn't work in the traditional setup (the repository is stored locally on the file system). However, I've attempted using the S3 backend, and it results in the same problem where the data doesn't persist onto the new instance.
I've read about using MongoMK (https://helpx.adobe.com/experience-manager/6-3/sites/deploying/using/recommended-deploys.html), i.e. MongoDB as a store, but they also recommend using S3 as the backend.
My question is: does anyone have any experience with multiple AEM author instances sharing the same data and node stores? If so, do you have any suggestions as to how to get this working, or resources where I can read about this?
After further research it seems the only option for backend clustering is to use MongoDB. My attempts to use MongoDB as a backend for AEM have failed. When I attempt to use the crx3 and crx3mongo run modes, it looks like AEM hangs after opening a connection to Mongo. I have verified that nothing is getting placed into the DB: show dbs returns 0.000GB for the corresponding database.
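For reference, this is roughly what starting an author against MongoMK is supposed to look like, in case it helps with comparing setups. The quickstart jar name, heap size, database name and replica-set members are all placeholders; the oak.mongo.* system properties and the crx3,crx3mongo run modes are the parts taken from the Adobe documentation linked above.

    # Placeholder jar name, memory settings and Mongo hosts -- adjust to your setup.
    java -Xmx4g \
      -Doak.mongo.uri="mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=aem" \
      -Doak.mongo.db=aem-author \
      -jar cq-author-p4502.jar -r author,crx3,crx3mongo -nointeractive

Every author in the cluster has to point at the same replica set (and, if a shared S3 data store is used, at the same bucket); otherwise each instance keeps writing to its own repository and the "data doesn't persist to the new instance" symptom reappears.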

Why is my mongodb collection deleted automatically?

I have MongoDB running on three EC2 instances and I have created a replica set. Previously I had a space-constraint problem which stopped my mongod process, thereby halting the application, and then in an incident a couple of days back some of my collections were gone from the database, so I set up logging and so on for my database just to catch it if anything like that happens again. In a fresh incident this morning I was unable to log in to my system, and that's when I found out that the whole database was empty. I checked other SO questions like this one, which suggest setting up a TTL, which I haven't done at all.
Now how do I debug this situation and do a proper root cause analysis? I can't even find anything in my debug logs. The collections just vanished. How do I set up a proper logging mechanism, and how do I ensure that my collections are never deleted again?
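Since the linked answers bring up TTL, one quick thing worth ruling out first is a TTL index, because it silently removes documents. A sketch of how to check (the collection name is a placeholder):

    // In the mongo shell -- "mycollection" is a placeholder name.
    // A TTL index is an ordinary index that carries an "expireAfterSeconds"
    // field, so listing the indexes shows whether one exists.
    db.mycollection.getIndexes()

    // A TTL index entry would look roughly like this:
    // { "v": 2, "key": { "createdAt": 1 }, "name": "createdAt_1",
    //   "expireAfterSeconds": 3600 }

A TTL index only ever removes individual documents, though; it does not drop whole collections or databases, so it would not explain an empty database by itself.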
Today I got a mail from Amazon saying that I was probably running an unsecured MongoDB instance and that this may have caused the issue. So, whoever is facing this issue, please go through the Security Checklist provided by MongoDB. There are some points in there that are absolutely necessary.
1. Enable Access Control and Enforce Authentication
2. Encrypt Communication
3. Limit Network Exposure
These three are the core and depending upon how many people access your database you can Configure Role-Based Access Control.
These are all things I have done. Before this incident I had not taken security that seriously, but after being hit by it I made sure I had all the necessary precautions in place.
Hope this helps someone.
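For anyone wanting to see what the first three points translate to in configuration, a minimal mongod.conf sketch follows. The bind address and certificate path are placeholders, and the option names are the MongoDB 4.2+ ones (older versions use net.ssl.* instead of net.tls.*), so check them against the docs for your version.

    # /etc/mongod.conf -- illustrative sketch only
    security:
      authorization: enabled            # 1. enforce authentication
    net:
      bindIp: 127.0.0.1,10.0.0.5        # 3. limit network exposure (placeholder address)
      tls:
        mode: requireTLS                # 2. encrypt communication
        certificateKeyFile: /etc/ssl/mongodb.pem   # placeholder path

Role-based access control is then a matter of creating users with only the roles they need (db.createUser with specific roles per database) instead of one all-powerful account.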

iOS encryption to use web data securely

I'm developing an app that's pretty simple, and the important part of it is the content, which consists of lots of info that has been gathered over many years. I want to format it in a nice way to show to the user.
When the user downloads the app and first loads it, it goes to the server to get the whole database onto the phone. Then he can see the important items and sort/filter through them. To avoid somebody taking my database, I'll use an SSL connection. I know that if they want they could use the app to see every piece of content one by one, but there's nothing to do about that.
The thing is: I have the data in the cloud (mine). I can securely download it using an SSL connection (any other ideas to secure the transfer?). When I get it onto the device, I'll save it in a db (Core Data is the obvious choice).
How can I secure the data in the internal database, so that if the app is hacked, someone cannot access the db? I would put it in the keychain, but the db is rather large for that and it's not that important. (It's not sensitive info, just info I don't want anybody to grab in bulk.)
The other thing I could do is never store anything on the device and have the user always make calls to the cloud, only giving them the option to save their favorite picks to the device. But that's too time consuming and there is the sync issue.
This is a reference I looked up about a similar issue, without the part I'm asking answered:
How to encrypt iPhone upload and download of info?
Basically, the only choice is to use SqlCipher. Of course, you have to port it to iPhone yourself (unless someone else has posted a port since last I looked). But it's not an insurmountable task.
Of course, even with SqlCipher you have the challenge of storing the key somehow. There's no really secure way to do this -- you have to use some form of "security by obscurity".
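To give an idea of what the SqlCipher route looks like in practice (a prebuilt distribution can now be pulled in as a dependency, so the port no longer has to be done by hand), here is a minimal sketch. The module name, file path and passphrase handling are assumptions that depend on how SQLCipher is integrated into the project; the essential point is only that the connection must be keyed before anything else touches it.

    import Foundation
    import SQLCipher  // module name depends on how SQLCipher is integrated;
                      // the system SQLite3 module does not expose sqlite3_key

    // Minimal, illustrative sketch: open an encrypted database and verify the key.
    func openEncryptedDatabase(at path: String, passphrase: String) -> OpaquePointer? {
        var db: OpaquePointer?
        guard sqlite3_open(path, &db) == SQLITE_OK else { return nil }

        // Keying must happen before any other operation on the connection.
        let rc = passphrase.withCString { key in
            sqlite3_key(db, key, Int32(strlen(key)))
        }
        guard rc == SQLITE_OK else { sqlite3_close(db); return nil }

        // A cheap query against sqlite_master confirms the key actually decrypts the file.
        guard sqlite3_exec(db, "SELECT count(*) FROM sqlite_master;", nil, nil, nil) == SQLITE_OK else {
            sqlite3_close(db)
            return nil
        }
        return db
    }

As for the key itself: hard-coding it is exactly the "security by obscurity" mentioned above; a slightly better option is to generate a random key on first launch, keep it in the keychain (the key is small even if the database is not), and use that to key the database.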
Why not just have some private key info stored in the code, and then when you want to download the database just have it query the server with the key? That way you won't need to worry about SSL or encryption in the downloading part. In regards to storing it, I agree with Hot Licks: SqlCipher appears to be the best and only option. However, watch out for encryption, as you will have to declare it to Apple and get all kinds of export permits (http://stackoverflow.com/questions/2135081/does-my-application-contain-encryption).
Hope this helps,
Jonathan