How can I see the creation time stamp of data being accessed in Google Cloud Storage by looking at the audit logs? [closed] - google-cloud-storage

I am analysing some GCS audit logs, and I want to categorize the data being accessed in buckets based on how old that data is. Let's say there is a bucket mybucket with file1 uploaded in 2020, file2 in 2021, and file3 in 2022. While analysing the audit logs, I want to be able to group the access patterns by the year in which the data was created. My question is: is there a way to get the creation-time metadata of the data being accessed in the audit logs? Or, if there is a better way of achieving this, please share. Thanks!

Since audit logs do not show the creation timestamps of the resources being accessed, you can try using a Cloud Asset Inventory export (see the Cloud Asset Inventory docs) to create a BigQuery table of bucket names and creation times, and then join that onto the IAM policy audit log table (see the reference docs).
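To make that join concrete, here is a minimal sketch using the BigQuery Python client. It assumes you have already flattened an asset-inventory export into a table with bucket_name and time_created columns and routed the Data Access audit logs to BigQuery through a log sink; every project, dataset, table, and column name below is a placeholder, and the LIKE join is only illustrative.

```python
# Minimal sketch, not a drop-in solution. It assumes:
# (a) an asset-inventory export flattened into `my_dataset.bucket_inventory`
#     with columns `bucket_name` and `time_created`, and
# (b) Data Access audit logs exported via a log sink into
#     `my_dataset.data_access_logs` with a `resource_name` column.
# All project/dataset/table/column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project id

query = """
SELECT
  EXTRACT(YEAR FROM inv.time_created) AS created_year,
  COUNT(*)                            AS access_count
FROM `my-project.my_dataset.data_access_logs` AS log
JOIN `my-project.my_dataset.bucket_inventory` AS inv
  ON log.resource_name LIKE CONCAT('%/buckets/', inv.bucket_name, '%')
GROUP BY created_year
ORDER BY created_year
"""

for row in client.query(query).result():
    print(f"created in {row.created_year}: {row.access_count} accesses")
```

The same grouping could be done per object rather than per bucket if you maintain an object-level inventory table, but the bucket-level join above is usually the simpler starting point.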

Related

Deleted resources in kubernetes [closed]

Someone deleted the deployment, and I tried to find out who from the event logs, but I got the response below:
No resources found in prometheus namespace.
Is there a plugin or something to let me know who deleted this resource?
Thanks a lot in advance.
Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster. The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself.
As per the documentation, auditing allows cluster administrators to answer the following questions (a minimal sketch of pulling the delete event out of an audit log follows the list):
what happened?
when did it happen?
who initiated it?
on what did it happen?
where was it observed?
from where was it initiated?
to where was it going?
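Note that audit logging must already have been enabled when the deletion happened; it cannot recover events retroactively. If it was enabled, each audit event is typically written as one JSON object per line to the file configured via the API server's --audit-log-path flag. Below is a minimal sketch, not a ready-made plugin, that scans such a log for delete requests against Deployments in the prometheus namespace; the log file path is an assumption and depends on your cluster's configuration.

```python
# Minimal sketch: scan a Kubernetes audit log (JSON lines, audit.k8s.io/v1 Event
# objects) for delete requests against Deployments in the "prometheus" namespace.
# The path below is an assumption -- it depends on how the API server's
# --audit-log-path flag was configured in your cluster.
import json

AUDIT_LOG = "/var/log/kubernetes/audit/audit.log"  # placeholder path

with open(AUDIT_LOG) as fh:
    for line in fh:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or malformed lines
        ref = event.get("objectRef", {})
        if (
            event.get("verb") == "delete"
            and ref.get("resource") == "deployments"
            and ref.get("namespace") == "prometheus"
        ):
            print(
                event.get("requestReceivedTimestamp"),
                event.get("user", {}).get("username"),
                "deleted",
                ref.get("name"),
            )
```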

Export outlook data to local drive and then delete off exchange server [closed]

A user of mine has a disk that is 99% full. They need their files for future reference but need to get rid of them somehow.
I want to export all Outlook data from 1/1/2022 and older, save it to their OneDrive, and then delete what I just exported from the Exchange server.
What's the most efficient way of doing that?
I tried archiving, but I learned that it makes a copy of the data and keeps it on the server.
I tried doing an export, but that appears to be almost the same as archiving, just not a "live" version.
I tried manually searching a date range, moving the results to a folder, and then deleting them, but that was going to take forever because of how long it took to load.
A user of mine has 99% full disk space.
In that case you can limit the size of the cached data in Outlook, or consider using non-cached mode, where all of the data is retrieved from the Exchange server online.
You can read more about that in the Managing Outlook Cached Mode and OST File Sizes article.

Share same disk with different servers [closed]

I am not very experienced with PostgreSQL and I have not found a solution to this question yet.
I have data in a database stored in a cloud space (OneDrive). I can manage it with my Windows 10 installation of PostgreSQL 12. I also have a laptop running Linux (Manjaro) with a PostgreSQL server installed.
I would like to understand whether I can access the same cloud data with both servers. Consider that, since I am a single user, data access is never concurrent; I only use one server at a time.
I read that it is possible to share the same data here:
https://www.postgresql.org/docs/12/different-replication-solutions.html
However, I cannot find a detailed procedure. Any suggestion? Perhaps there is a better solution? Thanks
Your question is pretty unclear; I'll try my best to straighten you out.
First, a database doesn't access data, it holds data. So your desire to have two database servers access the same data does not make a lot of sense.
There are three options that come to mind:
You want to hold only one copy of the data and want to access it from different points: in that case you don't need a server running on your laptop, only a database client that accesses the database in the cloud.
This is the normal way to operate; a minimal client-connection sketch follows the three options below.
You want a database on your laptop to hold a copy of the data from the other database, but you want to be able to modify the copy, so that the two databases will diverge: use pg_dump to export the definitions and data from the cloud database and restore the dump locally.
You want the database on the laptop to be and remain a faithful copy of the cloud database: if your cloud provider allows replication connections, set up your laptop database as a streaming replication standby of the cloud database.
The consequence is that you only have read access to the local copy of the database.
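To illustrate the first option, here is a minimal sketch using psycopg2 from the Linux laptop. It assumes the database actually runs on a PostgreSQL server that is reachable over the network (rather than being a data directory synced through OneDrive); the host name, database name, and credentials are placeholders.

```python
# Minimal sketch of option 1: no local server needed, only a client connecting
# to the cloud-hosted PostgreSQL instance. Host, database, user, and password
# below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cloud-postgres.example.com",  # placeholder host
    port=5432,
    dbname="mydb",
    user="myuser",
    password="secret",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])

conn.close()
```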

Lotus Notes: Corrupted ID file [closed]

I'm working as a service desk guy. A user reported today that he's unable to start his Lotus Notes client because he gets this error: "Problem with ID". I have asked our Lotus Notes/Domino administrators if they have any idea what might be wrong, but they simply turned me down, saying the user should ask his local IT team for help, which is hardly a solution, because the user is somewhere in Asia on a weak (if any) internet connection most of the time.
I'm quite positive something must be wrong with his local ID file, because he is able to send e-mails from Lotus Traveler and probably webmail too (but hasn't confirmed that yet).
I would be grateful for any suggestions what might be causing the error and how to fix it.
If the ID file is corrupt, then the only solution is to replace it with a working copy. There is no "ID fixup" or any other cure for a defective ID.
In an environment where all the capabilities that IBM delivers are in use, getting a new ID file is as easy as deleting the old one and restarting the client.
This feature is called "ID Vault".
Without a vault, someone at the helpdesk has to either
- recover the ID using ID recovery (if in use; it is the mechanism that was used before ID Vault)
- get the ID file from a backup
- recreate the ID file (losing all data that was encrypted with the broken ID)
So: there is nothing for you to do, as all of these steps need a Domino administrator.
As written earlier: the only chance is to rename the ID file. If ID Vault is in place, it will be recreated automatically.

Looking for a REST-based remote filesystem [closed]

This is a very open/general question (I hope not too general anyway:))
I'm looking for a library/module that could be plugged in a web server (like apache) and handle REST requests to store / retrieve / delete files.
Something like Amazon's S3 or Windows Azure storage, but open-sourced.
Does such a thing exist?
mod_dav? DAV is the original generic/bare-bones REST. You PUT files, then you can GET them back or DELETE them... But that doesn't provide any management by itself, and maybe that is what you are looking for. Have you looked into OpenStack, specifically the object storage component?
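To show how bare-bones that PUT/GET/DELETE pattern really is (this is not mod_dav, S3, or Swift, just an illustration), here is a minimal sketch of a toy REST file store in Python with Flask; the storage directory and port are arbitrary, and there is no authentication or management layer.

```python
# Minimal sketch of a bare-bones REST file store: PUT stores a file, GET returns
# it, DELETE removes it. Purely illustrative -- no auth, no management layer.
import os
from flask import Flask, request, send_file, abort

STORAGE_DIR = "/tmp/filestore"  # arbitrary choice for this sketch
os.makedirs(STORAGE_DIR, exist_ok=True)

app = Flask(__name__)

def _path(name: str) -> str:
    # Keep file names flat to avoid path traversal in this toy example.
    return os.path.join(STORAGE_DIR, os.path.basename(name))

@app.route("/files/<name>", methods=["PUT"])
def put_file(name):
    with open(_path(name), "wb") as fh:
        fh.write(request.get_data())
    return "", 201

@app.route("/files/<name>", methods=["GET"])
def get_file(name):
    if not os.path.exists(_path(name)):
        abort(404)
    return send_file(_path(name))

@app.route("/files/<name>", methods=["DELETE"])
def delete_file(name):
    if not os.path.exists(_path(name)):
        abort(404)
    os.remove(_path(name))
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```

A client can then PUT /files/report.txt with the file body, GET it back, or DELETE it; everything beyond that (listing, auth, quotas) is the "management" the products below provide.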
There is OpenStack Swift, which is an open-source clone of Amazon's S3. It is linearly scalable and provides a REST interface to the data. http://swift.openstack.org/
I solved a similar problem using Node-FSAPI, a NodeJS-based server that exposes a selected part of the file system as a REST API. (It's not an Apache module like you asked for, but it solves the same problem.)
Are you looking for a distributed file system at the same time? If so, I suggest using Apache Hadoop's HDFS and the WebHDFS REST API to access the file system.
However, I am not sure whether this can be deployed as an extension to Apache or any other web server :-( Just wanted to share this idea, in case you are looking for a distributed file system with guaranteed reliability, etc.