Error deploying to LocalStack with message "[ServiceName] has an ongoing operation in progress and is not stable"

I'm trying to deploy a CloudFormation stack to LocalStack, but I'm getting an error that seems to be related to the message "has an ongoing operation in progress and is not stable".
These are the commands I am running. The first two commands seem to run fine, but I get the error on the third.
cdklocal synth --no-staging --app "npx ts-node --prefer-ts-exts packages/localstack-pipeline/index.ts" > template.yaml
cdklocal bootstrap --app "npx ts-node --prefer-ts-exts packages/localstack-pipeline/index.ts"
cdklocal deploy --app "npx ts-node --prefer-ts-exts packages/localstack-pipeline/index.ts" --require-approval never
Below is the verbose log output.
LenderReportingPipelineStack: deploying...
Retrieved account ID 000000000000 from disk cache
Assuming role 'arn:aws:iam::000000000000:role/cdk-hnb659fds-deploy-role-000000000000-us-east-1'.
Waiting for stack CDKToolkit to finish creating or updating...
[0%] start: Publishing a5774a0664d245dfd1d6b06a7c700d2bbdcc4cb179851ac3124e6347fc826f58:000000000000-us-east-1
Retrieved account ID 000000000000 from disk cache
Assuming role 'arn:aws:iam::000000000000:role/cdk-hnb659fds-file-publishing-role-000000000000-us-east-1'.
[0%] check: Check s3://cdk-hnb659fds-assets-000000000000-us-east-1/a5774a0664d245dfd1d6b06a7c700d2bbdcc4cb179851ac3124e6347fc826f58.json
[0%] found: Found s3://cdk-hnb659fds-assets-000000000000-us-east-1/a5774a0664d245dfd1d6b06a7c700d2bbdcc4cb179851ac3124e6347fc826f58.json
[100%] success: Published a5774a0664d245dfd1d6b06a7c700d2bbdcc4cb179851ac3124e6347fc826f58:000000000000-us-east-1
LenderReportingPipelineStack: checking if we can skip deploy
LenderReportingPipelineStack: template has changed
LenderReportingPipelineStack: deploying...
Removing existing change set with name cdk-deploy-change-set if it exists
Attempting to create ChangeSet with name cdk-deploy-change-set to update stack LenderReportingPipelineStack
LenderReportingPipelineStack: creating CloudFormation changeset...
Initiated creation of changeset: arn:aws:cloudformation:us-east-1:000000000000:changeSet/cdk-deploy-change-set/e1f5df83; waiting for it to finish creating...
Waiting for changeset cdk-deploy-change-set on stack LenderReportingPipelineStack to finish creating...
Initiating execution of changeset arn:aws:cloudformation:us-east-1:000000000000:changeSet/cdk-deploy-change-set/e1f5df83 on stack LenderReportingPipelineStack
Execution of changeset arn:aws:cloudformation:us-east-1:000000000000:changeSet/cdk-deploy-change-set/e1f5df83 on stack LenderReportingPipelineStack has started; waiting for the update to complete...
Waiting for stack LenderReportingPipelineStack to finish creating or updating...
Stack LenderReportingPipelineStack has an ongoing operation in progress and is not stable (CREATE_IN_PROGRESS (Deployment succeeded))
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | CREATE_IN_PROGRESS | AWS::CloudFormation::Stack | LenderReportingPipelineStack
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/ArtifactsBucketEncryptionKey (PipelineArtifactsBucketEncryptionKeyF5BF0670)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/ArtifactsBucketEncryptionKeyAlias (PipelineArtifactsBucketEncryptionKeyAlias94A07392)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/ArtifactsBucket (PipelineArtifactsBucketAEA9A052)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/ArtifactsBucket/Policy (PipelineArtifactsBucketPolicyF53CCC52)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/Role (PipelineRoleB27FAA37)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/Role/DefaultPolicy (PipelineRoleDefaultPolicy7BDC1ABB)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline (Pipeline9850B417)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/Build/Synth/CodePipelineActionRole (PipelineBuildSynthCodePipelineActionRole4E7A6C97)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/Build/Synth/CodePipelineActionRole/DefaultPolicy (PipelineBuildSynthCodePipelineActionRoleDefaultPolicy92C90290)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/Build/Synth/CdkBuildProject/Role (PipelineBuildSynthCdkBuildProjectRole231EEA2A)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/Build/Synth/CdkBuildProject/Role/DefaultPolicy (PipelineBuildSynthCdkBuildProjectRoleDefaultPolicyFB6C941C)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/Build/Synth/CdkBuildProject (PipelineBuildSynthCdkBuildProject6BEFA8E6)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/UpdatePipeline/SelfMutate/CodePipelineActionRole (PipelineUpdatePipelineSelfMutateCodePipelineActionRoleD6D4E5CF)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/UpdatePipeline/SelfMutate/CodePipelineActionRole/DefaultPolicy (PipelineUpdatePipelineSelfMutateCodePipelineActionRoleDefaultPolicyE626265B)
LenderReportingPipelineStack | 0/42 | 7:51:56 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/local/Migrations/CodePipelineActionRole (PipelinelocalMigrationsCodePipelineActionRole80230725)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/local/Migrations/CodePipelineActionRole/DefaultPolicy (PipelinelocalMigrationsCodePipelineActionRoleDefaultPolicyDA557DA9)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/local/Migrations/Project/Role (PipelinelocalMigrationsProjectRole137B1A17)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/local/Migrations/Project/Role/DefaultPolicy (PipelinelocalMigrationsProjectRoleDefaultPolicy527888FB)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Pipeline/local/Migrations/Project (PipelinelocalMigrationsProject746BB2AD)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/UpdatePipeline/SelfMutation/Role (PipelineUpdatePipelineSelfMutationRole57E559E8)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/UpdatePipeline/SelfMutation/Role/DefaultPolicy (PipelineUpdatePipelineSelfMutationRoleDefaultPolicyA225DA4E)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/UpdatePipeline/SelfMutation (PipelineUpdatePipelineSelfMutationDAA41400)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileRole (PipelineAssetsFileRole59943A77)
LenderReportingPipelineStack | 0/42 | 7:51:57 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileRole/DefaultPolicy (PipelineAssetsFileRoleDefaultPolicy14DB8755)
LenderReportingPipelineStack | 0/42 | 7:51:58 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset1/Default (PipelineAssetsFileAsset185A67CB4)
LenderReportingPipelineStack | 0/42 | 7:51:58 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset2/Default (PipelineAssetsFileAsset24D2D639B)
LenderReportingPipelineStack | 0/42 | 7:51:58 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset3/Default (PipelineAssetsFileAsset3FE71B523)
LenderReportingPipelineStack | 0/42 | 7:51:58 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset4/Default (PipelineAssetsFileAsset474303B7D)
LenderReportingPipelineStack | 0/42 | 7:51:58 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset5/Default (PipelineAssetsFileAsset5184A5C2F)
LenderReportingPipelineStack | 0/42 | 7:51:58 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset6/Default (PipelineAssetsFileAsset669C72F3C)
LenderReportingPipelineStack | 0/42 | 7:51:58 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset7/Default (PipelineAssetsFileAsset7A51C54D0)
LenderReportingPipelineStack | 0/42 | 7:51:58 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset8/Default (PipelineAssetsFileAsset81DAB433B)
LenderReportingPipelineStack | 0/42 | 7:51:59 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset9/Default (PipelineAssetsFileAsset9F08741A2)
LenderReportingPipelineStack | 0/42 | 7:51:59 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset10/Default (PipelineAssetsFileAsset102484C317)
LenderReportingPipelineStack | 0/42 | 7:51:59 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset11/Default (PipelineAssetsFileAsset11D534F05C)
LenderReportingPipelineStack | 0/42 | 7:51:59 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset12/Default (PipelineAssetsFileAsset12E916E136)
LenderReportingPipelineStack | 0/42 | 7:51:59 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset13/Default (PipelineAssetsFileAsset13BC3E73E7)
LenderReportingPipelineStack | 0/42 | 7:51:59 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset14/Default (PipelineAssetsFileAsset14A3CA232F)
LenderReportingPipelineStack | 0/42 | 7:52:00 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | Pipeline/Assets/FileAsset15/Default (PipelineAssetsFileAsset15B22B8501)
LenderReportingPipelineStack | 0/42 | 7:52:00 AM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | CDKMetadata/Default (CDKMetadata)
LenderReportingPipelineStack | 1/42 | 7:52:05 AM | UPDATE_COMPLETE | AWS::CloudFormation::Stack | Pipeline/Pipeline/ArtifactsBucketEncryptionKey (PipelineArtifactsBucketEncryptionKeyF5BF0670)
LenderReportingPipelineStack | 2/42 | 7:52:05 AM | UPDATE_COMPLETE | AWS::CloudFormation::Stack | Pipeline/Pipeline/ArtifactsBucketEncryptionKeyAlias (PipelineArtifactsBucketEncryptionKeyAlias94A07392)
LenderReportingPipelineStack | 3/42 | 7:52:05 AM | UPDATE_COMPLETE | AWS::CloudFormation::Stack | Pipeline/Pipeline/ArtifactsBucket (PipelineArtifactsBucketAEA9A052)
LenderReportingPipelineStack | 4/42 | 7:52:06 AM | UPDATE_COMPLETE | AWS::CloudFormation::Stack | Pipeline/Pipeline/ArtifactsBucket/Policy (PipelineArtifactsBucketPolicyF53CCC52)
LenderReportingPipelineStack | 5/42 | 7:52:06 AM | UPDATE_COMPLETE | AWS::CloudFormation::Stack | Pipeline/Pipeline/Role (PipelineRoleB27FAA37)
LenderReportingPipelineStack | 5/42 | 7:52:07 AM | CREATE_FAILED | AWS::CloudFormation::Stack | LenderReportingPipelineStack
Failed resources:
❌ LenderReportingPipelineStack failed: Error: The stack named LenderReportingPipelineStack failed to deploy: CREATE_FAILED (Deployment failed)
at waitForStackDeploy (/home/jonathanbyrne/.nvm/versions/node/v14.19.0/lib/node_modules/aws-cdk/lib/api/util/cloudformation.ts:309:11)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at prepareAndExecuteChangeSet (/home/jonathanbyrne/.nvm/versions/node/v14.19.0/lib/node_modules/aws-cdk/lib/api/deploy-stack.ts:355:26)
at CdkToolkit.deploy (/home/jonathanbyrne/.nvm/versions/node/v14.19.0/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:208:24)
at initCommandLine (/home/jonathanbyrne/.nvm/versions/node/v14.19.0/lib/node_modules/aws-cdk/lib/cli.ts:310:12)
The stack named LenderReportingPipelineStack failed to deploy: CREATE_FAILED (Deployment failed)
Error: The stack named LenderReportingPipelineStack failed to deploy: CREATE_FAILED (Deployment failed)
at waitForStackDeploy (/home/jonathanbyrne/.nvm/versions/node/v14.19.0/lib/node_modules/aws-cdk/lib/api/util/cloudformation.ts:309:11)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at prepareAndExecuteChangeSet (/home/jonathanbyrne/.nvm/versions/node/v14.19.0/lib/node_modules/aws-cdk/lib/api/deploy-stack.ts:355:26)
at CdkToolkit.deploy (/home/jonathanbyrne/.nvm/versions/node/v14.19.0/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:208:24)
at initCommandLine (/home/jonathanbyrne/.nvm/versions/node/v14.19.0/lib/node_modules/aws-cdk/lib/cli.ts:310:12)

Related

Postgres cannot create database but can create a user

I am using Ubuntu Linux and trying to use Postgres as my database. It worked fine when I created a user:
CREATE USER username;
But when I try to create a database, it returns nothing:
CREATE DATABASE databasename;
What is happening with my Postgres? Here is the output of pg_stat_activity:
datid | datname | pid | leader_pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | query_id | query | backend_type
-------+----------+------+------------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+---------------------+--------+-------------+--------------+----------+---------------------------------+------------------------------
| | 8237 | | | | | | | | 2022-02-02 13:00:47.683187+07 | | | | Activity | AutoVacuumMain | | | | | | autovacuum launcher
| | 8239 | | 10 | postgres | | | | | 2022-02-02 13:00:47.70127+07 | | | | Activity | LogicalLauncherMain | | | | | | logical replication launcher
13726 | postgres | 8329 | | 10 | postgres | psql | | | -1 | 2022-02-02 13:08:52.250244+07 | 2022-02-02 13:09:10.651383+07 | 2022-02-02 13:09:10.651383+07 | 2022-02-02 13:09:10.651393+07 | Lock | object | active | | 740 | | CREATE DATABASE kong; | client backend
13726 | postgres | 8313 | | 10 | postgres | psql | | | -1 | 2022-02-02 13:04:57.265085+07 | 2022-02-02 13:10:40.097817+07 | 2022-02-02 13:10:40.097817+07 | 2022-02-02 13:10:40.09782+07 | | | active | | 740 | | SELECT * FROM pg_stat_activity; | client backend
| | 8235 | | | | | | | | 2022-02-02 13:00:47.664058+07 | | | | Activity | BgWriterHibernate | | | | | | background writer
| | 8234 | | | | | | | | 2022-02-02 13:00:47.654713+07 | | | | Activity | CheckpointerMain | | | | | | checkpointer
| | 8236 | | | | | | | | 2022-02-02 13:00:47.673631+07 | | | | Activity | WalWriterMain | | | | | | walwriter
(7 rows)
(END)
And here is the pg_locks output:
locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+------------------+---------+----------+-------------------------------
relation | 13726 | 12290 | | | | | | | | 7/17 | 8313 | AccessShareLock | t | t |
virtualxid | | | | | 7/17 | | | | | 7/17 | 8313 | ExclusiveLock | t | t |
virtualxid | | | | | 3/15 | | | | | 3/15 | 8329 | ExclusiveLock | t | t |
virtualxid | | | | | 6/12 | | | | | 6/12 | 8335 | ExclusiveLock | t | t |
virtualxid | | | | | 5/3 | | | | | 5/3 | 8266 | ExclusiveLock | t | t |
virtualxid | | | | | 4/1 | | | | | 4/1 | 8264 | ExclusiveLock | t | t |
object | 0 | | | | | | 1262 | 1 | 0 | 6/12 | 8335 | RowExclusiveLock | f | f | 2022-02-02 13:09:30.561821+07
object | 0 | | | | | | 1262 | 1 | 0 | 3/15 | 8329 | ShareLock | f | f | 2022-02-02 13:09:10.651571+07
object | 0 | | | | | | 1262 | 1 | 0 | 4/1 | 8264 | RowExclusiveLock | t | f |
relation | 0 | 1262 | | | | | | | | 3/15 | 8329 | AccessShareLock | t | f |
object | 0 | | | | | | 1262 | 1 | 0 | 5/3 | 8266 | RowExclusiveLock | t | f |
(11 rows)
(END)
Database info
postgres=# \l
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------+----------+----------+---------+---------+-----------------------+---------+------------+--------------------------------------------
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 | | 8529 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +| 8377 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +| 8529 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
(3 rows)
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
kong | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
postgres=#
Using the same name for the database and the user is not good practice and may result in various errors.
When you run the command
CREATE DATABASE databaseName;
PostgreSQL creates the database. This may take some time. Once the database has been created, you will receive the message:
CREATE DATABASE
postgres=#
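If the CREATE DATABASE statement seems to hang rather than finish, the pg_locks output in the question shows it waiting for a lock on pg_database that another session holds. As a rough diagnostic sketch (assuming PostgreSQL 9.6 or later, which provides pg_blocking_pids()), a query along these lines shows which backends are blocking a waiting session:
SELECT waiting.pid    AS waiting_pid,
       waiting.query  AS waiting_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS waiting
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(waiting.pid))
WHERE waiting.wait_event_type = 'Lock';
-- If a stale session turns out to be holding the lock, it can be ended with:
-- SELECT pg_terminate_backend(<blocking_pid>);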
The problem was solved by reinstalling Postgres at an older version (14 was installed; downgrading to 12 solved it). Thanks to everyone here who helped me.

Aggregating data in an array based on date

I'm trying to aggregate data based on timestamp. Basically I'd like to create an array for each day.
So let's say I have a query like so:
SELECT date(task_start) AS started, task_start
FROM tt_records
GROUP BY started, task_start
ORDER BY started DESC;
The output is:
+------------+------------------------+
| started | task_start |
|------------+------------------------|
| 2021-08-30 | 2021-08-30 16:45:55+00 |
| 2021-08-29 | 2021-08-29 06:47:55+00 |
| 2021-08-29 | 2021-08-29 15:41:50+00 |
| 2021-08-28 | 2021-08-28 12:59:20+00 |
| 2021-08-28 | 2021-08-28 14:50:55+00 |
| 2021-08-26 | 2021-08-26 20:46:44+00 |
| 2021-08-24 | 2021-08-24 16:28:05+00 |
| 2021-08-23 | 2021-08-23 16:22:41+00 |
| 2021-08-22 | 2021-08-22 14:01:10+00 |
| 2021-08-21 | 2021-08-21 19:45:18+00 |
| 2021-08-11 | 2021-08-11 16:08:58+00 |
| 2021-07-28 | 2021-07-28 17:39:14+00 |
| 2021-07-19 | 2021-07-19 17:26:24+00 |
| 2021-07-18 | 2021-07-18 15:04:47+00 |
| 2021-06-24 | 2021-06-24 19:53:33+00 |
| 2021-06-22 | 2021-06-22 19:04:24+00 |
+------------+------------------------+
As you can see, the started column has repeating dates.
What I'd like to have is:
+------------+--------------------------------------------------+
| started | task_start |
|------------+--------------------------------------------------|
| 2021-08-30 | [2021-08-30 16:45:55+00] |
| 2021-08-29 | [2021-08-29 06:47:55+00, 2021-08-29 15:41:50+00] |
| 2021-08-28 | [2021-08-28 12:59:20+00, 2021-08-28 14:50:55+00] |
| 2021-08-26 | [2021-08-26 20:46:44+00] |
| 2021-08-24 | [2021-08-24 16:28:05+00] |
| 2021-08-23 | [2021-08-23 16:22:41+00] |
| 2021-08-22 | [2021-08-22 14:01:10+00] |
| 2021-08-21 | [2021-08-21 19:45:18+00] |
| 2021-08-11 | [2021-08-11 16:08:58+00] |
| 2021-07-28 | [2021-07-28 17:39:14+00] |
| 2021-07-19 | [2021-07-19 17:26:24+00] |
| 2021-07-18 | [2021-07-18 15:04:47+00] |
| 2021-06-24 | [2021-06-24 19:53:33+00] |
| 2021-06-22 | [2021-06-22 19:04:24+00] |
+------------+--------------------------------------------------+
I need a query to achieve that. Thank you.
You can use array_agg()
SELECT date(task_start) AS started, array_agg(task_start)
FROM tt_records
GROUP BY started
ORDER BY started DESC;
If you want a JSON array, rather than a native Postgres array, use jsonb_agg() instead.
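For example, the same aggregation returning a JSON array (using the tt_records table and task_start column from the question) could look like this:
SELECT date(task_start) AS started, jsonb_agg(task_start)
FROM tt_records
GROUP BY started
ORDER BY started DESC;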

differential osquery query output to "catchall" topic

I'm using osquery to monitor servers on my network. The following osquery.conf captures snapshots, every minute, of the processes communicating over the network ports and publishes that data to Kafka:
{
  "options": {
    "logger_kafka_brokers": "cp01.woolford.io:9092,cp02.woolford.io:9092,cp03.woolford.io:9092",
    "logger_kafka_topic": "base_topic",
    "logger_kafka_acks": "1"
  },
  "packs": {
    "system-snapshot": {
      "queries": {
        "processes_by_port": {
          "query": "select u.username, p.pid, p.name, pos.local_address, pos.local_port, pos.remote_address, pos.remote_port from processes p join users u on u.uid = p.uid join process_open_sockets pos on pos.pid=p.pid where pos.remote_port != '0'",
          "interval": 60,
          "snapshot": true
        }
      }
    }
  },
  "kafka_topics": {
    "process-port": [
      "pack_system-snapshot_processes_by_port"
    ]
  }
}
Here's an example of the output from the query:
osquery> select u.username, p.pid, p.name, pos.local_address, pos.local_port, pos.remote_address, pos.remote_port from processes p join users u on u.uid = p.uid join process_open_sockets pos on pos.pid=p.pid where pos.remote_port != '0';
+--------------------+-------+---------------+------------------+------------+------------------+-------------+
| username | pid | name | local_address | local_port | remote_address | remote_port |
+--------------------+-------+---------------+------------------+------------+------------------+-------------+
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 49018 | 10.0.1.41 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 49028 | 10.0.1.41 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 49026 | 10.0.1.41 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 50558 | 10.0.1.43 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 50554 | 10.0.1.43 | 9092 |
| cp-kafka-connect | 13646 | java | 10.0.1.41 | 49014 | 10.0.1.41 | 9092 |
| root | 1505 | sssd_be | 10.0.1.41 | 46436 | 10.0.1.89 | 389 |
...
| cp-ksql | 1757 | java | 10.0.1.41 | 56180 | 10.0.1.41 | 9092 |
| cp-ksql | 1757 | java | 10.0.1.41 | 53878 | 10.0.1.43 | 9092 |
| root | 19684 | sshd | 10.0.1.41 | 22 | 10.0.1.53 | 50238 |
| root | 24082 | sshd | 10.0.1.41 | 22 | 10.0.1.53 | 51233 |
| root | 24107 | java | 10.0.1.41 | 56052 | 10.0.1.41 | 9092 |
| root | 24107 | java | 10.0.1.41 | 56054 | 10.0.1.41 | 9092 |
| cp-schema-registry | 24694 | java | 10.0.1.41 | 50742 | 10.0.1.31 | 2181 |
| cp-schema-registry | 24694 | java | 10.0.1.41 | 47150 | 10.0.1.42 | 9093 |
| cp-schema-registry | 24694 | java | 10.0.1.41 | 58068 | 10.0.1.41 | 9093 |
| cp-schema-registry | 24694 | java | 10.0.1.41 | 47152 | 10.0.1.42 | 9093 |
| root | 25782 | osqueryd | 10.0.1.41 | 57700 | 10.0.1.43 | 9092 |
| root | 25782 | osqueryd | 10.0.1.41 | 56188 | 10.0.1.41 | 9092 |
+--------------------+-------+---------------+------------------+------------+------------------+-------------+
Instead of snapshots, I'd like osquery to capture differentials, i.e. to only publish the changes to Kafka.
I tried toggling the snapshot property from true to false. My expectation was that osquery would send the changes. For some reason, when I set "snapshot": false, no data is published to the process-port topic. Instead, all the data is routed to the catchall base_topic.
Can you see what I'm doing wrong?
Update:
I think I'm running into this bug: https://github.com/osquery/osquery/issues/5559
Here's a video walk-through: https://youtu.be/sPdlBBKgJmY
I filed a bug report, with steps to reproduce, in case it's not the same issue: https://github.com/osquery/osquery/issues/5890
Given the context, I can't immediately tell what is causing the issue you are experiencing.
In order to debug this, I would first try using the filesystem logger plugin instead of (or in addition to) the Kafka logger.
Do you get results to the Kafka topic when the query is configured as a snapshot? If so, are you able to verify that the results are actually changing, such that a diff should be generated when the query runs in differential mode?
Can you see results logged locally when you use --logger_plugin=filesystem,kafka?

Inserts (JavaAPI) fail after restarting node in distributed OrientDB cluster

I have a two-node distributed OrientDB system in embedded mode, using TCP-IP for node discovery. The class event is sharded across four clusters. After restarting one node, exactly half of the inserts on that node fail with the error message:
INFO Local node 'orientdb-lab-node2' is not the owner for cluster 'event_1' (it is 'orientdb-lab-node1'). Reloading distributed configuration for database 'test-db' [ODistributedStorage]
and the stack trace:
com.orientechnologies.orient.server.distributed.ODistributedConfigurationChangedException: Local node 'orientdb-lab-node2' is not the owner for cluster 'event_1' (it is 'orientdb-lab-node1')
DB name="test-db"
DB name="test-db"
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.throwSerializedException(OChannelBinaryAsynchClient.java:437)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.handleStatus(OChannelBinaryAsynchClient.java:388)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:270)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:162)
at com.orientechnologies.orient.client.remote.OStorageRemote.beginResponse(OStorageRemote.java:2138)
at com.orientechnologies.orient.client.remote.OStorageRemote$6.execute(OStorageRemote.java:548)
at com.orientechnologies.orient.client.remote.OStorageRemote$6.execute(OStorageRemote.java:542)
at com.orientechnologies.orient.client.remote.OStorageRemote$1.execute(OStorageRemote.java:164)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:235)
at com.orientechnologies.orient.client.remote.OStorageRemote.asyncNetworkOperation(OStorageRemote.java:156)
at com.orientechnologies.orient.client.remote.OStorageRemote.createRecord(OStorageRemote.java:528)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeSaveRecord(ODatabaseDocumentTx.java:2095)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveNew(OTransactionNoTx.java:246)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveRecord(OTransactionNoTx.java:179)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.save(ODatabaseDocumentTx.java:2597)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.save(ODatabaseDocumentTx.java:103)
at com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1802)
at com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1793)
at lab.orientdb.OrientDbClient.insert(OrientDbClient.java:10)
at lab.orientdb.Main.main(Main.java:24)
This is what the cluster configuration looks like from node1:
Nodes 1 and 2 running, 10 inserts on each node
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 8|orientdb-lab-node2|[orientdb-lab-node1]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 7|orientdb-lab-node2|[orientdb-lab-node1]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 20| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Node 2 stopped
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 8|orientdb-lab-node1|[orientdb-lab-node2]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 7|orientdb-lab-node1|[orientdb-lab-node2]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 20| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Node 2 restarted, 5 successful inserts and 5 failed
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 11|orientdb-lab-node2|[orientdb-lab-node1]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 9|orientdb-lab-node2|[orientdb-lab-node1]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 25| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Any tip or advice appreciated. Thanks.
This issue has been resolved in OrientDB 2.2.13-SNAPSHOT, so it should be fixed in a release version very soon: https://github.com/orientechnologies/orientdb/issues/6897

OrientDB distributed mode : data not getting distributed across various nodes

I have started OrientDB Enterprise 2.2.7 with two nodes. Here is how my setup looks.
CONFIGURED SERVERS
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|# |Name |Status|Connections|StartedOn |Binary |HTTP |UsedMemory |FreeMemory |MaxMemory|
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|0 |Batman|ONLINE|3 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|480.98MB (94.49%)|28.02MB (5.51%) |509.00MB |
|1 |Robin |ONLINE|3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |403.50MB (79.35%)|105.00MB (20.65%)|508.50MB |
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
orientdb {db=SocialPosts3}> clusters
Now I have two vertex classes, User and Notes, with an edge type Posted. All vertices and edges have properties, and there is a unique index on both vertex classes.
I started pushing data using the Java API:
while (retry++ != MAX_RETRY) {
    try {
        properties.put(uniqueIndexname, uniqueIndexValue);
        Iterable<Vertex> resultset = graph.getVertices(className, new String[] { uniqueIndexname },
                new Object[] { uniqueIndexValue });
        if (resultset != null) {
            vertex = resultset.iterator().hasNext() ? resultset.iterator().next() : null;
        }
        if (vertex == null) {
            vertex = graph.addVertex("class:" + className, properties);
            graph.commit();
            return vertex;
        } else {
            for (String key : properties.keySet()) {
                vertex.setProperty(key, properties.get(key));
            }
        }
        logger.info("Completed upserting vertex " + uniqueIndexValue);
        graph.commit();
        break;
    } catch (ONeedRetryException ex) {
        logger.warn("Retry for exception - " + uniqueIndexValue);
    } catch (Exception e) {
        logger.error("Can not create vertex - " + e.getMessage());
        graph.rollback();
        break;
    }
}
Similarly for the Notes and edges.
I populated around 200k Users and 3.5M Notes. Now I notice that all the data is going to only one node.
On running the "clusters" command, I see that all the clusters are created on the same node, and hence all the data is present on only one node.
|22 |note | 26|Note | | 75| Robin | [Batman] | true |
|23 |note_1 | 27|Note | |1750902| Batman | [Robin] | true |
|24 |note_2 | 28|Note | |1750789| Batman | [Robin] | true |
|25 |note_3 | 29|Note | | 75| Robin | [Batman] | true |
|26 |posted | 34|Posted | | 0| Robin | [Batman] | true |
|27 |posted_1 | 35|Posted | | 1| Robin | [Batman] | true |
|28 |posted_2 | 36|Posted | |1739823| Batman | [Robin] | true |
|29 |posted_3 | 37|Posted | |1749250| Batman | [Robin] | true |
|30 |user | 30|User | | 102059| Batman | [Robin] | true |
|31 |user_1 | 31|User | | 1| Robin | [Batman] | true |
|32 |user_2 | 32|User | | 0| Robin | [Batman] | true |
|33 |user_3 | 33|User | | 102127| Batman | [Robin] | true |
I see that the CPU of one node is at about 99% and the other is at <1%.
How can I make sure that data is uniformly distributed across all nodes in the cluster?
Update:
The database is propagated to both nodes. I can log in to Studio on either node and see the database listed. Querying either node gives the same results, so the nodes are in sync.
Here is the server log from one of the nodes; it is almost the same on the other node.
2016-08-18 19:28:49:668 INFO [Robin]<-[Batman] Received new status Batman.SocialPosts3=SYNCHRONIZING [OHazelcastPlugin]
2016-08-18 19:28:49:670 INFO [Robin] Current node started as MASTER for database 'SocialPosts3' [OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=2)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+
| | | | MASTER |
| | | |SYNCHRONIZING|
+--------+-----------+----------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman |
+--------+-----------+----------+-------------+
|* | 1 | 1 | X |
|internal| 1 | 1 | |
+--------+-----------+----------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:766 INFO [Robin] Adding node 'Robin' in partition: SocialPosts3 db=[*] v=3 [ODistributedDatabaseImpl$1]
2016-08-18 19:28:49:767 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+-------------+
| | | | MASTER | MASTER |
| | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+-----------+----------+-------------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman | Robin |
+--------+-----------+----------+-------------+-------------+
|* | 2 | 1 | X | o |
|internal| 2 | 1 | | |
+--------+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:767 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:769 WARNI [Robin]->[[Batman]] Requesting deploy of database 'SocialPosts3' on local server... [OHazelcastPlugin]
2016-08-18 19:28:52:192 INFO [Robin]<-[Batman] Copying remote database 'SocialPosts3' to: /tmp/orientdb/install_SocialPosts3.zip [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin]<-[Batman] Installing database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3... [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin] - writing chunk #1 offset=0 size=43.38KB [OHazelcastPlugin]
2016-08-18 19:28:52:194 INFO [Robin] Database copied correctly, size=43.38KB [ODistributedAbstractPlugin$3]
2016-08-18 19:28:52:279 WARNI {db=SocialPosts3} Storage 'SocialPosts3' was not closed properly. Will try to recover from write ahead log [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 SEVER {db=SocialPosts3} Restore is not possible because write ahead log is empty. [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 INFO {db=SocialPosts3} Storage data recover was completed [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:294 INFO {db=SocialPosts3} [Robin] Installed database 'SocialPosts3' (LSN=OLogSequenceNumber{segment=0, position=24}) [OHazelcastPlugin]
2016-08-18 19:28:52:304 INFO [Robin] Reassigning cluster ownership for database SocialPosts3 [OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+----+-----------+----------+-------------+-------------+
| | | | | MASTER | MASTER |
| | | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+----+-----------+----------+-------------+-------------+
|CLUSTER | id|writeQuorum|readQuorum| Batman | Robin |
+--------+----+-----------+----------+-------------+-------------+
|* | | 2 | 1 | X | o |
|internal| 0| 2 | 1 | | |
+--------+----+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] Distributed servers status:
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Name |Status|Databases |Conns|StartedOn |Binary |HTTP |UsedMemory |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Batman|ONLINE|GoodBoys=ONLINE (MASTER) |5 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|426.47MB/509.00MB (83.79%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
|Robin*|ONLINE|GoodBoys=ONLINE (MASTER) |3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |353.77MB/507.50MB (69.71%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
| | |SocialPosts3=SYNCHRONIZING (MASTER) | | | | | |
| | |SocialPosts2=ONLINE (MASTER) | | | | | |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+