How can I remove the double-space-to-dot mapping in Vim? Or why is it even useful? - visual-studio-code

In my Vim + VS Code setup, I'm trying to change the behaviour of a double space tap in insert mode so that it does NOT add a dot. I'm also not sure why this is considered useful behaviour in the first place.
The fixes I have found so far are not working, and I'm not sure why:
"vim.insertModeKeyBindings": [
{
"before": [" ", " "],
"after": [" ", " "],
"commands": [":nohlsearch"]
}
]
"vim.insertModeKeyBindings": [
{
"before": [" ", " "],
"after": [],
"commands": []
}
]
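(For reference, a non-recursive variant of the first attempt, via VSCodeVim's vim.insertModeKeyBindingsNonRecursive setting, would be a further thing to try — an untested sketch. Note also that if the period comes from the OS rather than the editor, e.g. macOS's "Add period with double-space" keyboard setting, no Vim-side mapping will affect it.)

"vim.insertModeKeyBindingsNonRecursive": [
  {
    // untested sketch: map two spaces back to two literal spaces, non-recursively
    "before": [" ", " "],
    "after": [" ", " "]
  }
]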

Related

VSCodeVim Extension substitute keybindings

With the VSCodeVim Extension, I'd like to map a keybinding (leader + "p") to a substitute command that wraps the text on the current line with "print(TEXT)", so that e.g.,
f'The value of `x` is {x}.'
Becomes
print(f'The value of `x` is {x}.')
But I'm having trouble defining that key binding in VS Code settings.json. Although the following works as expected, moving the cursor to the top of the file:
"vim.normalModeKeyBindings": [
{
"before": ["<leader>", "g"],
"commands": [":1"]
}
]
Even this simple substitute command does not work:
"vim.normalModeKeyBindings": [
{
"before": ["<leader>", "p"],
"commands": [":s/a/A/"]
}
]
Instead, the text :s/a/A/ is entered in the command-mode prompt (I can see it!), but the buffer is functionally back in normal mode, so I can't submit the command. If I re-enter command-mode, the substitute command disappears.
For now I have this inane solution:
"vim.normalModeKeyBindings": [
{
"before": ["<leader>", "p"],
"after": [":", "s", "/", "^", "/", "p", "r", "i", "n", "t", "(", "/", "<Cr>", ":", "s", "/", "$", "/", ")", "/", "<Cr>"]
}
]
But it's ugly and tedious to write, so I'd like to find a better approach that I can reuse for similar substitute-based keybindings.
Bonus: if you can help me write a keybinding (or keybindings) that will effectively toggle the print statement in and out, that would be amazing. So another press of (leader + "p") would bring us back to
f'The value of `x` is {x}.'
A hack might be to map (leader + "P") to delete the first six characters and the final character of the line, as sketched below.
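A minimal sketch of that hack as a settings.json binding (untested; it assumes the line starts with exactly "print(" and ends with ")"):

"vim.normalModeKeyBindings": [
  {
    // hypothetical toggle-off mapping: 0 goes to column 1, 6x deletes "print(",
    // $x deletes the trailing ")", and 0 returns to the start of the line
    "before": ["<leader>", "P"],
    "after": ["0", "6", "x", "$", "x", "0"]
  }
]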

How to make the Vim extension in Visual Studio Code use a binding that repeats a change for the next match

How can I make the Vim extension in VS Code use my keybinding?
In my .vimrc I have these lines:
" s* - CRTL + D like behavior
nnoremap <silent> s* :let #/='\<'.expand('<cword>').'\>'<CR>cgn
xnoremap <silent> s* "sy:let #/=#s<CR>cgn
How could I add them to my VS Code setup?
I tried adding something like the snippet below to my settings.json, but I have no idea how to do it properly; every time I get an error:
Failed to handle key `*`: command '"sy:let @/=@s<CR>cgn' not found
"vim.normalModeKeyBindings": [
{
"before": [
"s",
"*"
],
"commands": [
":nohl:let #/='\\<'.expand('<cword>').'\\>'<CR>cgn",
],
"silent": true
},
{
"before": [
"s",
"*"
],
"commands": [
"\"sy:let #/=#s<CR>cgn",
],
"silent": true
},
]
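For what it's worth, a rough approximation of the normal-mode mapping that sidesteps ex commands entirely would be to feed plain normal-mode keys through "after" instead of "commands" (an untested sketch; it does not cover the visual-mode variant):

"vim.normalModeKeyBindings": [
  {
    // untested sketch: * sets the search pattern to \<cword\> but also jumps
    // to the next match, so N jumps back before cgn changes the match
    "before": ["s", "*"],
    "after": ["*", "N", "c", "g", "n"]
  }
]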

CloudFormation aws-glue inline command

My goal is to create a Glue job via CloudFormation. The problem I'm dealing with is that the Command property doesn't seem to support inline code (like the CloudFormation Lambda Code property does).
My question: is there a way to create a fully functional Glue job solely with a CloudFormation template, i.e. a way around uploading the command file in advance (and specifying ScriptLocation)?
Something like this may work (untested, so you might have to tweak it). Basically, you create a custom resource that uploads your Glue code to S3, then reference the custom resource to obtain the script location. Below you'll find the CloudFormation template, and below that the Rubycfn code you can use to generate the template dynamically.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Glue Job Example Stack",
  "Resources": {
    "GlueCode": {
      "Properties": {
        "BucketName": "my-amazing-bucket-for-glue-jobs",
        "Content": {
          "Fn::Join": [
            "\n",
            [
              "# some job code",
              "# for glue"
            ]
          ]
        },
        "CreateMode": "plain-literal",
        "FileName": "gluecode.sh",
        "ServiceToken": {
          "Fn::GetAtt": [
            "InlineS3UploadFunction",
            "Arn"
          ]
        }
      },
      "Type": "Custom::InlineUpload"
    },
    "GlueJob": {
      "Properties": {
        "Command": {
          "Name": "myglueetl",
          "ScriptLocation": {
            "Fn::Join": [
              "",
              [
                "s3://my-amazing-bucket-for-glue-jobs/",
                {
                  "Ref": "GlueCode"
                }
              ]
            ]
          }
        },
        "DefaultArguments": {
          "--job-bookmark-option": "job-bookmark-enable"
        },
        "ExecutionProperty": {
          "MaxConcurrentRuns": 2
        },
        "MaxRetries": 0,
        "Name": "glue-job-1",
        "Role": {
          "Ref": "SomeRole"
        }
      },
      "Type": "AWS::Glue::Job"
    },
    "InlineS3UploadFunction": {
      "Properties": {
        "Code": {
          "ZipFile": {
            "Fn::Join": [
              "\n",
              [
                "import boto3",
                "import cfnresponse",
                "import hashlib",
                "import json",
                "import logging",
                "import signal",
                "import zipfile",
                "",
                "from urllib2 import build_opener, HTTPHandler, Request",
                "",
                "LOGGER = logging.getLogger()",
                "LOGGER.setLevel(logging.INFO)",
                "",
                "def lambda_handler(event, context):",
                "    # Setup alarm for remaining runtime minus a second",
                "    try:",
                "        signal.alarm((context.get_remaining_time_in_millis() / 1000) - 1)",
                "        LOGGER.info('REQUEST RECEIVED: %s', event)",
                "        LOGGER.info('REQUEST RECEIVED: %s', context)",
                "        if event['RequestType'] == 'Create' or event['RequestType'] == 'Update':",
                "            LOGGER.info('Creating or updating S3 Object')",
                "            bucket_name = event['ResourceProperties']['BucketName']",
                "            file_name = event['ResourceProperties']['FileName']",
                "            content = event['ResourceProperties']['Content']",
                "            create_zip = True if event['ResourceProperties']['CreateMode'] == 'zip' else False",
                "            literal = True if event['ResourceProperties']['CreateMode'] == 'plain-literal' else False",
                "            md5_hash = hashlib.md5(content).hexdigest()",
                "            with open('/tmp/' + file_name, 'w') as lambda_file:",
                "                lambda_file.write(content)",
                "                lambda_file.close()",
                "            s3 = boto3.resource('s3')",
                "            if create_zip == True:",
                "                output_filename = file_name + '_' + md5_hash + '.zip'",
                "                zf = zipfile.ZipFile('/tmp/' + output_filename, mode='w')",
                "                try:",
                "                    zf.write('/tmp/' + file_name, file_name)",
                "                finally:",
                "                    zf.close()",
                "                data = open('/tmp/' + output_filename, 'rb')",
                "                s3.Bucket(bucket_name).put_object(Key=output_filename, Body=data)",
                "            else:",
                "                if literal == True:",
                "                    output_filename = file_name  # so the success message below has a value",
                "                    data = open('/tmp/' + file_name, 'rb')",
                "                    s3.Bucket(bucket_name).put_object(Key=file_name, Body=content)",
                "                else:",
                "                    extension = file_name.split(\".\")[-1]",
                "                    output_filename = \".\".join(file_name.split(\".\")[:-1]) + '_' + md5_hash + '.' + extension",
                "                    data = open('/tmp/' + file_name, 'rb')",
                "                    s3.Bucket(bucket_name).put_object(Key=output_filename, Body=content)",
                "            cfnresponse.send(event, context, cfnresponse.SUCCESS, { 'Message': output_filename } )",
                "        elif event['RequestType'] == 'Delete':",
                "            LOGGER.info('DELETE!')",
                "            cfnresponse.send(event, context, cfnresponse.SUCCESS, { 'Message': 'Resource deletion successful!'} )",
                "        else:",
                "            LOGGER.info('FAILED!')",
                "            cfnresponse.send(event, context, cfnresponse.SUCCESS, { 'Message': 'There is no such success like failure.'} )",
                "    except Exception as e: #pylint: disable=W0702",
                "        LOGGER.info(e)",
                "        cfnresponse.send(event, context, cfnresponse.SUCCESS, { 'Message': 'There is no such success like failure.' } )",
                "",
                "def timeout_handler(_signal, _frame):",
                "    '''Handle SIGALRM'''",
                "    LOGGER.info('Time exceeded')",
                "    raise Exception('Time exceeded')",
                "",
                "signal.signal(signal.SIGALRM, timeout_handler)"
              ]
            ]
          }
        },
        "Handler": "index.lambda_handler",
        "Role": {
          "Fn::GetAtt": [
            "LambdaExecutionRole",
            "Arn"
          ]
        },
        "Runtime": "python2.7",
        "Timeout": "30"
      },
      "Type": "AWS::Lambda::Function"
    },
    "LambdaExecutionRole": {
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Statement": [
            {
              "Action": [
                "sts:AssumeRole"
              ],
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "lambda.amazonaws.com"
                ]
              }
            }
          ],
          "Version": "2012-10-17"
        },
        "Path": "/",
        "Policies": [
          {
            "PolicyDocument": {
              "Statement": [
                {
                  "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                  ],
                  "Effect": "Allow",
                  "Resource": "arn:aws:logs:*:*:*"
                },
                {
                  "Action": "s3:*",
                  "Effect": "Allow",
                  "Resource": "arn:aws:s3:::*"
                }
              ],
              "Version": "2012-10-17"
            },
            "PolicyName": "root"
          }
        ]
      },
      "Type": "AWS::IAM::Role"
    }
  }
}
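To deploy the template with the AWS CLI, something like this should work (a sketch; glue-job.json and the stack name are placeholder values, and --capabilities CAPABILITY_IAM is needed because the template creates an IAM role):

# deploy the saved template as a stack named glue-job-example
aws cloudformation deploy \
  --template-file glue-job.json \
  --stack-name glue-job-example \
  --capabilities CAPABILITY_IAM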
The Rubycfn code is below. Use the https://rubycfn.com/ CloudFormation compiler, or install the Rubycfn gem (gem install rubycfn), to compile it:
ENV["GLUEBUCKET"] ||= "my-amazing-bucket-for-glue-jobs"
description "Glue Job Example Stack"
resource :glue_code,
type: "Custom::InlineUpload" do |r|
r.property(:service_token) { :inline_s3_upload_function.ref(:arn) }
r.property(:bucket_name) { ENV["GLUEBUCKET"] }
r.property(:file_name) { "gluecode.sh" }
r.property(:create_mode) { "plain-literal" }
r.property(:content) do
[
"# some job code",
"# for glue"
].fnjoin("\n")
end
end
resource :glue_job,
type: "AWS::Glue::Job" do |r|
r.property(:command) do
{
"Name": "myglueetl",
"ScriptLocation": ["s3://#{ENV["GLUEBUCKET"]}/", :glue_code.ref].fnjoin
}
end
r.property(:default_arguments) do
{
"--job-bookmark-option": "job-bookmark-enable"
}
end
r.property(:execution_property) do
{
"MaxConcurrentRuns": 2
}
end
r.property(:max_retries) { 0 }
r.property(:name) { "glue-job-1" }
r.property(:role) { :some_role.ref }
end
resource :inline_s3_upload_function,
type: "AWS::Lambda::Function" do |r|
r.property(:code) do
{
"ZipFile": [
"import boto3",
"import cfnresponse",
"import hashlib",
"import json",
"import logging",
"import signal",
"import zipfile",
"",
"from urllib2 import build_opener, HTTPHandler, Request",
"",
"LOGGER = logging.getLogger()",
"LOGGER.setLevel(logging.INFO)",
"",
"def lambda_handler(event, context):",
" # Setup alarm for remaining runtime minus a second",
" try:",
" signal.alarm((context.get_remaining_time_in_millis() / 1000) - 1)",
" LOGGER.info('REQUEST RECEIVED: %s', event)",
" LOGGER.info('REQUEST RECEIVED: %s', context)",
" if event['RequestType'] == 'Create' or event['RequestType'] == 'Update':",
" LOGGER.info('Creating or updating S3 Object')",
" bucket_name = event['ResourceProperties']['BucketName']",
" file_name = event['ResourceProperties']['FileName']",
" content = event['ResourceProperties']['Content']",
" create_zip = True if event['ResourceProperties']['CreateMode'] == 'zip' else False",
" literal = True if event['ResourceProperties']['CreateMode'] == 'plain-literal' else False",
" md5_hash = hashlib.md5(content).hexdigest()",
" with open('/tmp/' + file_name, 'w') as lambda_file:",
" lambda_file.write(content)",
" lambda_file.close()",
" s3 = boto3.resource('s3')",
" if create_zip == True:",
" output_filename = file_name + '_' + md5_hash + '.zip'",
" zf = zipfile.ZipFile('/tmp/' + output_filename, mode='w')",
" try:",
" zf.write('/tmp/' + file_name, file_name)",
" finally:",
" zf.close()",
" data = open('/tmp/' + output_filename, 'rb')",
" s3.Bucket(bucket_name).put_object(Key=output_filename, Body=data)",
" else:",
" if literal == True:",
" data = open('/tmp/' + file_name, 'rb')",
" s3.Bucket(bucket_name).put_object(Key=file_name, Body=content)",
" else:",
" extension = file_name.split(\".\")[-1]",
" output_filename = \".\".join(file_name.split(\".\")[:-1]) + '_' + md5_hash + '.' + extension",
" data = open('/tmp/' + file_name, 'rb')",
" s3.Bucket(bucket_name).put_object(Key=output_filename, Body=content)",
" cfnresponse.send(event, context, cfnresponse.SUCCESS, { 'Message': output_filename } )",
" elif event['RequestType'] == 'Delete':",
" LOGGER.info('DELETE!')",
" cfnresponse.send(event, context, cfnresponse.SUCCESS, { 'Message': 'Resource deletion successful!'} )",
" else:",
" LOGGER.info('FAILED!')",
" cfnresponse.send(event, context, cfnresponse.SUCCESS, { 'Message': 'There is no such success like failure.'} )",
" except Exception as e: #pylint: disable=W0702",
" LOGGER.info(e)",
" cfnresponse.send(event, context, cfnresponse.SUCCESS, { 'Message': 'There is no such success like failure.' } )",
"",
"def timeout_handler(_signal, _frame):",
" '''Handle SIGALRM'''",
" LOGGER.info('Time exceeded')",
" raise Exception('Time exceeded')",
"",
"signal.signal(signal.SIGALRM, timeout_handler)"
].fnjoin("\n")
}
end
r.property(:handler) { "index.lambda_handler" }
r.property(:role) { :lambda_execution_role.ref(:arn) }
r.property(:runtime) { "python2.7" }
r.property(:timeout) { "30" }
end
resource :lambda_execution_role,
type: "AWS::IAM::Role" do |r|
r.property(:assume_role_policy_document) do
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
}
end
r.property(:path) { "/" }
r.property(:policies) do
[
{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": %w(logs:CreateLogGroup logs:CreateLogStream logs:PutLogEvents),
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::*"
}
]
}
}
]
end
end

How to extract a specific object inside a jsonb column using Postgres

My jsonb column looks something like this (table name: jsonb, column name: json_test):
[
  {
    "run_id": "EXE20170822172151192",
    "user_id": "12",
    "log_level": "1",
    "time_stamp": "2017-08-22T10:03:38.083Z",
    "test_case_id": "1073",
    "test_suite_id": "null",
    "test_case_name": "Gmail Flow",
    "test_suite_name": "",
    "test_suite_abort": "",
    "capture_screenshots": "Y",
    "abort_at_step_failure": "Y"
    "teststeps": [
      {
        "UItype": " UI ",
        "action": " UI_Open_Browser ",
        "param1": "Chrome",
        "step_id": " 1",
        "skip_to_step": " 0 ",
        "skip_from_step": " 0 ",
        "step_output_value": "true",
        "step_execution_time": " 0:0:12:154 ",
        "step_execution_status": "success",
        "step_execution_end_time": " 2017-08-22 17:22:35:813 IST+0530 ",
        "step_execution_start_time": " 2017-08-22 17:22:23:967 IST+0530 ",
        "use_previous_step_output_data": " N ",
        "execute_next_step_after_failure": " N ",
        "skip_execution_based_on_prv_step_status": " F "
      },
I want to extract fields from the JSON such as "test_case_id", "test_case_name", etc.
I tried using jsonb_array_elements, but since the top level of the jsonb is an array, I am not able to fetch the objects inside it. Can somebody help with this?
If you fix the JSON (there is a missing comma before "teststeps"), it works:
s=# with j as (select '[
  {
    "run_id": "EXE20170822172151192",
    "user_id": "12",
    "log_level": "1",
    "time_stamp": "2017-08-22T10:03:38.083Z",
    "test_case_id": "1073",
    "test_suite_id": "null",
    "test_case_name": "Gmail Flow",
    "test_suite_name": "",
    "test_suite_abort": "",
    "capture_screenshots": "Y",
    "abort_at_step_failure": "Y",
    "teststeps": [
      {
        "UItype": " UI ",
        "action": " UI_Open_Browser ",
        "param1": "Chrome",
        "step_id": " 1",
        "skip_to_step": " 0 ",
        "skip_from_step": " 0 ",
        "step_output_value": "true",
        "step_execution_time": " 0:0:12:154 ",
        "step_execution_status": "success",
        "step_execution_end_time": " 2017-08-22 17:22:35:813 IST+0530 ",
        "step_execution_start_time": " 2017-08-22 17:22:23:967 IST+0530 ",
        "use_previous_step_output_data": " N ",
        "execute_next_step_after_failure": " N ",
        "skip_execution_based_on_prv_step_status": " F "
      }
    ]
  }
]'::jsonb b)
select jsonb_array_elements(b)->'test_case_id' from j;
 ?column?
----------
 "1073"
(1 row)
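Applied to the actual table from the question, a minimal sketch (assuming the table name jsonb and column name json_test as described, and untested against the real data):

-- unnest the top-level array into one row per element,
-- then extract the wanted fields as text with ->>
select elem->>'test_case_id'   as test_case_id,
       elem->>'test_case_name' as test_case_name
from jsonb,
     jsonb_array_elements(json_test) as elem;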

Visual Studio Code: How to create multiline indented snippets?

I'm trying to create a multiline snippet like so:
"mySnippet": {
"prefix": "test",
"body": [
"<div>",
"<p></p>",
"</div>"
]
}
But it's not indented...
Reading the docs doesn't help.
You need to add the indentation inside the second string element of the body array:
"mySnippet": {
"prefix": "test",
"body": [
"<div>",
" <p></p>",
"</div>"
]
}
You can use \t for a TAB on each line:
"table": {
"prefix": "table",
"body": [
"<table cellspacing=\"0\" cellpadding=\"0\">",
"\t<tr>",
"\t\t<td>",
"\t\t\t<i class=\"fas fa-bullhorn\"></i>",
"\t\t</td>",
"\t\t<td>",
"\t\t\t${1:element}",
"\t\t</td>",
"\t</tr>",
"</table>"
],
"description": "Text with a bullhorn icon on the left"
}
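A side benefit of \t over hard-coded spaces: when the snippet expands, VS Code normalizes the tabs to the editor's current indentation settings (tabs vs. spaces, tab size), so the inserted markup matches the file's existing style.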