Get shadow copies older than 5 days using PowerShell

I would like to get these shadow copies that were created more than 5 days ago. How could I do this using PowerShell?
cmd> Diskshadow
Diskshadow> List shadows all
* Shadow copy ID = {49fb469b-4940-45f7-98bd-08441e9e353c}
<No Alias>
- Shadow copy set: {32224b82-e802-4eab-a903-fb5dc6558800}
<No Alias>
- Original count of shadow copies = 11
- Original volume name: \\?\Volume{bba82744-b690-4b68-9180-c0d817c5a38f}\ [G:\]
- Creation time: 4/13/2021 6:03:34 PM
- Shadow copy device name: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy313
- Originating machine: app.contoso.local
- Service machine: app.contoso.local
- Not exposed
- Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
- Attributes: No_Auto_Release Persistent Differential
* Shadow copy ID = {8ac42987-5f9a-4535-aef0-c6d64d7a658b}
* Shadow copy ID = {d9be01ee-c1e6-424f-ac9a-cf82ef4e5e58}
<No Alias>
- Shadow copy set: {32224b82-e802-4eab-a903-fb5dc6558800}
<No Alias>
- Original count of shadow copies = 11
- Original volume name: \\?\Volume{1120d149-97e5-4b8d-af19-bb24338626ef}\ [H:\]
- Creation time: 4/13/2021 6:03:34 PM
- Shadow copy device name: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy271
- Originating machine: app.contoso.local
- Service machine: app.contoso.local
- Not exposed
- Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
- Attributes: No_Auto_Release Persistent Differential

There is a module that can do this:
https://www.powershellgallery.com/packages/CPolydorou.ShadowCopy/1.1.2/Content/ShadowCopy.psm1
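If you'd rather not pull in a module, a minimal sketch using the Win32_ShadowCopy CIM class should also do it (run from an elevated prompt; adjust the property list as needed):
# Shadow copies whose creation time is more than 5 days in the past
$cutoff = (Get-Date).AddDays(-5)
Get-CimInstance -ClassName Win32_ShadowCopy |
    Where-Object { $_.InstallDate -lt $cutoff } |
    Select-Object ID, VolumeName, InstallDate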

Related

Cannot reach Grafana Loki port with HTTP using Traefik

I have been trying to find a solution to this, but no luck. All services work internally. I can access Grafana from the browser with TLS enabled, but I cannot reach the Loki port in any way (browser, Postman, etc.).
I can access the Loki API with curl locally if I open the port for the service directly, but as I understand it, the port has to be exposed through Traefik instead.
My compose file:
version: "3"
services:
  grafana:
    labels:
      - "traefik.http.routers.grafana.entryPoints=port80"
      - "traefik.http.routers.grafana.rule=host(`${DOMAIN}`)"
      - "traefik.http.middlewares.grafana-redirect.redirectScheme.scheme=https"
      - "traefik.http.middlewares.grafana-redirect.redirectScheme.permanent=true"
      - "traefik.http.routers.grafana.middlewares=grafana-redirect"
      # SSL endpoint
      - "traefik.http.routers.grafana-ssl.entryPoints=port443"
      - "traefik.http.routers.grafana-ssl.rule=host(`${DOMAIN}`)"
      - "traefik.http.routers.grafana-ssl.tls=true"
      - "traefik.http.routers.grafana-ssl.tls.certResolver=le-ssl"
      - "traefik.http.routers.grafana-ssl.service=grafana-ssl"
      - "traefik.http.services.grafana-ssl.loadBalancer.server.port=3000"
    image: grafana/grafana:latest # or probably any other version
    volumes:
      - grafana-data:/var/lib/grafana
    environment:
      - GF_SERVER_ROOT_URL=https://${DOMAIN}
      - GF_SERVER_DOMAIN=${DOMAIN}
      - GF_USERS_ALLOW_SIGN_UP=false
      - GF_SECURITY_ADMIN_USER=${GRAFANAUSER}
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANAPASS}
    networks:
      - traefik-net
  loki:
    image: grafana/loki
    labels:
      - "traefik.http.routers.loki-ssl.entryPoints=port3100"
      - "traefik.http.routers.loki-ssl.rule=host(`${DOMAIN}`)"
      - "traefik.http.routers.loki-ssl.tls=true"
      - "traefik.http.routers.loki-ssl.tls.certResolver=le-ssl"
      - "traefik.http.routers.loki-ssl.service=loki-ssl"
      - "traefik.http.services.loki-ssl.loadBalancer.server.port=3100"
    command: -config.file=/etc/loki/config.yaml
    volumes:
      - ./loki/config.yml:/etc/loki/config.yaml
      - loki:/data/loki
    networks:
      - traefik-net
  promtail:
    image: grafana/promtail:2.3.0
    volumes:
      - /var/log:/var/log
      - ./promtail:/etc/promtail-config/
    command: -config.file=/etc/promtail-config/promtail.yml
    networks:
      - traefik-net
  influx:
    image: influxdb:1.7 # or any other recent version
    labels:
      # SSL endpoint
      - "traefik.http.routers.influx-ssl.entryPoints=port8086"
      - "traefik.http.routers.influx-ssl.rule=host(`${DOMAIN}`)"
      - "traefik.http.routers.influx-ssl.tls=true"
      - "traefik.http.routers.influx-ssl.tls.certResolver=le-ssl"
      - "traefik.http.routers.influx-ssl.service=influx-ssl"
      - "traefik.http.services.influx-ssl.loadBalancer.server.port=8086"
    restart: always
    volumes:
      - influx-data:/var/lib/influxdb
    environment:
      - INFLUXDB_DB=grafana # set any other to create database on initialization
      - INFLUXDB_HTTP_ENABLED=true
      - INFLUXDB_HTTP_AUTH_ENABLED=true
      - INFLUXDB_ADMIN_USER=&{DB_USER}
      - INFLUXDB_ADMIN_PASSWORD=&{DB_PASS}
    networks:
      - traefik-net
  traefik:
    image: traefik:v2.9.1
    ports:
      - "80:80"
      - "443:443"
      - "3100:3100"
      # expose port below only if you need access to the Traefik API
      - "8080:8080"
    command:
      # - "--log.level=DEBUG"
      - "--api=true"
      - "--api.dashboard=true"
      - "--providers.docker=true"
      - "--entryPoints.port443.address=:443"
      - "--entryPoints.port80.address=:80"
      - "--entryPoints.port8086.address=:8086"
      - "--entryPoints.port3100.address=:3100"
      - "--certificatesResolvers.le-ssl.acme.tlsChallenge=true"
      - "--certificatesResolvers.le-ssl.acme.email=${TLS_MAIL}"
      - "--certificatesResolvers.le-ssl.acme.storage=/letsencrypt/acme.json"
    volumes:
      - traefik-data:/letsencrypt/
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-net
volumes:
  traefik-data:
  grafana-data:
  influx-data:
  loki:
networks:
  traefik-net:
Loki config (./loki/config.yml):
# (default configuration)
auth_enabled: false
server:
  http_listen_port: 3100
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled
  wal:
    enabled: true
    dir: /loki/wal
common:
  ring:
    instance_addr: 0.0.0.0
    kvstore:
      store: inmemory
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks
compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: filesystem
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_burst_size_mb: 16
  ingestion_rate_mb: 16
chunk_store_config:
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
ruler:
  storage:
    type: local
    local:
      directory: /loki/rules
  rule_path: /loki/rules-temp
  alertmanager_url: localhost
  ring:
    kvstore:
      store: inmemory
  enable_api: true
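Not a full answer, but two checks that may help narrow this down. Assuming the Traefik API is actually reachable on port 8080 (which needs --api.insecure=true on top of the flags above), you can confirm that the loki-ssl router and service were registered from the Docker labels, and then try Loki's /ready endpoint through the port3100 entrypoint:
# Ask Traefik which routers/services it built from the labels (requires --api.insecure=true)
curl -s http://localhost:8080/api/http/routers | grep -i loki
curl -s http://localhost:8080/api/http/services | grep -i loki
# Go through Traefik itself on the TLS entrypoint; -k only skips certificate checks while testing
curl -vk "https://${DOMAIN}:3100/ready"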

Can a slot take entity values without an action function or forms in Rasa?

Is it possible to pass entity values to slots without a form or writing a custom action?
nlu.yml
nlu:
  - intent: place_order
    examples: |
      - wanna [large](size) shoes for husky
      - need a [small](size) [green](color) boots for pupps
      - have [blue](color) socks
      - would like to place an order
  - lookup: size
    examples: |
      - small
      - medium
      - large
  - synonym: small
    examples: |
      - small
      - s
      - tiny
  - synonym: large
    examples: |
      - large
      - l
      - big
  - lookup: color
    examples: |
      - white
      - red
      - green
domain.yml
version: "2.0"
intents:
  - greet
  - goodbye
  - affirm
  - deny
  - mood_great
  - mood_unhappy
  - bot_challenge
  - place_order
entities:
  - size
  - color
slot:
  size:
    type: text
  color:
    type: text
responses:
  utter_greet:
    - text: "Hey! can I assist you ?"
  utter_order_list:
    - text: "your order is {size} [color} boots. right?"
stories.yml
version: "2.0"
stories:
  - story: place_order
    steps:
      - intent: greet
      - action: utter_greet
      - intent: place_order
      - action: utter_order_list
Debug output: it recognizes the entities, but the values are not passed to the slots:
Hey! can I assist you ?
Your input -> I would like to place an order for large blue shoes for my puppy
Received user message 'I would like to place an order for large blue shoes for my puppy' with intent '{'id': -2557752933293854887, 'name': 'place_order', 'confidence': 0.9996021389961243}' and entities '[{'entity': 'size', 'start': 35, 'end': 40, 'confidence_entity': 0.9921159148216248, 'value': 'large', 'extractor': 'DIETClassifier'}, {'entity': 'color', 'start': 41, 'end': 45, 'confidence_entity': 0.9969255328178406, 'value': 'blue', 'extractor': 'DIETClassifier'}]'
Failed to replace placeholders in response 'your order is {size} [color} boots. right?'. Tried to replace 'size' but could not find a value for it. There is no slot with this name nor did you pass the value explicitly when calling the response. Return response without filling the response
"slot" is an unknown keyword. You should write "slots" instead of "slot" in the domain file and it will work.

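For reference, with that fix the relevant part of the domain would look roughly like this (assuming Rasa 2.x, where a slot with the same name as an extracted entity is auto-filled by default). Note that the response template also has a stray bracket: [color} should presumably be {color}.
slots:
  size:
    type: text
  color:
    type: text
responses:
  utter_order_list:
    - text: "your order is {size} {color} boots. right?"
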
Regex to search for multiple patterns

I edited my code and tried a different approach to get the desired output.
Let me know if it's correct.
import re

pattern1 = re.compile(r'\b(ERROR)')
pattern2 = re.compile(r'^\d+-\d+-\d+')
count = 0
with open('sample.txt', encoding='utf-8') as f:
    for i in f:
        a = re.search(pattern1, i)
        if a:
            count = count + 1
            b = re.search(pattern2, i)
            if b:
                print(b.group(), ':', a.group())
print('Total ERROR in the logfile:', count)
Output:
2019-11-22 : ERROR
2019-11-22 : ERROR
2019-11-20 : ERROR
Total ERROR in the logfile: 3
log.txt
2019-11-22 16:46:46,985 - main - INFO - Starting to Wait for Files
2019-11-22 16:46:56,645 - main - INFO - Starting: Attempt 1 Checking for New Files
2019-11-22 16:47:46,488 - main - INFO - Success: Downloading the Files from Cloud Storage: Return
2019-11-22 16:48:48,180 - main - ERROR - Failed: Waiting for files the Files
2019-11-22 16:49:17,918 - main - INFO - Starting to Wait for Files
2019-11-22 16:49:32,160 - main - INFO - Starting: Attempt 1 Checking for New Files
2019-11-22 16:49:39,329 - main - WARNING - Success: Downloading the Files from Cloud Storage:
2019-11-22 16:53:30,706 - main - WARNING - Starting to Wait for Files
2019-11-22 16:53:48,180 - main - ERROR - Failed: Waiting for files the Files
2019-11-20 10:00:00,121 - main - ERROR - Failed: Waiting for files the Files
The pattern you should be using to match error lines is:
^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.*\bERROR\b.*$
Your updated script:
import re

pattern1 = re.compile(r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.*\bERROR\b.*$')
count = 0
with open('log.txt', encoding='utf-8') as f:
    for i in f:
        a = re.search(pattern1, i)
        if a:
            count = count + 1
print('Total ERROR in the logfile:', count)
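If you also want to keep printing the date for each matching line, one variation (my own sketch, not part of the answer above) is to capture the date inside the same pattern:
import re

# Anchor on the timestamp, require ERROR, and capture the date in group 1
pattern = re.compile(r'^(\d{4}-\d{2}-\d{2}) \d{2}:\d{2}:\d{2}.*\bERROR\b')
count = 0
with open('log.txt', encoding='utf-8') as f:
    for line in f:
        m = pattern.search(line)
        if m:
            count += 1
            print(m.group(1), ':', 'ERROR')
print('Total ERROR in the logfile:', count)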

Why Does Podman Report "Not enough IDs available in namespace" with different UIDs?

Facts:
Rootless podman works perfectly for uid 1480
Rootless podman fails for uid 2088
CentOS 7
Kernel 3.10.0-1062.1.2.el7.x86_64
podman version 1.4.4
Almost the entire environment has been removed between the two
The filesystem for /tmp is xfs
The capsh output of the two users is identical but for uid / username
Both UIDs have identical entries in /etc/sub{u,g}id files
The $HOME/.config/containers/storage.conf is the default and is identical between the two with the exception of the uids. The storage.conf is below for reference.
I wrote the following shell script to demonstrate just how similar an environment the two are operating in:
#!/bin/sh
for i in 1480 2088; do
  sudo chroot --userspec "$i":10 / env -i /bin/sh <<EOF
echo -------------- $i ----------------
/usr/sbin/capsh --print
grep "$i" /etc/subuid /etc/subgid
mkdir /tmp/"$i"
HOME=/tmp/"$i"
export HOME
podman --root=/tmp/"$i" info > /tmp/podman."$i"
podman run --rm --root=/tmp/"$i" docker.io/library/busybox printf "\tCOMPLETE\n"
echo -----------END $i END-------------
EOF
  sudo rm -rf /tmp/"$i"
done
Here's the output of the script:
$ sh /tmp/podman-fail.sh
[sudo] password for functional:
-------------- 1480 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=1480(functional)
gid=10(wheel)
groups=0(root)
/etc/subuid:1480:100000:65536
/etc/subgid:1480:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] could not find slirp4netns, the network namespace won't be configured: exec: "slirp4netns": executable file not found in $PATH
COMPLETE
-----------END 1480 END-------------
-------------- 2088 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=2088(broken)
gid=10(wheel)
groups=0(root)
/etc/subuid:2088:100000:65536
/etc/subgid:2088:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
ERRO[0003] Error pulling image ref //busybox:latest: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Failed
Error: unable to pull docker.io/library/busybox: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Here's the storage.conf for the 1480 uid. It's identical except s/1480/2088/:
[storage]
driver = "vfs"
runroot = "/run/user/1480"
graphroot = "/tmp/1480/.local/share/containers/storage"
[storage.options]
size = ""
remap-uids = ""
remap-gids = ""
remap-user = ""
remap-group = ""
ostree_repo = ""
skip_mount_home = ""
mount_program = ""
mountopt = ""
[storage.options.thinpool]
autoextend_percent = ""
autoextend_threshold = ""
basesize = ""
blocksize = ""
directlvm_device = ""
directlvm_device_force = ""
fs = ""
log_level = ""
min_free_space = ""
mkfsarg = ""
mountopt = ""
use_deferred_deletion = ""
use_deferred_removal = ""
xfs_nospace_max_retries = ""
You can see there's basically no difference between the two podman info outputs for the users:
$ diff -u /tmp/podman.1480 /tmp/podman.2088
--- /tmp/podman.1480 2019-10-17 22:41:21.991573733 -0400
+++ /tmp/podman.2088 2019-10-17 22:41:26.182584536 -0400
@@ -7,7 +7,7 @@
Distribution:
distribution: '"centos"'
version: "7"
- MemFree: 45654056960
+ MemFree: 45652697088
MemTotal: 67306323968
OCIRuntime:
package: containerd.io-1.2.6-3.3.el7.x86_64
@@ -24,7 +24,7 @@
kernel: 3.10.0-1062.1.2.el7.x86_64
os: linux
rootless: true
- uptime: 30h 17m 50.23s (Approximately 1.25 days)
+ uptime: 30h 17m 54.42s (Approximately 1.25 days)
registries:
blocked: null
insecure: null
@@ -35,14 +35,14 @@
- quay.io
- registry.centos.org
store:
- ConfigFile: /tmp/1480/.config/containers/storage.conf
+ ConfigFile: /tmp/2088/.config/containers/storage.conf
ContainerStore:
number: 0
GraphDriverName: vfs
GraphOptions: null
- GraphRoot: /tmp/1480
+ GraphRoot: /tmp/2088
GraphStatus: {}
ImageStore:
number: 0
- RunRoot: /run/user/1480
- VolumePath: /tmp/1480/volumes
+ RunRoot: /run/user/2088
+ VolumePath: /tmp/2088/volumes
I refuse to believe there's an if (2088 == uid) { abort(); } or similar nonsense somewhere in podman's source code. What am I missing?
Does podman system migrate fix "there might not be enough IDs available in the namespace" for you?
It did for me and others:
https://github.com/containers/libpod/issues/3421
AFAICT, sub-UID and GID ranges should not overlap between users. For reference, here is what the useradd manpage has to say about the matter:
SUB_GID_MIN (number), SUB_GID_MAX (number), SUB_GID_COUNT (number)
    If /etc/subuid exists, the commands useradd and newusers (unless the user already have subordinate group IDs) allocate SUB_GID_COUNT unused group IDs from the range SUB_GID_MIN to SUB_GID_MAX for each new user.
    The default values for SUB_GID_MIN, SUB_GID_MAX, SUB_GID_COUNT are respectively 100000, 600100000 and 65536.
SUB_UID_MIN (number), SUB_UID_MAX (number), SUB_UID_COUNT (number)
    If /etc/subuid exists, the commands useradd and newusers (unless the user already have subordinate user IDs) allocate SUB_UID_COUNT unused user IDs from the range SUB_UID_MIN to SUB_UID_MAX for each new user.
    The default values for SUB_UID_MIN, SUB_UID_MAX, SUB_UID_COUNT are respectively 100000, 600100000 and 65536.
The key word is unused.
CentOS 7.6 does not support rootless buildah by default - see https://github.com/containers/buildah/pull/1166 and https://www.redhat.com/en/blog/preview-running-containers-without-root-rhel-76
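As a quick illustrative check (my own sketch, not from the answer above), you can look for overlapping ranges in /etc/subuid; the same idea applies to /etc/subgid:
# Print any pair of /etc/subuid entries whose ID ranges intersect
awk -F: '{ start[NR] = $2 + 0; end[NR] = $2 + $3 - 1; name[NR] = $1 }
         END { for (i = 1; i <= NR; i++)
                 for (j = i + 1; j <= NR; j++)
                   if (start[i] <= end[j] && start[j] <= end[i])
                     print name[i] " overlaps with " name[j] }' /etc/subuid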

How to comment on a specific line number on a PR on GitHub

I am trying to write a small script that can comment on GitHub PRs using ESLint output.
The problem is that ESLint gives me the absolute line numbers for each error, but the GitHub API wants the line number relative to the diff.
From the github API docs: https://developer.github.com/v3/pulls/comments/#create-a-comment
To comment on a specific line in a file, you will need to first
determine the position in the diff. GitHub offers a
application/vnd.github.v3.diff media type which you can use in a
preceding request to view the pull request's diff. The diff needs to
be interpreted to translate from the line in the file to a position in
the diff. The position value is the number of lines down from the
first "@@" hunk header in the file you would like to comment on.
The line just below the "@@" line is position 1, the next line is
position 2, and so on. The position in the file's diff continues to
increase through lines of whitespace and additional hunks until a new
file is reached.
So if I want to add a comment on new line number 5 in the example diff (shown in the answer below), I would need to pass 12 to the API.
My question is: how can I easily map the new line numbers that ESLint gives in its error messages to the relative line numbers required by the GitHub API?
What I have tried so far
I am using parse-diff to convert the diff provided by the GitHub API into a JSON object:
[{
  "chunks": [{
    "content": "@@ -OLD_STARTING_LINE_NUMBER,OLD_TOTAL_LINES +NEW_STARTING_LINE_NUMBER,NEW_TOTAL_LINES @@",
    "changes": [
      {
        "type": STRING("normal"|"add"|"del"),
        "normal": BOOLEAN,
        "add": BOOLEAN,
        "del": BOOLEAN,
        "ln1": OLD_LINE_NUMBER,
        "ln2": NEW_LINE_NUMBER,
        "content": STRING,
        "oldStart": NUMBER,
        "oldLines": NUMBER,
        "newStart": NUMBER,
        "newLines": NUMBER
      }
    ]
  }]
}]
I am thinking of the following algorithm:
- make an array of new line numbers from NEW_STARTING_LINE_NUMBER to NEW_STARTING_LINE_NUMBER + NEW_TOTAL_LINES for each file
- subtract newStart from each number and make it another array, relativeLineNumbers
- traverse through the array and, for each deleted line (type === 'del'), increment the corresponding remaining relativeLineNumbers
- for another hunk (a line having @@), decrement the corresponding remaining relativeLineNumbers
I have found a solution. I didn't post it here earlier because it involves simple looping and nothing special, but I am answering now to help others.
I have opened a pull request to recreate the situation shown in the question:
https://github.com/harryi3t/5134/pull/7/files
Using the GitHub API, one can get the diff data:
diff --git a/test.js b/test.js
index 2aa9a08..066fc99 100644
--- a/test.js
+++ b/test.js
@@ -2,14 +2,7 @@
var hello = require('./hello.js');
-var names = [
- 'harry',
- 'barry',
- 'garry',
- 'harry',
- 'barry',
- 'marry',
-];
+var names = ['harry', 'barry', 'garry', 'harry', 'barry', 'marry'];
var names2 = [
'harry',
@@ -23,9 +16,7 @@ var names2 = [
// after this line new chunk will be created
var names3 = [
'harry',
- 'barry',
- 'garry',
'harry',
'barry',
- 'marry',
+ 'marry', 'garry',
];
Now just pass this data to the parse-diff module and do the computation:
var parseDiff = require('parse-diff'); // npm module used above to parse the raw diff

var parsedFiles = parseDiff(data); // data = the diff output fetched from the API
parsedFiles.forEach(
  function (file) {
    var relativeLine = 0;
    file.chunks.forEach(
      function (chunk, index) {
        if (index !== 0) // relative line number should increment for each chunk
          relativeLine++; // except the first one (see relative line 16 in the table below)
        chunk.changes.forEach(
          function (change) {
            relativeLine++;
            console.log(
              change.type,
              change.ln1 ? change.ln1 : '-',
              change.ln2 ? change.ln2 : '-',
              change.ln ? change.ln : '-',
              relativeLine
            );
          }
        );
      }
    );
  }
);
This would print
type    old (ln1) new (ln2) add/del (ln) relative line
normal  2         2         -            1
normal  3         3         -            2
normal  4         4         -            3
del     -         -         5            4
del     -         -         6            5
del     -         -         7            6
del     -         -         8            7
del     -         -         9            8
del     -         -         10           9
del     -         -         11           10
del     -         -         12           11
add     -         -         5            12
normal  13        6         -            13
normal  14        7         -            14
normal  15        8         -            15
normal  23        16        -            17
normal  24        17        -            18
normal  25        18        -            19
del     -         -         26           20
del     -         -         27           21
normal  28        19        -            22
normal  29        20        -            23
del     -         -         30           24
add     -         -         21           25
normal  31        22        -            26
Now you can use the relative line number to post a comment using the GitHub API.
For my purpose I only needed the relative line numbers for the newly added lines, but using the table above one can get it for deleted lines also.
Here's the link to the linting project in which I used this: https://github.com/harryi3t/lint-github-pr
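For completeness, here's a rough sketch of the follow-up request to the create-comment endpoint mentioned above, once the position has been computed (the token, comment body, and commit SHA are placeholders):
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  -d '{"body": "ESLint: unexpected console statement", "commit_id": "<head-commit-sha>", "path": "test.js", "position": 12}' \
  https://api.github.com/repos/harryi3t/5134/pulls/7/comments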