How does one send a REST request to an annotated gRPC endpoint?

I am not receiving a valid response when curling the REST endpoint annotated in the gRPC protobuf.
I'm currently running the bookstore server from here.
I've been able to hit the endpoint successfully via gRPC using the provided client:
$ python bookstore_client.py
ListShelves: shelves {
  id: 1
  theme: "Fiction"
}
shelves {
  id: 2
  theme: "Fantasy"
}
When I try to hit the corresponding REST endpoint, it gives me back a non-text (i.e. not JSON) response:
$ curl --raw --http2 localhost:8000/v1/shelves 2>/dev/null | xxd
00000000: 0000 1804 0000 0000 0000 0400 4000 0000 ............@...
00000010: 0500 4000 0000 0600 0020 00fe 0300 0000 ..@...... ......
00000020: 0100 0004 0800 0000 0000 003f 0001 0000 ...........?....
00000030: 0806 0000 0000 0000 0000 0000 0000 00 ...............
I receive this response no matter what the URI is; e.g. /v1/foobar gives the same result.
Here are the relevant lines from the protobuf:
rpc ListShelves(google.protobuf.Empty) returns (ListShelvesResponse) {
  // Define HTTP mapping.
  // Client example (Assuming your service is hosted at the given 'DOMAIN_NAME'):
  //   curl http://DOMAIN_NAME/v1/shelves
  option (google.api.http) = { get: "/v1/shelves" };
}
I expected the same response that the Python client gave me, but I'm receiving a non-text response from the gRPC server.

In that example, port 8000 is the gRPC endpoint, not the REST endpoint. The binary blob you're getting back is just the gRPC server speaking raw HTTP/2 (the dump begins with an HTTP/2 SETTINGS frame), which is why every URI gives the same result.
To serve the endpoint described by the annotations you need to run the Extensible Service Proxy (ESP). From the docs:
"Cloud Endpoints supports protocol transcoding so that clients can access your gRPC API by using HTTP/JSON. The Extensible Service Proxy (ESP) transcodes HTTP/JSON to gRPC."
The REST endpoint will then be served on a different port, set via ESP's --http_port option.
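For example, ESP can run in Docker in front of the gRPC server. The sketch below is only an outline (the image and flags follow the Cloud Endpoints docs, but SERVICE_NAME, credentials, and the ports depend on your deployment, so adjust to the ESP documentation):

# Sketch only: SERVICE_NAME is a placeholder, and your setup may also need
# credential flags such as a service account key.
docker run --detach --name=esp --net=host \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --service=SERVICE_NAME \
    --rollout_strategy=managed \
    --http_port=8080 \
    --backend=grpc://localhost:8000

# The transcoded REST call then goes to ESP, not to port 8000:
curl http://localhost:8080/v1/shelves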

HAProxy health check, particularly in mode tcp

I've looked at this previous question HAProxy health check and see that the HAProxy directives have changed significantly in this area. The "monitor" directive seems to be the modern way to do this.
I want to have a proxy running in tcp mode, that's capable of reporting its availability to clients.
I can have a separate listener in http mode that gives a 200 OK response:
frontend main
    # See "bind" documentation at https://docs.haproxy.org/2.6/configuration.html#4.2-bind
    # The proxy will listen on all interfaces for connections to the specified port.
    # Connections MUST use the Proxy Protocol (v1 or v2).
    # The proxy can also listen on IPv4 and IPv6.
    bind :::5000 accept-proxy
    bind *:5000 accept-proxy
    mode tcp
    # Detailed connection logging
    log global
    option tcplog
    # Only certain hosts (sending MTAs) can use this proxy, enforced via ACL
    acl valid_client_mta_hosts src 127.0.0.1 172.31.25.101
    tcp-request connection reject if !valid_client_mta_hosts
    use_backend out

frontend health_check
    mode http
    bind :::5001
    bind *:5001
    monitor-uri /haproxy_test
    log global  # comment this out to omit health checks from the logs
However, that seems to admit the possibility that 5001 is up while there's a problem with 5000.
Is there a way to enable monitoring directly of the mode tcp frontend with recent directives?
Here's a possible workaround:
Use a client that can add the Proxy Protocol header to ping the tcp front-end.
Make a request toward the proxy health service.
The source and destination of the request can be the loopback address.
./happie 35.90.110.253:5000 127.0.0.1:0 127.0.0.1:5001
Sending header version 2
00000000 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 0c |.......QUIT.!...|
00000010 7f 00 00 01 7f 00 00 01 00 00 13 89 |............|
HTTP/1.1 200 OK
content-length: 58
cache-control: no-cache
content-type: text/html
<html><body><h1>200 OK</h1>
Service ready.
</body></html>
You can use track for health checks on different ports: the server in the main backend inherits the health-check state of a server in a separate check-only backend, so the tcp service is only considered up while the check on the other port succeeds.
Example:
backend be_static
    # more config options
    server static_stor host:5000 track be_static_check_stor/static_check more_server_params

# check backend
backend be_static_check_stor
    # more config options
    server static_check host:5001 check more_server_params

HAProxy ACL for query-string "Authorization"

I am trying to create an ACL in HAProxy to read the Authorization request header and route to a backend based on the AccessID. I am using a map file populated with AccessIDs and backend servers. I am sure that my ACL is not working, and hence I am getting 503 for incoming requests. Any help is appreciated!
Config File:
frontend main
    bind *:80
    capture request header Authorization len 50
    acl GET_calls method GET HEAD OPTIONS
    acl PUT_calls method PUT
    use_backend %[urlp,map_sub(/etc/haproxy/PUT_Header.map)] if PUT_calls
Map File:
# AccessID backend server
JMYQ get_s1
P2BH get_s1
WEA1 get_s2
I have captured the request header in the log and I can see the AccessID:
Apr 8 10:10:29 localhost haproxy[79517]: 0.11.4.1:929 [08/Apr/2022:10:10:29.232] main main/<NOSRV> -1/-1/-1/-1/0 503 212 - - SC-- 0/0/0/0/0 0/0 {Credential=WEA1} "PUT /common/Demo2.file HTTP/1.1"
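For what it's worth, urlp fetches a URL query-string parameter (and needs a parameter name), not a request header. A sketch of a header-based lookup, assuming the AccessID is carried in the Authorization header value and that no_match is a fallback backend you would define yourself, might look like:

frontend main
    bind *:80
    capture request header Authorization len 50
    acl PUT_calls method PUT
    # Sketch only: look the Authorization header value up in the map,
    # falling back to the (hypothetical) no_match backend.
    use_backend %[req.hdr(Authorization),map_sub(/etc/haproxy/PUT_Header.map,no_match)] if PUT_calls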

Snort rule content for src and dst address

If I want to alert on this traffic with a Snort rule:
Ethernet II, Src: Xircom_c5:7c:38 (00:10:a4:c5:7c:38), Dst: 3comCorp_a8:61:24 (00:60:08:a8:61:24)
I try to use:
alert tcp any any -> any any (content:"|00 60 08 a8 61 24|"; content:"|00 10 a4 c5 7c 38|"; nocase; msg:"Alert")
It does not seem to work.
Snort does not work at the MAC-address level; it works with the TCP, UDP, ICMP and IP protocols.
Your rule is a tcp rule, so the packets it matches carry at least a 20-byte header, possibly up to 60 bytes depending on options.
Since Snort content rules only match in the payload, each of your content terms content:"|00 60 08 a8 61 24|" and content:"|00 10 a4 c5 7c 38|" can only match data after the initial header (20 - 60 bytes), never the Ethernet addresses, which sit in front of the IP header.
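If the goal is just to see traffic between those two MAC addresses, one alternative outside Snort is a link-layer capture filter. A tcpdump sketch (the interface name eth0 is an assumption):

# tcpdump, not Snort: BPF capture filters can match on the Ethernet header;
# -e prints the link-level header for each packet.
tcpdump -e -i eth0 'ether src 00:10:a4:c5:7c:38 and ether dst 00:60:08:a8:61:24'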

Send raw Ethernet frame with custom data after EtherType using nping

I am using nping to send a raw Ethernet frame. I want to send a frame with custom data starting right after the EtherType. However, nping puts the custom data in the middle of the packet. For example, here's my command:
nping --dest-mac <my mac> --ether-type 0xd2d2 -e eth0 --send-eth --data 00010028 192.168.2.10
and here's what I see on the receiver:
0x0000: 8cfd f000 cb16 9410 3eb8 483d d2d2 4500
0x0010: 0020 f412 0000 4001 0169 c0a8 0207 c0a8
0x0020: 020a 0800 9a72 5d61 0003 0001 0028 0000
0x0030: 0000 0000 0000 0000 0000 0000
In the third line, I want the 6th and 7th half-words, 0001 0028, to come right after 0xd2d2.
The data nping inserted between the EtherType and your custom payload is an IP header (followed by an ICMP echo header), i.e. nping built a full ICMP probe packet.
I'm not familiar with nping, but I suspect the 192.168.2.10 you put at the end of your command is what's going wrong: it is encoded in the 16th and 17th half-words (the destination IP address) as c0a8 020a. nping probably added the IP header because you specified 192.168.2.10 as a target.
Try the command without 192.168.2.10, or with <my mac> instead of 192.168.2.10.
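If nping can't be convinced to skip the IP layer, a different route is to build the frame yourself with a raw packet socket. A minimal Python sketch, assuming Linux AF_PACKET sockets, root privileges, the interface eth0, and the MAC addresses taken from the dump above as placeholders:

import socket

IFACE = "eth0"                             # placeholder interface name
DST_MAC = bytes.fromhex("8cfdf000cb16")    # receiver MAC (from the dump above)
SRC_MAC = bytes.fromhex("94103eb8483d")    # sender MAC (from the dump above)
ETHER_TYPE = (0xd2d2).to_bytes(2, "big")   # custom EtherType
PAYLOAD = bytes.fromhex("00010028")        # custom data, directly after the EtherType

frame = DST_MAC + SRC_MAC + ETHER_TYPE + PAYLOAD

# AF_PACKET/SOCK_RAW sends the frame as-is, starting at the Ethernet header.
with socket.socket(socket.AF_PACKET, socket.SOCK_RAW) as s:
    s.bind((IFACE, 0))
    s.send(frame)

The driver pads short frames to the 60-byte Ethernet minimum, which is why the receiver dump above ends in a run of zero bytes.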

Decipher APDU for OpenPGP smart card applet

I'm implementing data deciphering in my Java application using the javax.smartcardio APIs. I'm using a Yubikey NEO smart card element. I managed to:
Select the OpenPGP applet (SW=9000).
Present the right PIN to the applet (SW=9000).
Encrypt data using the matching certificate with Bouncy Castle.
The encrypted message is OK (or at least usable): I successfully deciphered the ASCII-armored version of it using the gpg tool and the Yubikey.
I'm not able to replicate the same thing with Java.
My encrypted data length is 313 bytes.
I'm sending two APDUs (the Yubikey does not seem to support extended APDUs).
The result is SW=6f00.
The key is 2048 bits long; I tried truncating the data to 256 bytes as mentioned in the GPG source code, but without any success.
The APDUs I'm using:
10 2a 80 86 ca 00 85 ..data.. d1 99 00 (208 bytes) sw=9000
00 2a 80 86 70 0f e9 ..data.. 71 85 00 (118 bytes) sw=6700