I'm implementing data deciphering in my Java application using the javax.smartcardio API, with a YubiKey NEO as the smart card element. I managed to:
Select the OpenPGP applet (SW=9000).
Present the correct PIN to the applet (SW=9000).
Encrypt data with the matching certificate using Bouncy Castle.
The encrypted message is OK (or at least usable): I successfully deciphered an ASCII-armored version of it using the gpg tool and the YubiKey.
I'm not able to replicate the same thing in Java.
My encrypted data is 313 bytes long.
I'm sending two APDUs (the YubiKey does not seem to support extended APDUs).
The result is SW=6f00.
The key is 2048 bits long; I tried truncating the data to 256 bytes, as mentioned in the GnuPG source code, but without success.
The APDUs I'm using:
10 2a 80 86 ca 00 85 ..data.. d1 99 00 (208 bytes) SW=9000
00 2a 80 86 70 0f e9 ..data.. 71 85 00 (118 bytes) SW=6700
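For reference, here is a minimal sketch of how the command chaining for PSO:DECIPHER is usually built (an assumption-laden illustration, not your exact code): with a 2048-bit RSA key, the OpenPGP card expects the raw 256-byte ciphertext prefixed with a 0x00 padding-indicator byte, split across short APDUs with CLA bit 0x10 set on every block except the last.

```java
import java.util.ArrayList;
import java.util.List;

public class ApduChaining {

    // Build PSO:DECIPHER command-chaining APDUs for the OpenPGP card applet.
    // ciphertext: the raw RSA ciphertext (256 bytes for a 2048-bit key).
    // chunkSize: maximum data bytes per APDU (must fit in a short APDU).
    static List<byte[]> buildDecipherApdus(byte[] ciphertext, int chunkSize) {
        // PSO:DECIPHER expects a 0x00 padding-indicator byte before the ciphertext.
        byte[] data = new byte[ciphertext.length + 1];
        data[0] = 0x00;
        System.arraycopy(ciphertext, 0, data, 1, ciphertext.length);

        List<byte[]> apdus = new ArrayList<>();
        for (int off = 0; off < data.length; off += chunkSize) {
            int len = Math.min(chunkSize, data.length - off);
            boolean last = (off + len == data.length);
            // CLA 0x10 = "more command blocks follow"; the final block uses
            // CLA 0x00 and carries Le=0x00 so the card returns the plaintext.
            byte[] apdu = new byte[(last ? 6 : 5) + len];
            apdu[0] = (byte) (last ? 0x00 : 0x10); // CLA
            apdu[1] = (byte) 0x2A;                 // INS: PERFORM SECURITY OPERATION
            apdu[2] = (byte) 0x80;                 // P1
            apdu[3] = (byte) 0x86;                 // P2: DECIPHER
            apdu[4] = (byte) len;                  // Lc
            System.arraycopy(data, off, apdu, 5, len);
            if (last) {
                apdu[apdu.length - 1] = 0x00;      // Le
            }
            apdus.add(apdu);
        }
        return apdus;
    }

    public static void main(String[] args) {
        // Dummy 256-byte ciphertext standing in for the real RSA output.
        for (byte[] apdu : buildDecipherApdus(new byte[256], 200)) {
            System.out.println(apdu.length);
        }
    }
}
```

Each resulting byte array can be wrapped in a javax.smartcardio CommandAPDU and transmitted in order; only the response to the final block should carry the plaintext. Note the sketch sends only the 257 bytes (indicator + raw ciphertext), not the whole 313-byte OpenPGP packet.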
I am generating a Pass Push Key for Apple (for the purpose of creating passes).
I requested a certificate from a certificate authority, went to the dev portal, and added the file that was created to Keychain.
I then exported it: in Keychain, under Login and My Credentials, I selected both keys and exported them as a P12.
When I exported it, it was missing the private key, as shown below.
There are multiple tutorials on how to do this (https://code.google.com/archive/p/apns-sharp/wikis/HowToCreatePKCS12Certificate.wiki), (aps_developer_identity.cer to p12 without having to export from Key Chain?) and they all seem to fail with the same problem.
I have rebooted the entire machine after importing the certificate, and I have created 5 different certificates with the same problem.
Was there an update in 2023?
Bag Attributes
friendlyName: Pass Type ID: pass.generic.vaultie.io
localKeyID: F0 55 5E C3 AF 1F 69 F9 86 81 BC B5 9E AC 22 DA 26 81 03 F3
subject=/UID=pass.generic.vaultie.io/CN=Pass Type ID: pass.generic.vaultie.io/OU=6G63YAX437/O=Vaultie Inc./C=US
issuer=/CN=Apple Worldwide Developer Relations Certification Authority/OU=G4/O=Apple Inc./C=US
-----BEGIN CERTIFICATE-----
MIIGGTCCBQGgAwIBAgIQNxpLh........
-----END CERTIFICATE-----
Bag Attributes
friendlyName: With-KeyPair
localKeyID: F0 55 5E C3 AF 1F 69 F9 86 81 BC B5 9E AC 22 DA 26 81 03 F3
Key Attributes: <No Attributes>
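As a sanity check, you can verify whether a .p12 actually contains a private key with OpenSSL before fighting Keychain further. A minimal round-trip sketch, using a throwaway self-signed certificate as a stand-in for the Apple pass certificate (all file names and the password are placeholders):

```shell
# Generate a throwaway key and self-signed certificate
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=pass.example.test"

# Bundle key + certificate into a PKCS#12 file, as the Keychain export should do
openssl pkcs12 -export -inkey key.pem -in cert.pem \
    -out bundle.p12 -passout pass:test

# Inspect the bundle: a healthy export prints a "-----BEGIN PRIVATE KEY-----"
# block after the "Key Attributes" line, unlike the truncated output above
openssl pkcs12 -in bundle.p12 -nodes -passin pass:test | grep "PRIVATE KEY"
```

If the same inspection on your exported file stops at "Key Attributes" with no key block, the private key never made it into the export, which matches the symptom above.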
I've looked at this previous question (HAProxy health check) and see that the HAProxy directives have changed significantly in this area; the "monitor" directive seems to be the modern way to do this.
I want to have a proxy running in tcp mode that's capable of reporting its availability to clients.
I can have a separate listener in http mode that gives a 200 OK response:
frontend main
# See "bind" documentation at https://docs.haproxy.org/2.6/configuration.html#4.2-bind
# The proxy will listen on all interfaces for connections to the specified port.
# Connections MUST use the Proxy Protocol (v1 or v2).
# The proxy can also listen on IPv4 and IPv6.
bind :::5000 accept-proxy
bind *:5000 accept-proxy
mode tcp
# Detailed connection logging
log global
option tcplog
# Only certain hosts (sending MTAs) can use this proxy, enforced via ACL
acl valid_client_mta_hosts src 127.0.0.1 172.31.25.101
tcp-request connection reject if !valid_client_mta_hosts
use_backend out
frontend health_check
mode http
bind :::5001
bind *:5001
monitor-uri /haproxy_test
log global # comment this out to omit healthchecks from the logs
However, that admits the possibility that port 5001 is up while there's a problem with port 5000.
Is there a way to enable monitoring directly of the mode tcp frontend with recent directives?
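One way to make the http health listener reflect the state of the tcp frontend's traffic path is the "monitor fail" directive combined with an nbsrv() ACL. A sketch, assuming the tcp frontend routes to a backend named "out" (as in the use_backend line above) and that its servers have "check" enabled:

```
frontend health_check
    mode http
    bind :::5001
    bind *:5001
    # Report failure when the tcp backend has no live servers left
    acl out_down nbsrv(out) lt 1
    monitor-uri /haproxy_test
    monitor fail if out_down
```

This still can't prove that port 5000 itself is reachable from outside, but it at least ties the 200 OK to the availability of the servers the tcp frontend would use.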
Here's a possible workaround:
Use a client that can add the PROXY protocol header to ping the tcp front-end.
Make a request toward the proxy health service.
The source and destination of the request can be the loopback address.
./happie 35.90.110.253:5000 127.0.0.1:0 127.0.0.1:5001
Sending header version 2
00000000 0d 0a 0d 0a 00 0d 0a 51 55 49 54 0a 21 11 00 0c |.......QUIT.!...|
00000010 7f 00 00 01 7f 00 00 01 00 00 13 89 |............|
HTTP/1.1 200 OK
content-length: 58
cache-control: no-cache
content-type: text/html
<html><body><h1>200 OK</h1>
Service ready.
</body></html>
You can use track for health checks on different ports.
Example code
backend be_static
# more config options
server static_stor host:5000 track be_static_check_stor/static_check more_server_params
# check backend
backend be_static_check_stor
# more config options
server static_check host:5001 check more_server_params
I am not receiving a valid response when curling the REST-annotated endpoint defined in the gRPC protobuf.
I'm currently running the bookstore server from here.
I've been able to hit the endpoint successfully via gRPC using the provided client:
$ python bookstore_client.py
ListShelves: shelves {
id: 1
theme: "Fiction"
}
shelves {
id: 2
theme: "Fantasy"
}
When I try to hit the corresponding REST endpoint, it gives me back a non-text (i.e. not JSON) response:
$ curl --raw --http2 localhost:8000/v1/shelves 2>/dev/null | xxd
00000000: 0000 1804 0000 0000 0000 0400 4000 0000 ............@...
00000010: 0500 4000 0000 0600 0020 00fe 0300 0000 ..@...... ......
00000020: 0100 0004 0800 0000 0000 003f 0001 0000 ...........?....
00000030: 0806 0000 0000 0000 0000 0000 0000 00 ...............
I receive this response no matter what the URI is; e.g., /v1/foobar gives the same result.
Here are the relevant lines from the protobuf
rpc ListShelves(google.protobuf.Empty) returns (ListShelvesResponse) {
// Define HTTP mapping.
// Client example (Assuming your service is hosted at the given 'DOMAIN_NAME'):
// curl http://DOMAIN_NAME/v1/shelves
option (google.api.http) = { get: "/v1/shelves" };
}
I expected the same response that the Python client gave me, but I'm receiving a non-text response from the gRPC server.
In that example, port 8000 is the gRPC endpoint, not the REST endpoint.
To serve the endpoint described by the annotations, you need to run the Extensible Service Proxy (ESP). From the docs:
"Cloud Endpoints supports protocol transcoding so that clients can access your gRPC API by using HTTP/JSON. The Extensible Service Proxy (ESP) transcodes HTTP/JSON to gRPC."
The REST endpoint will be served on a different port, set via the ESP --http_port option.
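For example, ESP can be started in Docker roughly like this (a sketch: the service name, project, and ports are placeholders, and the exact flags depend on the ESP version):

```
docker run -d -p 8080:8080 gcr.io/endpoints-release/endpoints-runtime:1 \
    --service=bookstore.endpoints.MY_PROJECT.cloud.goog \
    --rollout_strategy=managed \
    --http_port=8080 \
    --backend=grpc://localhost:8000
```

With that running, curl http://localhost:8080/v1/shelves should return the JSON transcoding of ListShelvesResponse, while port 8000 keeps speaking raw gRPC.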
I want to alert on this traffic with a Snort rule:
Ethernet II, Src: Xircom_c5:7c:38 (00:10:a4:c5:7c:38), Dst: 3comCorp_a8:61:24 (00:60:08:a8:61:24)
Try to use:
alert tcp any any -> any any (content:"|00 60 08 a8 61 24|"; content:"|00 10 a4 c5 7c 38|"; nocase; msg:"Alert")
It doesn't seem to work.
Snort does not operate at the MAC-address level; it works with the TCP, UDP, ICMP, and IP protocols.
Your rule is a TCP rule, so the packet will have a minimum 20-byte header, possibly up to 60 bytes depending on options.
Since Snort content rules only match in the payload, each of your content terms, content:"|00 60 08 a8 61 24|" and content:"|00 10 a4 c5 7c 38|", will only match after the initial headers (20-60 bytes), and therefore can never match the Ethernet addresses.
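If the goal is to flag traffic from those specific machines, the usual workaround is to key on their IP addresses rather than MACs. A sketch (the addresses and sid are placeholders, not taken from your capture):

```
alert tcp 192.168.1.10 any -> 192.168.1.20 any (msg:"Traffic between tracked hosts"; sid:1000001; rev:1;)
```

This only works while the hosts keep those IPs; MAC-based filtering has to happen outside Snort's rule engine.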
First, thank you to all the members of this great community.
I have an awkward problem: the page http://www.tophebergeur.com/hebergement/perl/ has a TTFB of more than 40 seconds.
Here is the info from http://www.webpagetest.org/result/150625_AS_188H/1/details/:
Error/Status Code: 200
Client Port: 2034
Request Start: 0.426 s
DNS Lookup: 367 ms
Initial Connection: 59 ms
Time to First Byte: 34765 ms
Content Download: 21 ms
Bytes In (downloaded): 14.2 KB
Bytes Out (uploaded): 0.4 KB
But when I filter this list by country, the TTFB problem is gone: /hebergement/perl/canada/
I looked in the server logs but couldn't find where the problem is.
Thanks in advance.
The problem has been solved. Apparently it was a caching problem: I'm using a CDN that failed to load the cache of this page. Thank you all, and I appreciate your help :)