Java Card: Problems selecting app with APDUtool - Eclipse

I'm using Eclipse with EclipseJCDE.
I made a simple Java Card applet as a .cap file to install on the simulator. I don't know whether the installation succeeded, because the download script is a bunch of APDU commands which I don't understand. Is there any way to see which applets are currently on the simulator and what their AIDs are?
I then made a script for APDUtool with just one command, which selects the applet using the AID given in the .jca file in my project.
The AID for my applet:
0x1:0x2:0x3:0x4:0x5:0x6:0x7:0x8:0x9:0x0:0x0.
The command I made for selecting the applet:
0x00 0xA4 0x04 0x00 0x0b 0x1 0x2 0x3 0x4 0x5 0x6 0x7 0x8 0x9 0x0 0x0
The 0x00 0xA4 0x04 0x00 at the beginning is the SELECT header, then 0x0b is the length (Lc), then the AID, and the 0x0 at the end is the Le byte, which I don't think matters for this command. When I run this script with APDUtool I get this:
CLA: 00
INS: a4
P1: 04
P2: 00
Lc: 0b 01 02 03 04 05 06 07 08 09 00 00
Le: 00
SW1: 6d
SW2: 00
I believe the SW1 and SW2 bytes are the response to my command and I think 6d means it didn't find or wasn't able to load the applet. What am I doing wrong?

6D00 means "instruction not supported or invalid" (INS byte 'A4' does not exist in class '00').
Post the full trace of APDUs exchanged after the ATR; otherwise I recommend checking section 10 of http://www.etsi.eu/deliver/etsi_ts/102200_102299/102221/08.02.00_60/ts_102221v080200p.pdf, for example.
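For reference, here is a small sketch (plain Python; the AID is the example one from the question) that assembles the SELECT-by-AID command APDU byte by byte, which makes it easy to double-check the Lc and Le bookkeeping before putting the line into an APDUtool script:

# Sketch: build a SELECT-by-AID command APDU (ISO 7816-4 layout).
aid = bytes([0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x00, 0x00])

apdu = bytes([
    0x00,      # CLA: interindustry class
    0xA4,      # INS: SELECT
    0x04,      # P1:  select by DF name (AID)
    0x00,      # P2:  first or only occurrence
    len(aid),  # Lc:  0x0B, length of the data field
]) + aid + bytes([0x00])  # data field (the AID), then Le

print(" ".join(f"0x{b:02X}" for b in apdu))
# -> 0x00 0xA4 0x04 0x00 0x0B 0x01 ... 0x09 0x00 0x00 0x00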


Issue with PowerShell ConvertTo-Json adding un-printable characters to beginning of file

I am having a problem with a PowerShell ConvertTo-Json command. The resulting file has two non-printable characters as the first two characters of the file. Running the file through Format-Hex shows:
00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
00000000 FF FE 5B 00 0D 00 0A 00 20 00 20 00 20 00 20 00 .þ[..... . . . .
00000010 7B 00 0D 00 0A 00 20 00 20 00 20 00 20 00 20 00 {..... . . . . .
...
The bad characters are the FF and FE in the 00 and 01 positions. The command I am using to generate the file is the following:
$search.resources.attributes |select $Object.Attributes.ID |Sort-Object -Property displayname | ConvertTo-Json | Out-File ([Environment]::GetFolderPath("Desktop")+"\RL_Identities.json")
The $search is a result of an Invoke-RestMethod call.
I am importing this file into an Oracle CLOB column. This column has a CHECK (COLUMN_NAME IS JSON) check constraint and it fails with the two un-printable characters in the file. If I open the file in Notepad++, do a select all and copy/paste to a new file, the new file loads perfectly because it doesn't have the two characters at the beginning.
Is there any reason the two characters are there? Is it a "feature" of the ConvertTo-Json command, or could it be coming from the data in the Invoke-RestMethod call? If there is no way to prevent these characters from being there, is there a way to programmatically remove the first two bytes of the file?
Your file uses the "Unicode" (UTF-16LE) character encoding, which is what you get by default in Windows PowerShell when you use > / Out-File.
The first two bytes, FF and FE, make up the so-called BOM (byte-order mark), aka Unicode signature, which identifies the encoding.
Instead, use Set-Content (or Out-File) with the -Encoding parameter to specify the desired encoding.
The caveat is that if you need UTF-8, -Encoding utf8 in Windows PowerShell creates UTF-8 files with a BOM, which not all consumers understand.
In PowerShell (Core) 7+, by contrast, you get BOM-less UTF-8 by default, across all cmdlets (and therefore also with >).
If you're on Windows PowerShell and need to create a BOM-less UTF-8 file, you can use the following workaround via New-Item:
# Creates out.txt with BOM-less UTF-8 encoding.
# Note that the -Value argument must be a single, potentially multi-line
# string and a trailing newline is NOT added.
$null = New-Item -Force out.txt -Value (
  ConvertTo-Json 'hü'
)
The sample ConvertTo-Json call results in verbatim "hü". Passing the resulting file to Format-Hex shows that the file has no BOM and that ü (LATIN SMALL LETTER U WITH DIAERESIS, U+00FC) is correctly encoded as UTF-8 byte sequence 0xC3, 0xBC:
Path: C:\Users\jdoe\out.txt
00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
00000000 22 68 C3 BC 22 "hü"
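If a UTF-16LE file with its FF FE BOM has already been produced and you just need to re-encode it before the Oracle import, here is a small sketch outside of PowerShell (plain Python; the file name is the one from the question) that rewrites the file as BOM-less UTF-8:

# Sketch: re-encode a BOM-prefixed UTF-16LE file as BOM-less UTF-8.
path = "RL_Identities.json"

with open(path, encoding="utf-16") as f:      # decodes and strips the FF FE BOM
    text = f.read()

with open(path, "w", encoding="utf-8") as f:  # rewrites as UTF-8 without a BOM
    f.write(text)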

cephfs seems to have a problem with linux 5.11 kernels

I upgraded Fedora 33 recently and found that my CephFS mounts no longer work. After hours of debugging and looking around, I realized that a new 5.11.x kernel had been installed; before that I had 5.10.x. I rebooted into 5.10 and everything was fine. To verify that the kernel version is the problem, I installed a recent Ubuntu 21.04 with kernel 5.11.0: it showed the same problem. For now I have pinned my kernel to boot 5.10 and I can live with that, but there seems to be a serious problem with kernels newer than 5.10.
I'm running Ceph Octopus. Any ideas?
Adding ms_mode=legacy does not help.
When I try to mount, I get lots of kernel log messages starting with:
Apr 26 09:22:15 ubuntu kernel: libceph: no match of type 2 in addrvec
Apr 26 09:22:15 ubuntu kernel: libceph: corrupt full osdmap (-2) epoch 64001 off 3154 (0000000073edcb82 of 00000000aaa67e88-00000000ea93de62)
Apr 26 09:22:15 ubuntu kernel: osdmap: 00000000: 08 07 72 20 00 00 09 01 9e 12 00 00 86 bb d6 c5 ..r ............
Apr 26 09:22:15 ubuntu kernel: osdmap: 00000010: ae 96 4c 78 8a 5e 50 62 3f 0a e5 24 01 fa 00 00 ..Lx.^Pb?..$....
Apr 26 09:22:15 ubuntu kernel: osdmap: 00000020: 54 f0 53 5d 3a fd ae 0e 3e ea 85 60 07 ab 94 2b T.S]:...>..`...+
Apr 26 09:22:15 ubuntu kernel: osdmap: 00000030: 06 00 00 00 02 00 00 00 00 00 00 00 1d 05 44 01 ..............D.
Apr 26 09:22:15 ubuntu kernel: osdmap: 00000040: 00 00 01 02 02 02 20 00 00 00 20 00 00 00 00 00 ...... ... .....
.....
Apr 26 09:22:15 ubuntu kernel: libceph: osdc handle_map corrupt msg
....
Magnus
I can confirm this. I boot my Linux with 5.10.x and it works well, but when I switch to 5.11.x I get the corrupt-osdmap message and cannot attach my RBD volumes.
Something is wrong here. Can you open an issue with the Ceph tracker and post the link here, please?

Control USART RTS pin from driver on embedded board

I'm porting the lirc_serial kernel module to work on our embedded board. We only need to implement the IR transmitter function.
For the transmitter alone, the custom driver only needs to control the RTS pin on /dev/ttyS0 from within the module.
On standard hardware, the driver loads:
00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
NEW APPROACH
I have not figured out how to deactivate the current driver for the serial port, so rather than creating a new driver, how would you use the current 8250-dw driver to change the RTS pin on behalf of my kernel module? Something like this?
Using line disciplines, as described in this article on the SLIP driver, looked promising: just take slip.c and remove the network side of the code. But it needs a user-space program (slattach or dip) to open /dev/ttyS0 and activate the line discipline.
Is that possible (or a good idea) from within a kernel module?
In this similar question, How do I open/write/read a uart device from a kernel module?, Ian Abbott suggested backporting serdev to kernel 4.9.
That's getting a bit involved and we're already behind schedule. Is there an easier way?
ORIGINAL QUESTION
However, the embedded board (based on the Bay Trail Atom E3845) has the serial port controller in memory-mapped I/O:
80860F0A:00: ttyS0 at MMIO 0x90a0c000 (irq = 39, base_baud = 2764800) is a 16550A
80860F0A:01: ttyS1 at MMIO 0x90a0e000 (irq = 40, base_baud = 2764800) is a 16550A
I'm new to driver development. I guess 0x90a0c000 is the physical address of the controller?
To probe the module, I first remapped 0x90a0c000 to a virtual address using ioremap_nocache and then tried to reserve the memory using request_mem_region. That failed.
ioVirtBase = ioremap_nocache(iommap, 8);
TQTRACE("ecp_serial_probe: devm_ioremap for MMIO 0x%X returned 0x%X\n", (uint32_t)iommap, (uint32_t)ioVirtBase);
if (ioVirtBase != NULL)
{
    tqDumpBuffer(ioVirtBase, 8);
}

tqRes = request_mem_region((uint32_t)ioVirtBase, 8, ECP_DRIVER_NAME);
TQTRACE("ecp_serial_probe: request_mem_region for 0x%X returned 0x%X\n", (uint32_t)ioVirtBase, (uint32_t)tqRes);
if (!tqRes)
{
    TQTRACE("ecp_serial_probe: Cannot request memory at 0x%X\n", (uint32_t)iommap);
    return -ENXIO;
}
Is this the correct order of the functions?
Also, it seems request_mem_region fails because the device is under the control of 80860F0A? There is no such entry in lsmod, but there is an entry in /sys/devices.
Do I need to unload that driver to control the USART? How?
# ls -l /sys/devices/platform/80860F0A\:00
lrwxrwxrwx 1 root root 0 Jul 8 23:30 driver -> ../../../bus/platform/drivers/dw-apb-uart
-rw-r--r-- 1 root root 4096 Jul 9 17:02 driver_override
lrwxrwxrwx 1 root root 0 Jul 9 17:14 firmware_node -> ../../LNXSYSTM:00/LNXSYBUS:00/80860F0A:00
-r--r--r-- 1 root root 4096 Jul 9 17:02 modalias
drwxr-xr-x 2 root root 0 Jul 9 17:02 power
lrwxrwxrwx 1 root root 0 Jul 8 23:30 subsystem -> ../../../bus/platform
drwxr-xr-x 3 root root 0 Jul 8 23:30 tty
-rw-r--r-- 1 root root 4096 Jul 9 17:02 uevent
drwxr-xr-x 3 root root 0 Jul 8 23:30 VCOM0001:00
The dmesg output is below. Dumping the data at the remapped virtual address is not consistent: sometimes it is all 0xFF, other times it is 00 00 00 00 41 02 1C 48. I don't understand that either...
MARK Tue Jul 9 17:45:35 SGT 2019
ecp_serial: ecp_serial_exit_module()
Spectre V2 : System may be vulnerable to spectre v2
ecp_serial: loading module not compiled with retpoline compiler.
ecp_serial: ecp_serial_init_module()
ecp_serial: ecp_serial_init()
ecp_serial: ecp_serial_probe() iommap=0x90A0C000
ecp_serial: ecp_serial_probe: devm_ioremap for MMIO 0x90A0C000 returned 0xE3296000
ecp_serial: Dump address 0xE3296000:
00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
0000 FF FF FF FF FF FF FF FF
ecp_serial: ecp_serial_probe: request_mem_region for 0xE3296000 returned 0x0
ecp_serial: ecp_serial_probe: Cannot request memory at 0x90A0C000
platform ecp_serial.0: lirc_dev: driver ecp_serial registered at minor = 0
MARK Tue Jul 9 17:46:08 SGT 2019
ecp_serial: ecp_serial_exit_module()
Spectre V2 : System may be vulnerable to spectre v2
ecp_serial: loading module not compiled with retpoline compiler.
ecp_serial: ecp_serial_init_module()
ecp_serial: ecp_serial_init()
ecp_serial: ecp_serial_probe() iommap=0x90A0C000
ecp_serial: ecp_serial_probe: devm_ioremap for MMIO 0x90A0C000 returned 0xE32A2000
ecp_serial: Dump address 0xE32A2000:
00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
0000 00 00 00 00 41 02 1C 48
ecp_serial: ecp_serial_probe: request_mem_region for 0xE32A2000 returned 0x0
ecp_serial: ecp_serial_probe: Cannot request memory at 0x90A0C000
platform ecp_serial.0: lirc_dev: driver ecp_serial registered at minor = 0
What /proc/iomem has to say:
90a0c000-90a0cfff : 80860F0A:00
90a0e000-90a0efff : 80860F0A:01
So indeed, the memory is under the control of another driver... but how do I unload it if it is not listed in lsmod?
# rmmod 80860F0A:00
ERROR: Module 80860F0A:00 does not exist in /proc/modules
# rmmod 80860F0A
ERROR: Module 80860F0A does not exist in /proc/modules
OS INFO
# uname -a
Linux ecp 4.4.127-1.el6.elrepo.i686 #1 SMP Sun Apr 8 09:44:43 EDT 2018 i686 i686 i386 GNU/Linux
# cat /etc/centos-release
CentOS release 6.6 (Final)

J1939 RTR Issue

I have an issue with RTR frames using candump and cansend.
Dumping the broadcast data is no problem.
Architecture:
A Raspberry Pi with a PiCAN shield reading data from a J1939 simulator.
I run candump to receive all messages on the bus, and I get an ACK frame back from the simulator when I execute a cansend for PGN FEEC. I'm requesting a preprogrammed VIN, but I get nothing back. Here is what I'm seeing from candump:
can0 18FEF500 [8] 7D FF FF 40 25 4B FF FF '}..#%K..'
can0 18FEE900 [8] D1 4B 03 00 D1 4B 03 00 '.K...K..'
can0 18FEF700 [8] FF FF FF FF E0 01 FF FF '........'
can0 18FECA00 [8] 03 FF 00 00 00 00 00 00 '........'
can0 00FEEC00 [0] remote request
can0 18E80000 [8] 01 FF FF FF FF EC FE 00 '........'
can0 0CF00300 [8] FF 7D 7D FF FF FF FF FF '.}}.....'
can0 18FE6C00 [8] FF FF FF FF FF FF 80 7D '.......}'
can0 0CF00400 [8] FF FF 7D 80 7D FF FF FF '..}.}...'
The E800 PGN is a standard ACK message.
And the message I am sending while candump is running:
cansend can0 00feec00#r
Basically, I'm not getting the PGN for VIN back. Any ideas?
It turns out there are a couple of issues here.
1- #r (an RTR frame) is not supported with J1939.
2- You don't request PGNs by asking for that PGN directly; the method is to send data to a specific PGN which handles requests. Example below:
EA00 is the PGN to send the data to. Inside the data of that message lives the PGN we want to request, least-significant byte first, so PGN FEE5 becomes E5 FE. Three data bytes are required, which is why a 00 appears at the end of the message below.
Here is the working request for Engine Hours:
cansend can0 18EA00FF#E5FE00
and the response:
21 00 00 00 8F 01 00 00
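To make the byte bookkeeping explicit, here is a small sketch (plain Python; the addresses and PGN are the ones used in this answer) that assembles the 29-bit CAN ID and the request data, and prints the equivalent cansend line:

# Sketch: build the J1939 Request frame used above.
priority   = 6       # default J1939 priority for a request
pdu_format = 0xEA    # PF of the Request PGN (EA00); PDU1, so PS = destination
dest_addr  = 0x00    # address of the ECU being asked (the simulator's engine)
src_addr   = 0xFF    # source address used in the working command above
target_pgn = 0xFEE5  # Engine Hours, the PGN we want sent back

can_id = (priority << 26) | (pdu_format << 16) | (dest_addr << 8) | src_addr  # EDP/DP bits are 0 here
data   = target_pgn.to_bytes(3, "little")    # LSB first, padded to 3 bytes

print(f"cansend can0 {can_id:08X}#{data.hex().upper()}")
# -> cansend can0 18EA00FF#E5FE00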

Why does my Perl CGI program return a server error?

I recently got into learning CGI and set up an Ubuntu server in VirtualBox. The first program I wrote was in Python, using vim over ssh. Then I installed Eclipse on my Windows 7 station and created the equivalent Perl file; just a simple hello-world deal.
I tried running it and was getting a 500 error, while the Python code in the same directory (/usr/lib/cgi-bin) was showing up fine. Frustrated, I checked and triple-checked the permissions and that the file began with #!/usr/bin/perl. I also checked whether AddHandler was set for .pl. Everything was set fine, and on a whim I decided to write the exact same code on the server using vim, as I had done with the Python file.
Lo and behold, it worked. I compared the two files, thinking I'd gone mad, and they are exactly the same. So, what's the deal? Why is a file made in Eclipse on Windows 7 different from a file made on the Ubuntu server with vim? Do they have different binary headers or something? This could really affect my development environment.
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "Testing.";
Apache error log:
[Tue Aug 07 12:32:02 2012] [error] [client 192.168.1.8] (2)No such file or directory: exec of '/usr/lib/cgi-bin/test.pl' failed
[Tue Aug 07 12:32:02 2012] [error] [client 192.168.1.8] Premature end of script headers: test.pl
[Tue Aug 07 12:32:02 2012] [error] [client 192.168.1.8] File does not exist: /var/www/favicon.ico
This is the error I keep getting.
I think you have some spurious \r characters on the first line of your Perl script when you write it in Windows.
For example I created the following file on Windows:
#!/usr/bin/perl
code goes here
When viewed with hexdump it shows:
00000000 23 21 2f 75 73 72 2f 62 69 6e 2f 70 65 72 6c 0d |#!/usr/bin/perl.|
00000010 0a 0d 0a 63 6f 64 65 20 67 6f 65 73 20 68 65 72 |...code goes her|
00000020 65 0d 0a |e..|
00000023
Notice the 0d (\r) bytes in that dump. If I try to run this using ./test.pl I get:
zsh: ./test.pl: bad interpreter: /usr/bin/perl^M: no such file or directory
Whereas if I write the same code in Vim on a UNIX machine I get:
00000000 23 21 2f 75 73 72 2f 62 69 6e 2f 70 65 72 6c 0a |#!/usr/bin/perl.|
00000010 0a 63 6f 64 65 20 67 6f 65 73 20 68 65 72 65 0a |.code goes here.|
00000020
You can fix this in one of several ways:
You can probably make your editor save "UNIX line endings" or similar.
You can run dos2unix or similar on the file after saving it.
You can use sed: sed -e 's/\r//g' or similar.
Your Apache logs should be able to confirm this (if they don't, crank up the logging a bit on your development server).
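If you would rather do the conversion programmatically than with dos2unix or sed, here is a small sketch (plain Python; test.pl is the script name from the question) that strips the carriage returns in place:

# Sketch: remove DOS line endings from the script, dos2unix-style.
path = "test.pl"

with open(path, "rb") as f:
    data = f.read()

with open(path, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))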
Sure, it can.
One environment might have a module installed that the other might not.
Perl might be installed in different locations in the two environments.
The environments might have different versions of Perl.
The environments might have different operating systems.
The permissions might be set up incorrectly in one of the environments.
etc.
But instead of speculating wildly like this, why don't you check the error log for what error you actually got?
No, they are just text files. Of course, it's possible to write unportable programs, trivially by using system() or other similar services which depend on the environment.