How can I tell what Scala version a .class file was compiled with?

How can I tell what Scala version a .class file was compiled with?

I suppose the information is stored in the "pickled" part of the .class file, according to the 2008 "Reflecting Scala" report by Yohann Coppel, written under the supervision of Prof. Martin Odersky:
During the compilation process (represented on fig. 2), the Scala compiler generates two types of data.
The first one is some classic Java bytecode, which can be read and executed by a standard Java virtual machine.
The second one is what is called “Pickled data”, and represents the basic structure of the original source file.
This information is enclosed in a .class file.
The Java bytecode specification allows the compiler to “define and emit class files containing new attributes in the attributes tables of class file structures”. These attributes are silently ignored by JVMs if they do not recognize them.
The Scala compiler generates pickled data for just about any data structure in a Scala program; these structures are called symbols in the pickler context.
Symbols are stored linearly in the format shown in Fig. 3: the tag gives the type of data stored, and the length gives the length of the following data block. The data block can contain several pieces of information, such as the name of a symbol.
ScalaSig = "ScalaSig" Version Symtab
Version = Major_Nat Minor_Nat <====
Symtab = numberOfEntries_Nat {Entry}
The ScalaSig attribute definition.
A more complete definition can be found in the scala.tools.nsc.symtab.PickleFormat source file (now scala.reflect.internal.pickling.PickleFormat).
You can also see how to read the Pickled data in scala.tools.nsc.util.ShowPickled.
The following script (not tested) will display the pickled data:
#!/bin/sh
#
# Shows the pickled scala data in a classfile.
if [ $# = 0 ] ; then
  echo "Usage: $0 [--bare] [-cp classpath] <class*>"
  exit 1
fi
TOOLSDIR=`dirname $0`
CPOF="$TOOLSDIR/cpof"
PACK="$TOOLSDIR/../build/pack/lib"
QUICK="$TOOLSDIR/../build/quick/classes"
STARR="$TOOLSDIR/../lib"
CP=""
if [ -f "${PACK}/scala-library.jar" ] ; then
  CP=`${TOOLSDIR}/packcp`
elif [ -d "${QUICK}/library" ] ; then
  CP=`${TOOLSDIR}/quickcp`
else
  CP=`${TOOLSDIR}/starrcp`
fi
if [ "$1" = "-cp" ] ; then
  shift
  CP="${1}:${CP}"
  shift
fi
java -cp "$CP" scala.tools.nsc.util.ShowPickled "$@"

You can see the Scala Major/Minor version in the class file if you use javap with the verbose option. For example, the following is shown for a file compiled using scala 2.8.0 final:
javap -private -verbose T
Compiled from "SomeTest.scala"
public interface T
SourceFile: "SomeTest.scala"
ScalaSig: length = 0x3
05 00 00
RuntimeVisibleAnnotations: length = 0xB
00 01 00 06 00 01 00 07 73 00 08
minor version: 0
major version: 49
Constant pool:
const #1 = Asciz SourceFile;
const #2 = Asciz SomeTest.scala;
const #3 = Asciz s;
const #4 = Asciz ()Ljava/lang/String;;
const #5 = Asciz ScalaSig;
//etc etc...
while the following is the output of a file compiled using scala 2.7.7:
javap -verbose T2
Compiled from "SomeTest2.scala"
public interface T2
SourceFile: "SomeTest2.scala"
ScalaSig: length = 0x87
04 01 1B 06 08 01 02 FFFFFF84 FFFFFF90 FFFFFF80 FFFFFF91 00 05 02 02 54
32 0A 01 03 01 07 3C 65 6D 70 74 79 3E 03 00 13
02 00 06 10 02 07 0C 0D 01 08 0A 02 09 0A 01 04
6C 61 6E 67 0A 01 0B 01 04 6A 61 76 61 09 02 0D
08 02 06 4F 62 6A 65 63 74 08 05 0F 00 FFFFFF86 00 10
01 01 73 15 01 11 10 02 12 18 0E 02 13 16 0D 01
14 0A 01 15 01 05 73 63 61 6C 61 09 02 17 14 01
06 50 72 65 64 65 66 09 02 19 1A 02 06 53 74 72
69 6E 67 0A 02 17 14
minor version: 0
major version: 49
Constant pool:
const #1 = Asciz SourceFile;
const #2 = Asciz SomeTest2.scala;
//etc etc...
The first two bytes of the ScalaSig attribute data should represent the Scala pickle format Major/Minor version, I believe, which are defined in PickleFormat. The 2.7.7 version of PickleFormat can be found here, and shows that the major/minor version differs from the 2.8.0 one.
I checked the 2.7.1 version of this class as well, but here the Major/Minor version is the same as the 2.7.7 one, so you may not be able to distinguish between minor scala versions by using this method.
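If you want to pull that version out programmatically instead of eyeballing javap output, here is a rough sketch in Python (the scala_sig_version helper is hypothetical, and this is deliberately not a complete class-file parser): it skims the constant pool and the class-level attributes, finds the ScalaSig attribute, and decodes its first two Nats. Judging from the dumps above, it should report 5.0 for a 2.8.0 class (whose ScalaSig attribute is just the 05 00 00 version stub, the pickle itself having moved into the ScalaSignature annotation) and 4.1 for 2.7.x classes.

#!/usr/bin/env python3
# scala_sig_version.py - rough sketch, not a full class-file parser.
# Finds the "ScalaSig" class attribute and prints the pickle-format
# major/minor version stored in its first two Nats.
import struct
import sys

def u2(f): return struct.unpack(">H", f.read(2))[0]
def u4(f): return struct.unpack(">I", f.read(4))[0]

def read_constant_pool(f):
    count = u2(f)
    pool, i = {}, 1
    while i < count:
        tag = f.read(1)[0]
        if tag == 1:                                 # CONSTANT_Utf8
            pool[i] = f.read(u2(f)).decode("utf-8", "replace")
        elif tag in (7, 8, 16, 19, 20):              # 2-byte payloads
            f.read(2)
        elif tag == 15:                              # MethodHandle
            f.read(3)
        elif tag in (3, 4, 9, 10, 11, 12, 17, 18):   # 4-byte payloads
            f.read(4)
        elif tag in (5, 6):                          # Long/Double use two slots
            f.read(8); i += 1
        else:
            raise ValueError("unknown constant pool tag %d" % tag)
        i += 1
    return pool

def skip_attributes(f):
    for _ in range(u2(f)):
        f.read(2)                                    # attribute_name_index
        f.read(u4(f))                                # attribute payload

def skip_members(f):                                 # fields or methods
    for _ in range(u2(f)):
        f.read(6)                                    # access_flags, name, descriptor
        skip_attributes(f)

def read_nat(data, pos):
    # Scala pickle "Nat": 7 bits per byte, high bit set on all but the last byte.
    value = 0
    while True:
        b = data[pos]; pos += 1
        value = (value << 7) | (b & 0x7F)
        if not b & 0x80:
            return value, pos

def scala_sig_version(path):
    with open(path, "rb") as f:
        assert u4(f) == 0xCAFEBABE
        f.read(4)                                    # JVM class-file minor/major
        pool = read_constant_pool(f)
        f.read(6)                                    # access_flags, this_class, super_class
        f.read(2 * u2(f))                            # interfaces
        skip_members(f)                              # fields
        skip_members(f)                              # methods
        for _ in range(u2(f)):                       # class attributes
            name_index = u2(f)
            data = f.read(u4(f))
            if pool.get(name_index) == "ScalaSig":
                major, pos = read_nat(data, 0)
                minor, _ = read_nat(data, pos)
                return major, minor
    return None

if __name__ == "__main__":
    version = scala_sig_version(sys.argv[1])
    print("no ScalaSig attribute found" if version is None
          else "pickle format version %d.%d" % version)

For classes produced by more recent compilers you may find no ScalaSig attribute at all; in that case the ScalaSignature annotation (visible in the RuntimeVisibleAnnotations section of the javap dump) is the thing to look at instead.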

Most probably you could parse the .class file and read the version from an attribute attached by the Scala compiler to the class file.
To learn more about the existence of such an attribute you might start at the sources of the Scala compiler ( http://lampsvn.epfl.ch/trac/scala/browser/scala/trunk/src/compiler/scala/tools/nsc/backend/jvm/GenJVM.scala ).
To learn how to parse a .class file you might read the spec ( http://jcp.org/aboutJava/communityprocess/final/jsr202/index.html ).
The example code I posted here ( Java Illegal class modifiers Exception code 0x209 ) might help with the implementation, too.

FWIW, here's a version of VonC's script that sets the classpath to scala-library.jar and scala-compiler.jar.
Tested under Cygwin and Linux, with Scala 2.11.8 and 2.12.1; it should work under OS X.
It doesn't seem to like the --bare argument, however.
(Requires scala to be on your PATH.)
#!/bin/bash
# Shows the pickled scala data in a classfile.
if [ $# == 0 ] ; then
  echo "Usage: $0 [--bare] [-cp classpath] <class*>"
  exit 1
fi
unset JAVA_TOOL_OPTIONS
[ -z "$SCALA_HOME" ] && SCALA_HOME=$(which scala | sed -e 's#/bin/scala##')
export OSTYPE=$(uname | tr '[A-Z]' '[a-z]' | sed -e 's#[_0-9].*##')
case $OSTYPE in
  cygwin) SEP=";" ;;
  *)      SEP=":" ;;
esac
CP="${SCALA_HOME}/lib/scala-library.jar${SEP}${SCALA_HOME}/lib/scala-compiler.jar${SEP}${SCALA_HOME}/lib/scala-reflect.jar"
if [ "$1" == "-cp" ] ; then
  shift
  CP="${1}${SEP}${CP}"
  shift
fi
java -cp "$CP" scala.tools.nsc.util.ShowPickled $*

Related

How to retrieve details of the console port used by BIOS using efivars?

As part of installing Linux, I would like to set the console device properties (for example, console=ttyS0,115200n1) via the kernel cmdline on an Intel-based platform.
There is no VGA console, only serial consoles via a COM interface.
On these systems the BIOS already has the required settings to interact using the appropriate serial port.
I see that EFI has the variables ConIn, ConOut, ConErr, which I can see under /sys/firmware/efi, but I am unable to decode their contents.
Is it possible to identify which COM port is being used by the BIOS by examining the EFI variables?
Example of the EFI variable on my box:
root@linux:~# efivar -p -n 8be4df61-93ca-11d2-aa0d-00e098032b8c-ConOut
GUID: 8be4df61-93ca-11d2-aa0d-00e098032b8c
Name: "ConOut"
Attributes:
Non-Volatile
Boot Service Access
Runtime Service Access
Value:
00000000 02 01 0c 00 d0 41 03 0a 00 00 00 00 01 01 06 00 |.....A..........|
00000010 00 1a 03 0e 13 00 00 00 00 00 00 c2 01 00 00 00 |................|
00000020 00 00 08 01 01 03 0a 18 00 9d 9a 49 37 2f 54 89 |...........I7/T.|
00000030 4c a0 26 35 da 14 20 94 e4 01 00 00 00 03 0a 14 |L.&5.. .........|
00000040 00 53 47 c1 e0 be f9 d2 11 9a 0c 00 90 27 3f c1 |.SG..........'?.|
00000050 4d 7f 01 04 00 02 01 0c 00 d0 41 03 0a 00 00 00 |M.........A.....|
00000060 00 01 01 06 00 00 1f 02 01 0c 00 d0 41 01 05 00 |............A...|
00000070 00 00 00 03 0e 13 00 00 00 00 00 00 c2 01 00 00 |................|
00000080 00 00 00 08 01 01 03 0a 18 00 9d 9a 49 37 2f 54 |............I7/T|
00000090 89 4c a0 26 35 da 14 20 94 e4 01 00 00 00 03 0a |.L.&5.. ........|
000000a0 14 00 53 47 c1 e0 be f9 d2 11 9a 0c 00 90 27 3f |..SG..........'?|
000000b0 c1 4d 7f ff 04 00 |.M.... |
root@linux:~#
The contents of the ConOut variable are described in the UEFI specification - current version (2.8B):
3.3 - globally defined variables:
| Name | Attribute | Description |
|---------|------------|------------------------------------------------|
| ConOut | NV, BS, RT | The device path of the default output console. |
For information about device paths, we have:
10 - Protocols — Device Path Protocol:
Apart from the initial description of device paths, table 44 shows you the Generic Device Path Node structure, from which we can start decoding the contents of the variable.
The type of the first node is 0x02, telling us this node describes an ACPI device path, 0x000c bytes in length. Now jump down to 10.3.3 - ACPI Device Path and table 52, which tells us 1) that this is the right table (subtype 0x01) and 2) that the default ConOut has a _HID of 0x0a0341d0 and a _UID of 0.
The next node has a type of 0x01 - a Hardware Device Path, described further in 10.3.2, in this case table 46 (SubType is 0x01) for a PCI device path.
The next node describes a Messaging Device Path of type UART and so on...
Still, this only tells you what UEFI considers to be its default console; SPCR is what an operating system is supposed to be looking at for serial consoles. Unfortunately, on x86 the Linux kernel handily ignores SPCR apart from earlycon. I guess this is what you're trying to work around. It might be good to start some discussion on the kernel development lists about whether to fix that and have x86 work like ARM64.
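If you would rather script the decoding than read the hex by hand, here is a rough Python sketch (the efivarfs path and the 4-byte attributes word it strips are assumptions about how the kernel exposes the variable; it only decodes the generic node header plus the ACPI _HID/_UID and hex-dumps everything else):

#!/usr/bin/env python3
# conout_walk.py - rough sketch: walk the device path nodes stored in ConOut.
# Generic node header (UEFI spec 10.2): type (1 byte), subtype (1 byte),
# length (2 bytes, little-endian), followed by length-4 bytes of node data.
import struct

# efivarfs prepends a 4-byte attributes word to the variable payload.
PATH = "/sys/firmware/efi/efivars/ConOut-8be4df61-93ca-11d2-aa0d-00e098032b8c"

TYPES = {0x01: "Hardware", 0x02: "ACPI", 0x03: "Messaging",
         0x04: "Media", 0x05: "BBS", 0x7F: "End"}

def eisa_id(hid):
    # Decode a 32-bit EISA-style _HID, e.g. 0x0a0341d0 -> "PNP0A03".
    vendor = hid & 0xFFFF
    letters = "".join(chr(((vendor >> s) & 0x1F) + 0x40) for s in (10, 5, 0))
    return "%s%04X" % (letters, hid >> 16)

def walk(data):
    off = 0
    while off + 4 <= len(data):
        ntype, subtype, length = struct.unpack_from("<BBH", data, off)
        if length < 4:
            break                                   # malformed node, stop
        body = data[off + 4 : off + length]
        desc = body.hex()
        if ntype == 0x02 and subtype == 0x01:       # ACPI device path node
            hid, uid = struct.unpack_from("<II", body)
            desc = "_HID=%s _UID=%d" % (eisa_id(hid), uid)
        print("%-9s subtype=0x%02x len=%-3d %s"
              % (TYPES.get(ntype, hex(ntype)), subtype, length, desc))
        if ntype == 0x7F and subtype == 0xFF:       # end of the entire path
            break
        off += length

if __name__ == "__main__":
    walk(open(PATH, "rb").read()[4:])               # skip the attributes word

Run against the dump above, it should show the ACPI root node as PNP0A03, then the PCI node, then the UART node (Messaging, subtype 0x0e) for each instance of the path.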
In my case, since I know that the console port is a serial I/O port,
I could get the details as follows:
a. Get hold of the /sys/firmware/acpi/tables/SPCR table.
b. Read the Address at offset 44-52. Actually just two of those bytes suffice.
Reference:
a. https://learn.microsoft.com/en-us/windows-hardware/drivers/serports/serial-port-console-redirection-table states that
Base Address (byte length 12, byte offset 40): The base address of the Serial Port register set, described using the ACPI Generic Address Structure. 0 = console redirection disabled.
Note:
COM1 (0x3F8) would be:
Integer Form: 0x 01 08 00 00 00000000000003F8
Viewed in Memory: 0x01080000F803000000000000
COM2 (0x2F8) would be:
Integer Form: 0x 01 08 00 00 00000000000002F8
Viewed in Memory: 0x01080000F802000000000000
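A small Python sketch of step (b), using the offsets from the Microsoft layout quoted above (reading /sys/firmware/acpi/tables/SPCR needs root, and the table must of course exist):

#!/usr/bin/env python3
# spcr_base.py - rough sketch: pull the console UART base address out of SPCR.
# The Generic Address Structure starts at byte offset 40; its Address field
# is the 8 little-endian bytes starting at offset 44.
import struct

spcr = open("/sys/firmware/acpi/tables/SPCR", "rb").read()

space_id = spcr[40]                               # 0 = memory-mapped, 1 = system I/O
address = struct.unpack_from("<Q", spcr, 44)[0]   # 8-byte little-endian address
print("address space %#x, base address %#x" % (space_id, address))
# e.g. address space 0x1 with base 0x3f8 would be the legacy COM1 I/O port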

How do I loop over the search results for a byte string and offset the resultant pointer (in WinDbg)?

I'm attempting to search for an arbitrarily long byte string in WinDbg and print out the address if an integer in the vicinity meets some criteria.
Pseudo-register $t0 contains the starting address I want to search.
Here's something that, based on the Windows docs, maybe could work (though it clearly doesn't).
.foreach (place { s -[1] #$t0 L?30000 00 00 00 00 00 20 00 00 }) { .if ( (place +0x8) <= 0x1388) { .printf "0x%x\n", place } }
Search
First, the search command doesn't quite work correctly. I only want the address of the match (not the data).
s -[1] #$t0 L?30000 00 00 00 00 00 20 00 00
The docs say that the 1 flag will only return the address. When I issue that command, WinDbg replies
^ Syntax error in 's -1 #$t0 L?30000 00 00 00 00 00 20 00 00 '
If I leave out the -1, it finds two matches.
What am I doing wrong here?
Condition
I don't think the condition is behaving the way I want. I want to look at the third dword starting at place, i.e. place+8, and verify that it's smaller than 5000 (decimal). The .if inside the .foreach isn't printing a meaningful value for place (i.e. the address returned from the search). I think it's dereferencing place first and comparing the value of that integer to 5000. How do I look at the value of, say, *(int*)(place+8)?
Documentation?
The docs are not helping me very much. They only have sparse examples, none of which correspond to what I need.
Is there better documentation somewhere besides MS's Hardware Dev Center?
You can start writing JavaScript for a more legible way of scripting.
The old way:
0:000> s -b vect l?0x1000 4d
00007ff7`8aaa0000 4d 5a 90 00 03 00 00 00-04 00 00 00 ff ff 00 00 MZ..............
00007ff7`8aaa00d4 4d 90 80 d2 df f9 82 d3-4d 90 80 d2 52 69 63 68 M.......M...Rich
00007ff7`8aaa00dc 4d 90 80 d2 52 69 63 68-4c 90 80 d2 00 00 00 00 M...RichL.......
0:000> s -[1]b vect l?0x1000 4d
0x00007ff7`8aaa0000
0x00007ff7`8aaa00d4
0x00007ff7`8aaa00dc
using javascript
function search(addr,len)
{
    // collect the address of every 0x4d ('M') byte in [addr, addr+len)
    var index = []
    var mem = host.memory.readMemoryValues(addr,len)
    for(var i = 0; i < len; i++)
    {
        if(mem[i] == 0x4d)
        {
            index.push(addr+i)
        }
    }
    return index
}
When executed it will return addresses, which you can manipulate further:
0:000> dx -r1 #$scriptContents.search(0x00007ff78aaa0000,1000)
#$scriptContents.search(0x00007ff78aaa0000,1000) : 140701160046592,140701160046804,140701160046812
length : 0x3
[0x0] : 0x7ff78aaa0000
[0x1] : 0x7ff78aaa00d4
[0x2] : 0x7ff78aaa00dc
Improving the script a little to find something based on the first result: we will try to find the index of the "Rich" string that follows the character 'M'.
Modified script:
function search(addr,len)
{
    var index = []     // every 'M' (0x4d) hit
    var Rich = []      // hits that are followed by "Rich"
    var result = []
    var mem = host.memory.readMemoryValues(addr,len)
    for(var i = 0; i < len; i++)
    {
        if(mem[i] == 0x4d)
        {
            index.push(addr+i)
            // read the dword 4 bytes past the match
            var temp = host.memory.readMemoryValues(addr+i+4,1,4)
            host.diagnostics.debugLog(temp +"\t")
            if(temp == 0x68636952)     // "Rich" as a little-endian dword
            {
                Rich.push(addr+i)
            }
        }
    }
    result.push(index)
    result.push(Rich)
    return result
}
Result: only the third occurrence of the char "M" is followed by the "Rich" string.
0:000> dx -r2 #$scriptContents.search(0x00007ff78aaa0000,1000)
3 3548576223 1751345490 #$scriptContents.search(0x00007ff78aaa0000,1000) : 140701160046592,140701160046804,140701160046812,140701160046812
length : 0x2
[0x0] : 140701160046592,140701160046804,140701160046812
length : 0x3
[0x0] : 0x7ff78aaa0000
[0x1] : 0x7ff78aaa00d4
[0x2] : 0x7ff78aaa00dc
[0x1] : 140701160046812
length : 0x1
[0x0] : 0x7ff78aaa00dc
0:000> s -b vect l?0x1000 4d
00007ff7`8aaa0000 4d 5a 90 00 03 00 00 00-04 00 00 00 ff ff 00 00 MZ..............
00007ff7`8aaa00d4 4d 90 80 d2 df f9 82 d3-4d 90 80 d2 52 69 63 68 M.......M...Rich
00007ff7`8aaa00dc 4d 90 80 d2 52 69 63 68-4c 90 80 d2 00 00 00 00 M...RichL.......
Load the extension jsprovider.dll: .load jsprovider
Write a script, say foo.js.
Load the script: .scriptload ...\path\foo.js
Execute any function inside the js you wrote with dx #$scriptContents.myfunc(myargs)
See below, using cdb just for ease of copy-paste; windbg works just the same:
F:\>type mojo.js
function hola_mojo ()
{
host.diagnostics.debugLog("hola mojo this is javascript \n")
}
F:\>cdb -c ".load jsprovider;.scriptload .\mojo.js;dx #$scriptContents.hola_mojo();q" cdb | f:\usr\bin\grep.exe -A 6 -i reading
0:000> cdb: Reading initial command '.load jsprovider;.scriptload .\mojo.js;dx #$scriptContents.hola_mojo();q'
JavaScript script successfully loaded from 'F:\mojo.js'
hola mojo this is javascript
#$scriptContents.hola_mojo()
quit:
If I read this part of the documentation
s [-[[Flags]Type]] Range Pattern
correctly, you cannot leave out Type when specifying flags. That's because the flags are inside two square brackets. Otherwise it would have been noted as s [-[Flags][Type]] Range Pattern.
Considering this, the example works:
0:000> .dvalloc 2000
Allocated 2000 bytes starting at 00ba0000
0:000> eb 00ba0000 01 02 03 04 05 06 07 08 09
0:000> eb 00ba1000 01 02 03 04 05 06 07 08 09
0:000> s -[1]b 00ba0000 L?2000 01 02 03 04 05 06 07 08
0x00ba0000
0x00ba1000
Also note that you'll have a hidden bug for the use of place: it should be ${place}. By default, that will work with the address (line break for readability on SO):
0:000> .foreach (place {s -[1]b 00ba0000 L?2000 01 02 03 04 05 06 07 08 })
{ .if ( (${place} +0x8) < 0xba1000) { .printf "0x%x\n", ${place} } }
0xba0000
In order to read a DWord from that address, use the dwo() MASM operator (line break for readability on SO):
0:000> .foreach (place {s -[1]b 00ba0000 L?2000 01 02 03 04 05 06 07 08 })
{ .if ( (dwo(${place} +0x8)) < 0xba1000)
{ .printf "0x%x = 0x%x\n", ${place}, dwo(${place}+8) } }
0xba0000 = 0x9
0xba1000 = 0x9

APDU: "Conditions of use not satisfied" (69 85) while calculating a signature

With a Gemalto smart card (IAS ECC), I would like to calculate a signature using the private key stored on the smart card. For this, I use these APDU commands:
// Verify PIN
00 20 00 01 04 31 32 33 34
-> 90 00
// Create a context for security operation
00 22 41 B6 06 84 01 84 80 01 12
-> 90 00
// Set the hash of the document
00 2A 90 A0 14 HASH OF DOCUMENT
-> 69 85
// Calculating the signature
00 2A 9E 9A 80
-> 69 85
My problem is the following: the last two commands return the error code "69 85", meaning "Conditions of use not satisfied".
I have already tried several solutions, but I always obtain the same error. How can I resolve it? What can this code mean?
After some tests, I discovered something interesting. When I replace CLA "00" with "10", the smart card returns a different response:
// Create a context for security operation
00 22 41 B6 06 84 01 84 80 01 12
// Verify PIN
00 20 00 01 04 31 32 33 34
// Calculating the signature (I replace "00" by "10")
10 2A 9E 9A 23 30 21 30 09 06 05 2B 0E 03 02 1A 05 00 04 14 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 12 13 14 15
I don't know if this is the right solution: the smart card returns "90 00", but then it should also return the content of my signature!
Thank you for your help!
Best regards
You are getting SW 6985 for
// Set the hash of the document
00 2A 90 A0 14 HASH OF DOCUMENT
-> 69 85
because you have not set the correct context in the current security environment.
Let me explain this below.
First you performed the VERIFY PIN command, which was successful:
// Verify PIN
00 20 00 01 04 31 32 33 34
-> 90 00
Then you performed the MSE SET command, where you set the security context. For this you have to understand how the SE works (please refer to section 3.5 of IAS ECC v1.01).
At the time of personalisation, the personaliser agent creates SDOs (Secure Data Objects) inside the card. The references to these SDOs are mentioned in the SE (Security Environment) in the form of CRTs (Control Reference Templates).
// Create a context for security operation
00 22 41 B6 06 84 01 84 80 01 12
-> 90 00
Generally speaking, the MSE SET command will always return SW 9000 even if the SDO reference is wrong, since it only returns SW 6A80 when the template is wrong, not when the reference is wrong. (The SDO reference is passed in tag 84.)
After that you performed the PSO HASH command
// Set the hash of the document
00 2A 90 A0 14 HASH OF DOCUMENT
-> 69 85
where the card returns SW 6985 (Conditions of use not satisfied). This indicates that the algorithm and SDO reference used for calculating the hash may be wrong, which is probably the case here, since the SDO reference sent in the MSE SET command is not available.
Detecting an error coming from MSE SET can be tricky, since it returns SW 9000.
For this type of situation you have to check the personalisation file carefully and match the MSE SET command against the SDO references and supported algorithms.
It may be useful to put the default context (e.g., cryptographic algorithms or security operations) into the current SE in order to have fewer exchanges of MSE SET commands.
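If it helps to iterate on this, here is a rough sketch using the Python pyscard library to replay the same sequence and print each status word. The reader choice, the PIN 31 32 33 34 and the 84 / algorithm 12 references are simply the values from the question, not known-good ones for your card profile, and the 20-byte digest is a placeholder:

#!/usr/bin/env python3
# sign_test.py - rough sketch: replay the APDU sequence and show the SWs.
from smartcard.System import readers
from smartcard.util import toHexString

def send(conn, apdu, label):
    data, sw1, sw2 = conn.transmit(apdu)
    print("%-16s SW=%02X %02X data=%s" % (label, sw1, sw2, toHexString(data)))
    return data, sw1, sw2

conn = readers()[0].createConnection()        # first PC/SC reader found
conn.connect()

send(conn, [0x00, 0x20, 0x00, 0x01, 0x04, 0x31, 0x32, 0x33, 0x34], "VERIFY PIN")
send(conn, [0x00, 0x22, 0x41, 0xB6, 0x06, 0x84, 0x01, 0x84, 0x80, 0x01, 0x12], "MSE SET")
digest = list(range(0x14))                    # placeholder 20-byte hash
send(conn, [0x00, 0x2A, 0x90, 0xA0, 0x14] + digest, "PSO HASH")
send(conn, [0x00, 0x2A, 0x9E, 0x9A, 0x80], "PSO CDS")

Tweaking the MSE SET references in one place and re-running makes it much easier to see which combination the personalisation actually accepts.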

Sending a trap with Perl's Net::SNMP

I'm trying to send a trap as part of a larger Perl script. I've copied the trapsending code to another file, and am running it by itself. The code seems to think the trap sends successfully, yet I'm not seeing the trap on either machine that I have a trap listener running on.
Here's the code:
#! /usr/local/bin/perl
use strict;
use warnings;
use Net::SNMP;
#messy hardcoding
my $snmp_target = '192.168.129.50';
#my $snmp_target = '10.200.6.105'; # Server running trap listener
my $enterprise = '1.3.6.1.4.1.27002.1';
my ($sess, $err) = Net::SNMP->session(
-hostname => $snmp_target,
-version => 1, #trap() requires v1
);
if (!defined $sess) {
print "Error connecting to target ". $snmp_target . ": ". $err;
next;
}
my @vars = qw();
my $varcounter = 1;
push (@vars, $enterprise . '.' . $varcounter);
push (@vars, OCTET_STRING);
push (@vars, "Test string");
my $result = $sess->trap(
-varbindlist => \@vars,
-enterprise => $enterprise,
-specifictrap => 1,
);
if (! $result)
{
print "An error occurred sending the trap: " . $sess->error();
}
EDIT: Added $sess->debug(255) call, here's the output:
debug: [440] Net::SNMP::Dispatcher::_event_insert(): created new head and tail [ARRAY(0x1af1fea8)]
debug: [687] Net::SNMP::Message::send(): transport address 192.168.129.50:161
debug: [2058] Net::SNMP::Message::_buffer_dump(): 70 bytes
[0000] 30 44 02 01 00 04 06 70 75 62 6C 69 63 A4 37 06 0D.....public.7.
[0016] 09 2B 06 01 04 01 81 D2 7A 01 40 04 C0 A8 81 85 .+......z.#.....
[0032] 02 01 06 02 01 01 43 01 00 30 1B 30 19 06 0A 2B ......C..0.0...+
[0048] 06 01 04 01 81 D2 7A 01 01 04 0B 54 65 73 74 20 ......z....Test
[0064] 73 74 72 69 6E 67 string
debug: [517] Net::SNMP::Dispatcher::_event_delete(): deleted [ARRAY(0x1af1fea8)], list is now empty
EDIT: Can anyone running a trap listener try this code on their machine and let me know if it works?
EDIT: Tried it from my MBP. Same result. Then I noticed that the debug info says it is sending to port 161. Forcing the -port => 162 parameter makes it work. That leaves me with a couple of questions:
Why does the trap sender default to 161?
I get this error when I run with debug on. What does it mean?
error: [97] Net::SNMP::Transport::IPv4::UDP::agent_addr(): Failed to disconnect: Address family not supported by protocol family
Fixed by changing the port setting from the default 161 to 162, i.e. passing -port => 162 to Net::SNMP->session().

Insert shell code

I got a small question.
Say I have the following code inside a console application :
printf("Enter name: ");
scanf("%s", &name);
I would like to exploit this vulnerability and enter the following shellcode (it calls MessageBoxA):
6A 00 68 04 21 2F 01 68 0C 21 2F 01 6A 00 FF 15 B0 20 2F 01
How can I enter my shellcode (hex values) through the console?
If I enter the input as is, it treats the numbers as characters and not as hex values.
Thanks a lot.
You could use a file with the desired content as stdin, or use the echo command.
Suppose your shellcode is AA BB CC DD (obviously this is not a valid shellcode):
echo -e "\xAA\xBB\xCC\xDD" | prog
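Note that scanf("%s") stops at whitespace, so payload bytes such as 0x20, 0x09 or 0x0A will never make it into the buffer, and echo -e also appends a trailing newline. A small Python helper avoids the newline and makes the bytes explicit (the byte values below are just the MessageBoxA stub from the question; whitespace or NUL bytes in a real payload would still have to be encoded away):

#!/usr/bin/env python3
# payload.py - write the raw shellcode bytes to stdout, with no trailing newline.
import sys

payload = bytes([
    0x6A, 0x00, 0x68, 0x04, 0x21, 0x2F, 0x01,
    0x68, 0x0C, 0x21, 0x2F, 0x01, 0x6A, 0x00,
    0xFF, 0x15, 0xB0, 0x20, 0x2F, 0x01,
])
sys.stdout.buffer.write(payload)

Then either pipe it directly (python payload.py | prog) or redirect it to a file once and feed that file to the program as stdin.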