How can I import .c (C language) and .pl (Perl) exploit modules into the Metasploit framework?
Metasploit generally only accepts .rb (Ruby) modules, doesn't it?
Can anyone point me to tutorials on importing such modules? I have read about Immunity Debugger, which is used to develop exploits, but I don't understand how to use it to convert them.
I just want to import the shellcode below into the Metasploit framework. It is written in C, so is there any way to import this exploit into Metasploit?
http://www.exploit-db.com/exploits/1/
There is no software or tool that converts an exploit from C/C++ (or another language) into a Metasploit module, but it is easy to do by hand once you learn some Ruby essentials. Take note of the important values in the original exploit, such as the return address, the value that overwrites EIP, and the buffer size, and put those values into Metasploit's exploit module skeleton; there is no need to include the payload, since Metasploit supplies that itself.
So, I have a project where I need to program a real-time system on the micro:bit using Ada: https://blog.adacore.com/ada-on-the-microbit
I've come across a problem: when using the arm-elf toolchain and compiler I seem to lose access to all of the standard Ada libraries. The only one I can use is Ada.Text_IO; the IDE can't find any of the others.
I want to debug my code by printing the data I'm receiving from the accelerometer, but that data is a number, and Ada.Text_IO only works with strings, so I tried Ada.Integer_Text_IO, which was not found.
If I switch the project settings to the native Ada compiler, I can compile and build my code (which means the code itself is correct), but then I'm missing the button to flash it to the micro:bit.
Well, the runtime provided for the micro:bit is a ZFP, which stands for Zero Footprint runtime.
So you shouldn't expect the whole standard library to be implemented... in fact, you should expect almost nothing to be there :)
In fact, you only have what exists in the Ada drivers library.
Moreover, what would I/O even mean on such a microcontroller? Where do you expect the output to go?
If you want to output something, take a look at this example and use the 'Image attribute of your number (e.g. Integer'Image (Value)) to convert it to a string.
Let's say I have a huge library in C++ (with tons of dependencies; it needs about 3 hours for a full build under GCC). I want to build upon that library, but I don't want to do so in C++ but rather in a more productive language. How can I actually bridge or wrap that external library so I can access it from another language and program on top of it?
Languages considered:
Swift
Go
What I found is that both languages provide automatic bridging or wrapping for C libraries and code (I don't actually know what the difference between wrapping and bridging is). So if I have some C code, I can just drop it into the same Swift or Go project and use it with a simple import.
This doesn't work for C++ code in either language, however. So I googled how to transform C++ libraries into C code or generate wrappers automatically. I found the following:
swig.org - auto wrapper for C++ libs
Comeau C++ compiler - automatically translates C++ into C code
LLVM - should be able to take any input and transform it to any output that LLVM is capable of.
Questions:
1. Is it even realistic or manageable to build on top of such a huge library in another language like Swift or Go using automatic wrapping or bridging?
2. Which of the three listed libraries / programs / frameworks works best for the C++ -> C step (since Swift and Go both provide automatic C wrapping)?
3. Are there better alternatives to what I have considered so far?
4. Would it be better to just "stick with C++", since using any other tools for the wrapping / bridging process would be far too much work to be outweighed by the benefit of a more productive language like Swift / Go?
Thanks:)
Disclaimer: it is also possible to wrap a C++ library in C manually, but that would take an unbearable amount of work for a library this large.
Q1: Is it realistic?
Not realistic: interop with any large, complicated C++ codebase is going to get too complicated. Automatic tools are likely to fail, and doing it manually is too much work.
Q2: What's best?
I don't know and given A1 it does not seem to matter.
Q3: Are there alternatives?
Q4: Is sticking with C++ the best option?
If you want to take advantage of existing C++ code from another language, regardless of which language that is, the best option in complex scenarios is a hybrid approach.
Most languages provide interop with C rather than C++, because C++ has no standard naming convention (name mangling). In other words, just about every language can call plain C functions, but C++ is frequently not supported.
Since your library is complex, the best solution would be based on the Facade pattern. Create a new C library and implement application-specific logic in it that uses the C++ library. Try to keep this library as thin as possible. The goal is not to put all the business logic there, but to provide C functions that hold on to C++ objects and call C++ functions. The Go-level code would then call this C library to use the C++ library underneath. This differs from the approach in Q1: there you attempt to have one interop call per C++ function or object method, whereas with a facade you implement only the C++ usage scenarios that are unique to your application.
With a facade you reduce the scope of the interop work, because you target only your application's scenarios. At the same time you keep the C++ complexity away from the Go level.
For example, suppose you need to read a temperature sensor using the C++ library.
In C++ you'd have to do:
open file
read stream until you find SLIP terminator
read one "record"
close file
With a facade you create a single C function, say readTemperature(deviceFileName), and that one function performs all four steps at once.
That's a made-up example, just to illustrate the point.
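Here is a minimal C++ sketch of that facade function, assuming a hypothetical library header temperature_lib.hpp with Reader and Record types (all names are invented for illustration):

// facade.cpp - one C-callable function performs the whole four-step scenario
// inside C++, so the calling language (Go, Swift, ...) only sees a plain C signature.
#include "temperature_lib.hpp"   // the existing C++ library (assumed)

extern "C" double readTemperature(const char *deviceFileName) {
    templib::Reader reader(deviceFileName);         // open the device file
    templib::Record record = reader.readRecord();   // read the stream up to the SLIP
                                                    //   terminator and parse one record
    return record.celsius();                        // file is closed by Reader's destructor
}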
With a facade you might also want to hide the original C++ objects entirely, at which point it becomes a small layer of its own. The goal here is to stay focused and balance your application's immediate needs against how much generality the layer should offer.
Interestingly enough, the facade approach is also a way to improve interop performance. In just about every language, an interop call is more expensive than a normal call, because arguments have to be marshalled out of the language's runtime environment and the runtime has to protect itself. Lots of interop calls slow an application down (we are talking about millions of calls here). For example, combining 10 interop calls into 1 improves performance simply because the number of interop operations is reduced.
I was successful in wrapping a large (although perhaps not "huge") C++ library (hundreds of header files) in Swift using a relatively simple process. You link your project directly against the library. The only things you have to wrap are the new functions you write (to be invoked from Swift) that actually use the library; these go in the C++ wrapper file. The verbose stuff can be left in the wrapper file, mostly without any modification. There is a simple little tutorial which helped me: https://www.swiftprogrammer.info/swift_call_cpp.html
(FYI, there is one step he omitted: Set your library search paths in Build Settings => Search Paths => Library Search Paths (both Debug and Release) )
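For orientation, here is a sketch of what such a C++ wrapper file might look like, assuming a hypothetical library class Engine declared in Engine.hpp (the names are invented; the tutorial uses its own):

// CppWrapper.cpp - each extern "C" function exposes one C-compatible entry point,
// which Swift can then call through the project's bridging header.
#include "Engine.hpp"   // header of the C++ library being wrapped (assumed)

extern "C" {

// Create an Engine instance and hand it to Swift as an opaque pointer.
void *engine_create(void) {
    return new Engine();
}

// Forward a call from Swift to a C++ member function.
double engine_compute(void *handle, double input) {
    return static_cast<Engine *>(handle)->compute(input);
}

// Destroy the instance when Swift is done with it.
void engine_destroy(void *handle) {
    delete static_cast<Engine *>(handle);
}

} // extern "C"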
Is there a way to get the functionality of a prefix header in Swift? I don't want to import external libraries in every file where they are used.
No. But you don't need it — there's no cost to import UIKit beyond the time it takes you to type twelve characters. (Or use an Xcode New File template that has them there already.)
That's the TLDR. For the whole story, read on...
In (Obj)C, the old way to make API available for use in a source code file was textual inclusion. The preprocessor would see your #import <Foundation/Foundation.h> directive and copy all the text from that header file (and from any other headers it includes, and the headers they include, and so on) into your source file before passing it off to the compiler. As you might expect, recompiling thousands of lines of system header declarations for each file in a project wasn't so performant.
So, we got precompiled headers some years ago—you'd put your common #imports in one place, and the compilation step for those parts would be done once, with a result that the compiler backend could reuse for each file in your project. But that had its problems, too: there's a maintenance burden to keeping your PCH happy, and it doesn't let you restrict the namespace used in each source file (i.e. if you want one .m file in your project to see only the symbols it needs to use, and not all the other stuff used elsewhere in your project).
And on top of that, textual inclusion has an underlying fragility problem. If you #define something above your #import lines, and that define changes a symbol used in the imported headers, those headers will have compile errors (or fail in more subtle ways, like defining the wrong API). There are conventions to keep that from happening, but conventions aren't enforced — you're always a typo / new team member / bad merge away from everything falling apart.
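As a contrived C++ sketch of that fragility (the same C preprocessor underlies ObjC's #import), note that the whole point of this example is that it no longer compiles:

// fragility_sketch.cpp - contrived illustration of the textual-inclusion problem.
#define count 10    // an innocent-looking project-wide macro...
#include <map>      // ...but std::map declares a member function named count(),
                    // so the pasted-in header text becomes "10(...)" and fails to compile.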
Anyway, textual inclusion wasn't so great, even with precompiled headers, so in Xcode 5 Apple introduced Modules. (Actually, not just Apple. They're in the LLVM/Clang compiler suite, so they're open source.) Modules are based on semantic import, not textual inclusion — that is, a module tells the compiler at an abstract level what API symbols it makes available to your source code, rather than pasting in the text of those symbols' definitions — so they're not fragile, and they're individually precompiled on the back end so building your project stays fast.
Modules are the default for ObjC projects now. (Notice that if you create a new ObjC project, it doesn't include a precompiled header anymore. You can turn modules off, so if you have an old project you might still be using textual inclusion and precompiled headers.) You can find out more about ObjC modules in Session 404 from WWDC 2013.
Why all this business about ObjC? We're talking Swift, right? Well, Swift is based on a lot of the same infrastructure.
Swift uses modules from the start, so it's always based on semantic import. That means there's no build-time performance hit and no fragility. All that Swift import does is tell the compiler what symbols you need (and the linker where to find them when producing your binary executable).
So the only cost to putting the same imports at the top of every file is the typing. And that's a necessary cost — in Swift, the source file is a semantic unit, and there's real meaning to deciding what goes into it. For example, the behaviors of many of the Swift standard library types change if you import Foundation, to enable bridging with Cocoa collection and value types — if there's a part of your app that wants to work strictly with Swift collection and value types, you might not want to import Foundation (or Cocoa or UIKit or something else that includes it).
Update: Furthermore, what you choose to import in a Swift file has real meaning.
For example, how the compiler optimizes generics and static/dynamic dispatch depends on what declarations are visible in a given file, so if you import more than you need to, you may generate slower code. So generally, it's best to import only what you need.
Explicit imports also help with clarity and readability. If imports were project-wide, then when you copy-paste code out of one project and into another you'd see lots of errors in the new location... and it'd be a lot less clear what imports you need to resolve them.
"But I hate putting the same several imports at the top of every file all the time," you say. Let's think about that a little.
Do you really need several? Most modules transitively import their dependencies. You don't need to import Foundation if you're already importing Cocoa (OS X) or UIKit (iOS/tvOS/watchOS). And if you're writing a SpriteKit or SceneKit game, for example, you automatically get UIKit/Cocoa (for whichever platform) and Foundation for free.
Do you really need the same in every file? Sure, you're in a UIKit project so you're using UIKit almost everywhere. But that's just one import, twelve characters at the top. Maybe your project is also using Contacts or Photos or CoreBluetooth or HealthKit... but it probably doesn't need to use all of those in every single type you define. (If it does, your code probably suffers from poor separation of concerns.)
Are you really managing import statements all the time? I don't know about your projects, but in most large projects I've worked on, I'd say at least 90% of the development activity involves editing existing source files, not creating new ones... after starting up work on a project or major feature, very seldom are we (re)defining the set of source files or their dependencies. And there are shortcuts that can help with (among other things) setting up imports, like Xcode file templates.
Create a Objective-C Bridging Header file:
[New File→iOS→Source→Header File]: Bridging-Header.h
Go to this new header and import your external libs:
@import Module1Name;
@import Module2Name;
...
Go to Build Settings, set the path of Objective-C Bridging Header:
[Target→Build Settings→Swift Compiler - Code Generation→Objective-C Bridging Header]: $(SRCROOT)/.../Bridging-Header.h
Then you can use the external libraries in every Swift file without writing any import statements.
References:
Third Party Swift Frameworks
Importing Objective-C into Swift
There is an -enable-bridging-pch feature, but it doesn't seem to work in Xcode 9 :(
I decided to write this answer just to cover the topic fully.
You can create a module that imports your dependencies, and then import only that module.
For example, call it Core. It will contain only a single Swift file with the imports.
Every import should begin with the @_exported attribute.
For example:
@_exported import UIKit
@_exported import Combine
I would like to know which programming languages can be called from MATLAB.
For example, I am quite sure that MATLAB can call C functions, and maybe Java.
I need this for an industrial project, so I need something that works reliably.
For example, I have found some tutorials on calling Python functions from MATLAB, but they don't look like a very good or stable solution to me.
I am not an expert in this field and my knowledge of MATLAB is very limited, so please be patient with me in your answer.
This project is related to machine learning and the software will probably run on a cluster.
EDIT: according to this post, Embedding Python in MATLAB, it seems that there are problems importing numpy from the embedded Python.
The only reason to use python in this environment is the numpy library.
Without that it is almost useless to me.
Do you think I will encounter similar problems using Java or C to call some mathematics libraries?
Most interfaces are listed here:
http://www.mathworks.de/de/help/matlab/external-interfaces.html
For Python I see different solutions:
COM on Windows platforms. This requires registering an application; check whether that is possible and allowed on the cluster.
XML-RPC or SOAP. You may need to use Java classes in MATLAB, but as you already realised, this is very simple. Verify that the cluster has a Java VM available; many clusters run MATLAB without Java.
You can embed Python code in C, which allows you to write MEX functions in C that run the Python code: http://docs.python.org/2/extending/embedding.html
Use the command-line interface for Python.
Besides the documented limitations, I don't see any problem with these solutions. If you are familiar with C or MATLAB, I would choose the second or third option. This lets you write a wrapper to access Python with only a very basic knowledge of MATLAB.
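As a rough illustration of the third option, here is a minimal sketch of embedding the CPython interpreter from C++ (not MATLAB-specific; in a real MEX file the same calls would sit inside mexFunction, and whether numpy imports cleanly still depends on the embedded interpreter's environment):

// embed_python.cpp - minimal sketch of embedding CPython (option 3 above).
// Typical build, flags vary by system and Python version:
//   g++ embed_python.cpp $(python-config --cflags --ldflags)
#include <Python.h>

int main() {
    Py_Initialize();                        // start the embedded interpreter
    PyRun_SimpleString(
        "import numpy as np\n"              // assumes numpy is on the embedded path
        "print(np.linspace(0.0, 1.0, 5))\n");
    Py_Finalize();                          // shut the interpreter down
    return 0;
}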
There are two main methods to achieve this:
Write MEX functions, which are basically C/C++ or Fortran routines that use a MATLAB-specific API. You can then call these functions just like any function written in an M-file. This is described here (a sketch follows after this list).
Call external libraries written in Java, .NET, or C/C++, as well as COM servers. This is described here.
Both methods require a good understanding of what you are doing, although I would argue that writing a MEX function is much harder than referencing an existing library.
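As a sketch of the first method, here is a minimal MEX function in C++; mexFunction and the mx*/mex* calls are the documented MEX API, while the file name and the doubling behaviour are just an example:

// times_two.cpp - minimal MEX sketch. Compile in MATLAB with: mex times_two.cpp
// Then call it from the MATLAB prompt as: y = times_two(3)
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
    // Validate the input coming from MATLAB.
    if (nrhs != 1 || !mxIsDouble(prhs[0]) || mxIsComplex(prhs[0]))
        mexErrMsgIdAndTxt("example:timesTwo:badInput",
                          "One real double input is required.");

    // Read the scalar, do the work, and hand a new scalar back to MATLAB.
    double x = mxGetScalar(prhs[0]);
    plhs[0] = mxCreateDoubleScalar(2.0 * x);
}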
I'm not sure about specific programming languages, but you can call a compiled program (an EXE file) with system('C:\someCompiledProgram.exe').
And if you need the result, you can use:
[status, result] = system('command')
You can read about it here:
http://nf.nci.org.au/facilities/software/Matlab/techdoc/ref/system.html
Since I prefer small files, I typically place a single "public" class per Python module. I name the module with the same name as the class it contains. So for example, the class ToolSet would be defined in ToolSet.py.
Within a package, if another module needs to instantiate an object of class ToolSet, I use:
from ToolSet import ToolSet
...
toolSet = ToolSet()
instead of:
import ToolSet
...
toolSet = ToolSet.ToolSet()
I do this to reduce "stutter" (I prefer to have the stutter at the top of the file rather than within my code).
Is this a correct idiom?
Here is a related question. Within a package, I often have a small number of classes that I would like to expose to the outside world. These I import inside the __init__.py for that package. For example, if ToolSet is in package UI and I want to expose it, I would put the following in UI/__init__.py :
from ToolSet import ToolSet
So that, from an external module I can write
import UI
...
toolSet = UI.ToolSet()
Again, is this pythonic?
To answer your first question: that is the idiom I use, and its use is supported by PEP 8, the Python style guide, which says it's okay to write this:
from subprocess import Popen, PIPE
I like it as it reduces typing and makes sure that things go wrong immediately when the file is run (say you misspelt an import) rather than some time later when a function using the import is called.
E.g. suppose the module Thing doesn't have a Thyng member:
from Thing import Thyng
Goes wrong as soon as you run the .py file, whereas
import Thing
# ...
def fn():
    Thing.Thyng()
Doesn't go wrong until you run fn()
As for your second question, I think that is also good practice. It often comes up when I refactor a single large .py file into a directory with an __init__.py and implementation files. Importing things into the __init__.py keeps the interface the same. It is common practice in the standard libraries too.
Yes. Both are idiomatic Python in my opinion.
I tend to use the from module import name form for some modules in the standard library, such as datetime, but mostly for closely related modules, or names that are frequently used in the module. For example, I typically import ORM classes in this way.
I tend to use the import module form for some standard modules (especially os and os.path) and for names that are not very distinctive (like database.session and cherrypy.session being two different kinds of sessions) and for names that are used infrequently, where mentioning the module name improves readability.
In the end, there are a few rules of thumb (such as import os.path), but which form to use is largely a matter of judgement, taste and experience.
To address whether it is Pythonic, take a look at what is generally considered the definitive answer on the internet: http://effbot.org/zone/import-confusion.htm
Also take a look at ~unutbu's link which discusses this in greater detail.
I use from itertools import chain, ifilter, islice, izip all the time, because it allows me to program as though those were built-in methods, which to my way of thinking they really ought to be.
Once, in a frenzy of misguided correctness, I went through a big block of code and replaced from datetime import datetime with import datetime. This was a good example of Mark Twain's observation that a man who picks up a rat by the tail learns something that can be learned no other way. It certainly set me straight on why it's OK to use from x import y.