This white paper describes a code protection technology for Linux applications based on the so-called "nanomite" approach, previously applied on Windows systems.
It is one of the modern antidebugging methods and can also be effectively applied for process antidumping.
Apriorit Code Protection for Linux is provided as a commercial SDK with various types of licensing.
The project was written for 32-bit Linux applications, but the principles can easily be implemented for other operating systems, so further development is planned.
First, we will take a look at creating a custom debugger for Linux; after that, we will move on to the implementation of nanomites. Binutils and Perl are used to compile the project.
We apply the combination of two techniques: Nanomites and Debug Blocker.
Nanomites are code segments containing some key application logic, marked with specific markers in the source files. The protector cuts such segments out of the protected program for packing. When unpacking, they are obfuscated, written to allocated memory, and replaced with jumps in the original code. A table of conditional and unconditional jumps is built; it contains not only nanomite jumps but also some non-existent "trash" ones. Such "completeness" is a serious obstacle to recovering this table.
Debug Blocker implements protection by a parent process. The protected program is started as a child process, and the protector (the parent process) attaches to it as a debugger. Thus, a third party can debug only the parent process. Combined with the nanomite technology, Debug Blocker creates reliable protection for an application, making its debugging and reversing very complicated and time-consuming.
Read more about the nanomite technology in our white paper Nanomite and Debug Blocker Technologies: Scheme, Pros, and Cons.
Both techniques were successfully used in commercial Windows protectors. Apriorit Code Protection is the first product to implement them for Linux application protection.
Apriorit Code Protection includes two main components:
We also provide Nanomites Demo: a demo application protected by nanomites.
There is also a script collection for adding nanomites to an application and for creating nanomite tables.
The application is compiled with the -S key to create an assembler listing;
The assembler listing is analyzed with a Perl script. All jump and call instructions (e.g., jmp, jz, jne, call, etc.) are found and replaced with instructionOffsetLabel(N): int 3;
After that, the user application is compiled from the modified assembler listings;
With the help of a Perl script, the compiled application is parsed and the table of nanomites is built.
Our debugger is based on the ptrace (process trace) system call, which exists in several Unix-like systems (including Linux, FreeBSD, and Mac OS X). It allows tracing and debugging a selected process. We can say that ptrace provides full control over a process: we may change the application execution flow and display or change values in its memory or register state. It should be mentioned that ptrace grants no additional permissions: possible actions are limited by the permissions of the traced process. Moreover, when a program with the setuid bit is traced, this bit doesn't work, as the privileges are not escalated.
After the demo application is processed with the scripts, it is no longer independent: if it is started without a debugger, a segmentation fault occurs at once. From now on, the debugger starts the demo application. For this purpose, a child process is created in the debugger, and the parent process attaches to it. All debugging events from the child process are processed in a loop. This includes all jump events; the parent process analyzes the nanomite table and the flag table to perform the correct action.
Armadillo (also known as SoftwarePassport) is a commercial protector developed for Windows application protection. It introduced the nanomite approach and also uses the Debug Blocker technology (protection by a parent process).
In Armadillo, the binary code is modified. That's why, when a 2-5 byte long jump instruction is replaced with the shorter 1-byte int 3 (0xCC) instruction, some free space remains. Correspondingly, to restore a nanomite, the original jump instruction just needs to be written over the int 3.
In our approach, we change the code at the source level. That's why a nanomite is exactly 1 byte long. Correspondingly, we cannot restore it by writing the original instruction over it, and we cannot extend the code in place of the nanomite, as all relative jumps would be broken. But there is a way to restore our nanomites, for example the following.
A hacker can create an additional section in the executable file, then find a nanomite and obtain its jump instruction and jump address.
Then the restoration goes as follows:
Such a solution is complex to implement. First, a disassembler engine is required for automation; second, the moved instructions may themselves contain relative jumps, which will require correction.