
When working with an outsourced project, a project manager (PM) needs to focus on dozens of issues simultaneously. They need to facilitate the project remotely, ensure efficient communication, and keep an eye on project resource use. Headhunting and keeping such an experienced and skilled employee in-house may be challenging and costly, especially for small and medium-sized businesses.

Hiring an outsourced PM as part of the development team is a great alternative to establishing in-house project management. An outsourced PM can improve team management, ensure a transparent development process, reduce your budget, and bring many more valuable benefits.

In this article, we showcase the benefits having a PM on the development side brings to a team and share our approach to project management. We also analyze common concerns about outsourcing project management and explain how we address them at Apriorit.


Outsourcing project management: 6 concerns and ways to address them

6 advantages of outsourcing project management

How do Apriorit PMs work?


Outsourcing project management: 6 concerns and ways to address them

Outsourcing software development has become an integral part of IT projects since it provides companies with numerous benefits: 

  • Access to a wide talent pool and niche expertise
  • Fast augmentation of an in-house development team
  • Cost savings

However, not everyone believes that hiring an outsourced project manager along with the development team can bring as many benefits as an in-house project manager. Usually, customers have the following concerns about outsourcing project management:

6 concerns about hiring an outsourced PM
  1. Lack of operational control. Control over day-to-day project activities is what a customer delegates to an outsourced project manager. Some customers believe this may result in losing their authority over the development process, decisions made on the project, and, ultimately, the project’s outcome.

    How do we mitigate it? PMs at Apriorit do have more control over the development team than the customer does, which allows them to ensure that developers work to the best of their abilities. However, we always keep our customers updated on the project status and discuss critical decisions (development methodology, choice of technology, sprint goals, etc.) with them. Mostly, our PMs do this by producing daily or weekly status reports, facilitating regular online standups, and writing follow-up emails.

  2. Poor quality of the final product. Each company has its own approach and quality standards for software development. When outsourcing a project, it can be challenging to make sure a contractor is working to those standards and that the final product will meet all of the customer’s needs.

    How do we mitigate it? First, at Apriorit, we have strict internal coding standards that help us build software of the highest quality. Second, during the project discovery phase, our business analysts and PMs elicit customer requirements to define the product’s goals and functionality. Thus, we ensure that the final product will meet all of the customer’s needs, expectations, and standards.

  3. Lack of data security and confidentiality. A customer may feel that providing confidential project information to a third-party PM is insecure and that such data can no longer be considered confidential.

    How do we mitigate it? While working on projects, our PMs and developers follow security best practices: they sign non-disclosure agreements, use secure credentials to access customers’ resources, audit the security of data forms and APIs, etc. We can perform a security audit on demand or involve an authorized third-party organization to do so to prove that we’re working to the highest security standards. 

  4. Low level of involvement in the customer’s business. In-house project managers usually know all the details of a customer’s business and industry that are relevant to the project. Outsourced PMs need a considerable amount of time to figure all that out. If they don’t research the customer’s business thoroughly, they can overlook industry standards, key stakeholders, etc.

    How do we mitigate it? At Apriorit, we have extensive experience delivering projects of various sizes to customers in various industries and countries. We generally assign PMs that have relevant knowledge and experience for particular projects. Also, our PMs constantly exchange experience and knowledge, which helps them be prepared for any challenge. If one PM has never worked with a particular type of business, they can always ask for advice from colleagues.

  5. Challenging communication. Outsourcing is often associated with communication issues related to time zone differences, cultural particularities, language, etc. Since a PM is the key communicator between the outsourced team, stakeholders, and in-house developers, challenges in interacting with them can affect the whole project.

    How do we mitigate it? Our PMs negotiate specific time frames and communication channels convenient for both the customer’s team and the outsourced team. Vast experience working with customers from around the world has taught us to be flexible and adapt to any conditions. 

  6. Additional costs. From the customer’s perspective, a PM may not appear as important as software developers — especially if the customer has in-house PMs with the same set of skills. In this case, hiring an outsourced PM may seem like an unreasonable project cost.

    How do we mitigate it? In our experience, hiring an outsourced PM saves a project’s budget in the long run. We’ve seen our PMs successfully mitigate risks stemming from miscommunication, unclosed requirement gaps, and insufficient risk analysis. Also, good knowledge of the team allows our PMs to efficiently manage team resources and detect issues faster than a customer’s in-house PMs usually do.

As a software R&D outsourcing company, Apriorit has extensive experience providing project managers to our customers as part of our core offer. We believe that team management on the side of the outsourcing provider is beneficial for both the provider and the customer. Now let’s find out how exactly an outsourced PM can facilitate development.

Read also:
Benefits and Risks of Outsourcing Engineering Services

6 advantages of outsourcing project management

A project manager is a leader who should be able to prevent, tackle, and resolve any issue without missing deadlines. Moreover, as a leader, they play the role of mentor, teacher, facilitator, and problem-solver for the team.

Our customers give us plenty of positive feedback on the performance of our PMs. Based on their reviews, we can outline these benefits of outsourcing project management as a part of the development team:

Key benefits of hiring a PM as part of an outsourced team
  1. High level of skills and experience. As they work with many customers in a variety of industries, outsourced companies usually have diverse experience in both project management and software development. Also, in an outsourcing company, the PM team is usually large enough to allow a customer to find a PM that has the relevant skills to successfully deliver their project. 
  2. Efficient team and resource management. An outsourced PM shares the development team’s time zone, location, language, and corporate culture. Also, an outsourced PM knows each member of their team well enough to understand their strengths and weaknesses and efficiently use the team’s skills to deliver the project on time.

    On one of our projects, a customer requested hiring more developers to quickly implement a range of additional features. Instead, the PM optimized the workload of the project team and parallelized some of the existing tasks to make sure the team could handle the new features without involving additional specialists. In this way, the PM saved the customer’s budget while meeting the deadlines.

  3. Saving the customer’s time during development. Solving small issues and managing day-to-day activities takes up most of a PM’s time. When an outsourced PM is in charge of the project, the customer’s stakeholders and in-house PMs can focus on their internal business activities. This is especially useful for companies that don’t have any internal project managers, as they would otherwise need lots of time to organize the work of an outsourced development team themselves. With a hired PM, they only need to answer the PM’s questions, without diving into project management.
  4. Compliance with the schedule and budget. A PM’s core responsibility is to ensure that the development team meets the deadlines and doesn’t overuse project resources. Good knowledge of our developers, efficient resource management, and fast response to issues allow our PMs to lead projects to success within the estimated time and budget.
  5. Cost-effectiveness. Outsourcing is generally considered more cost-effective than keeping in-house employees. For PMs, this is also valid: hiring a project manager along with an outsourced development team is less costly than recruiting and training an additional in-house PM.
  6. A fresh perspective on your project. As someone from outside the project, an outsourced PM can look at it from a different angle and assess the project impartially. Such unbiased insights are crucial for evaluating risks and making business decisions. 

Read also:
Business Analyst Roles & Responsibilities in Software Development

To ensure that our customers feel all these advantages of outsourcing project management, our project managers have developed their own approach to handling projects. Let’s see how they do it. 

How do Apriorit PMs work?

Project managers at Apriorit have created a workflow that allows them to bring any project to a successful conclusion regardless of its size, industry, and possible challenges.

Project management workflow at Apriorit

A PM’s activities start by collecting and estimating software requirements provided by the customer or elicited by a business analyst, then creating a statement of work. This document is a product specification that describes the software functionality and implementation plan that the customer and outsourcing development company have agreed upon. The statement of work is useful for both parties: the customer uses it to understand how the product will be implemented and how it will operate, and the development team uses it to plan how to develop and deliver the product within a specified timeframe.

After that, the PM chooses a development methodology for the project. This choice is based on several factors:

  • Scope of work estimated and agreed between Apriorit and the customer
  • Required delivery frequency
  • Due dates
  • Resource structure and availability
  • Possible or planned future updates

At Apriorit, we prefer to use an Agile methodology for our projects as it allows for more flexibility and earlier delivery compared to other methodologies.

The next step is to choose relevant tools for several areas of the project:

  • Project planning. These are tools that help us create and visualize project plans with critical path timeframes and thus evaluate project delivery dates. We usually create project maps in Microsoft Project. For the periodic planning of iterative projects, we estimate story points using planning poker.
  • Status tracking and delivery control. Jira is our primary tool to track the status and progress of project activities. We also often employ task and issue trackers such as Asana and Azure DevOps. For comparatively long projects (three months and longer), we use earned value analysis to measure whether the project is being delivered successfully.
  • Knowledge keeping. We create Confluence spaces to store information and project documentation. The whole team can share their knowledge, suggest ideas, or ask questions at any time. This helps us streamline project communication.
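The earned value analysis mentioned above rests on a few standard formulas: planned value, earned value, and the cost and schedule performance indices. A minimal sketch with purely illustrative numbers (not from a real project):

```python
# Earned value analysis: standard formulas for checking project health.
# bac is the budget at completion; all figures below are illustrative.

def earned_value_metrics(bac, pct_planned, pct_complete, actual_cost):
    pv = bac * pct_planned       # planned value: budgeted cost of scheduled work
    ev = bac * pct_complete      # earned value: budgeted cost of completed work
    cpi = ev / actual_cost       # cost performance index (>1 means under budget)
    spi = ev / pv                # schedule performance index (>1 means ahead)
    return {"PV": pv, "EV": ev, "CPI": round(cpi, 2), "SPI": round(spi, 2)}

# Halfway through the schedule of a $100k project: 40% done, $50k spent.
print(earned_value_metrics(100_000, 0.5, 0.4, 50_000))
# {'PV': 50000.0, 'EV': 40000.0, 'CPI': 0.8, 'SPI': 0.8}
```

A CPI or SPI below 1.0 signals that the project is over budget or behind schedule, which is exactly the early warning a PM watches for.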

Although we have preferred tools for each PM activity, we always discuss them with the customer and development team to make sure everyone is comfortable with the PM’s choice. 

When the most suitable methodology and tools are agreed upon, the PM creates a communication plan that defines:

  • roles of stakeholders
  • communication channels
  • meeting and reporting frequency and format

This document is a must for long projects that involve at least several developers and stakeholders.

Read also:
Building an Effective Management Structure in a Service Company

It’s also up to a PM to organize cooperation between the customer’s team and Apriorit’s team. We often work on projects with distributed teams, where our developers integrate into or augment the customer’s team. In this case, our PMs pay extra attention to ensuring a smooth flow of information and processing of tasks. To do that, they consult with team leaders, figure out the skillsets of both teams, and distribute tasks according to team members’ skills.

If the project team is distributed between several time zones, it’s up to the PM to plan communication and use the overlapping time in the most efficient manner. In our projects, we arrange meetings to discuss the most important issues that need the customer’s attention. For all other questions, we use emails and status updates that the customer can read any time they want.

Once the cooperation scheme has been discussed and agreed on, the PM creates a project plan. It includes:

  • software requirements from the statement of work and elicitation sessions
  • iterations (if an iterative methodology was chosen)
  • estimates

When all the preparations are finished and all the documents are signed, the development team starts working. From this point, the PM’s task is to coordinate the project team and control project progress. The PM manages risks during development, identifying and troubleshooting threats before they materialize.

Another important part of the PM’s job is ensuring that the customer receives a high-quality product. In our projects, we start testing products in the early development stages and conduct user acceptance testing when we finish implementing requirements from the specification. A PM coordinates the development and testing teams to make sure they produce software according to the client’s requirements and within schedule.

Related services

Specialized Quality Assurance & Testing Solutions


At Apriorit, PMs always add great value to our dedicated development teams. More than 18 years of experience in outsourcing software development have convinced us and our customers that having a project manager on the development team’s side significantly benefits a project.

Experience managing hundreds of projects in many industries and communicating with customers from all over the world helps Apriorit project managers lead any project to success.

Over the years, we’ve formulated our own approach to outsourcing project management as a part of our services. Contact us to get a well-managed and skillful development team for your next project!


Businesses that maintain large amounts of information are in a continuous search for new and more efficient methods of data management. This is exactly where data virtualization software comes in handy. So what is this innovative technology that a lot of people are talking about and how does it help us manage data? Let’s find out.


Definition of Data Virtualization

Advantages of Data Virtualization

Data Virtualization: Top Vendors

How to Create a Data Virtualization Tool

Challenges in Data Virtualization Software Development


With the constantly increasing volume of information, data delivery has become a problem, and data virtualization solutions can solve it. Surveys on data virtualization by Denodo show that only 11% of companies used data virtualization in 2011, but this rate increased to 25% in 2015. So what is the reason for this growing use of data virtualization? In this article, we’ll cover the main aspects of data virtualization technology and the causes of its growth.

Definition of Data Virtualization

What is data virtualization? It’s a process of data management including querying, retrieving, modifying, and manipulating information in other ways without needing to know technical details such as source or format. Data virtualization uses virtualization technology to abstract data from its existing storage (a data silo) and presentation and provide a holistic view of that data regardless of the source.

The key features of data virtualization are:

  • Data abstraction
  • Data federation (combining multiple datasets into one)
  • Semantic integration (integrating data structures without losing meaning)
  • Data services
  • Data unification and security.

Data virtualization provides a view of requested data in a local database or web service, and its aim is to process large amounts of data. Data virtualization software usually supports nearly any type of data including XML, flat files, SQL, Web services, MDX, unstructured data in NoSQL, and Hadoop databases.

How Data Virtualization Works

How does data virtualization work? When a user submits a query, data virtualization software determines the optimal way to retrieve the requested data, taking into account its location. Then the data virtualization software takes the requested data, performs transformations, and returns it to the user. It’s worth mentioning that these tools don’t overload users with information such as the absolute path to the requested data or actions applied to retrieve it.
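As a rough illustration of this flow, here is a toy virtualization layer in Python. The source names, formats, and transform functions are invented for the example; a real engine would plan and optimize queries against databases, files, and web services rather than call Python lambdas.

```python
# Minimal sketch of a data virtualization layer: the caller asks for a dataset
# by logical name and never sees where the data lives or how it is fetched.

class DataVirtualizationLayer:
    def __init__(self):
        self._sources = {}          # logical name -> (fetch function, transform)

    def register(self, name, fetch, transform=lambda rows: rows):
        self._sources[name] = (fetch, transform)

    def query(self, name):
        fetch, transform = self._sources[name]   # locate the right source
        return transform(fetch())                # fetch, normalize, return

layer = DataVirtualizationLayer()
# One "source" is a tuple list, another a key-value dict; both come back
# as uniform row dicts, hiding the physical format from the user.
layer.register("sales",
               fetch=lambda: [("2024-01", 120), ("2024-02", 95)],
               transform=lambda rows: [{"month": m, "units": u} for m, u in rows])
layer.register("customers",
               fetch=lambda: {"alice": "EU", "bob": "US"},
               transform=lambda d: [{"name": n, "region": r} for n, r in d.items()])

print(layer.query("sales")[0])      # {'month': '2024-01', 'units': 120}
```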

Related services

Cloud Computing & Virtualization Development

Advantages of Data Virtualization

Data virtualization is an effective solution, especially for organizations that require a tool to rapidly manipulate data and have a limited budget for third-party consultancy and infrastructure development. Thanks to data virtualization, companies can have simplified and standardized access to data that’s retrieved from its original source in real-time.

Furthermore, the original data sources remain secure since they’re accessed solely through integrated data views. Data virtualization can be used to manage corporate resources in order to increase operational efficiency and response times.

Benefits of data virtualization for companies include:

  • Faster access to data for decision-makers.
  • Increased operational efficiency due to fast formation of data stores.
  • Lower spending on data search and structuring solutions.
  • Advanced analytics due to powerful data compilation.
  • Reduced security risks with additional levels of access and permission management that separate original data silos from the user context.

The data virtualization market is constantly growing. Companies that use data virtualization technologies see the benefits in cost savings on data integration processes that allow them to connect shared data assets. Gartner predicts that 35% of enterprises worldwide will use this technology for their data integration processes by 2020. Let’s discuss the reasons for this increasing adoption.

Reduced Infrastructure Workload

Traditional data centers require focused data management, a stable network, and many system resources. All these components form a heavy system load and increase corporate expenses. Data virtualization allows companies to implement a simpler architecture in comparison with standard data warehouses. This approach leads to less data replication and, as a result, a smaller infrastructure workload.

Increased Speed of Data Access

Data virtualization is a more effective alternative to traditional data federation, which relies on extraction, transformation, and loading (ETL) tools. With the ETL approach, creating physical data stores is quite time-consuming and can take up to several months. Data virtualization tools instead work with metadata extracted from original data sources and allow changes to be made to data quickly. Therefore, they ensure fast data aggregation and structuring.

Data Unification

Data virtualization unifies data by abstracting it from its location or structure. No matter where data is stored (in the cloud or on-site) and no matter if it’s structured or unstructured, you can retrieve it in one unified form. This increases the possibilities for further data processing and analysis.

Simplified Discovery

Data virtualization allows both applications and users to find, read, and query data using metadata. Metadata-based querying significantly speeds up data search through virtual data services and allows you to retrieve requested information much faster than with a traditional semantic matching approach.
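A toy illustration of such metadata-driven discovery; the catalog entries, tags, and formats are hypothetical:

```python
# Sketch of metadata-based discovery: datasets are found by querying their
# metadata (tags, format) instead of scanning the data itself.

catalog = [
    {"name": "orders_2024", "format": "parquet", "tags": {"sales", "finance"}},
    {"name": "crm_contacts", "format": "sql",     "tags": {"customers"}},
    {"name": "web_logs",     "format": "json",    "tags": {"traffic", "raw"}},
]

def discover(tag=None, fmt=None):
    """Return dataset names whose metadata matches all given criteria."""
    return [d["name"] for d in catalog
            if (tag is None or tag in d["tags"])
            and (fmt is None or d["format"] == fmt)]

print(discover(tag="sales"))          # ['orders_2024']
print(discover(fmt="json"))           # ['web_logs']
```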

Simplified Collaboration

Data unification leads to another significant advantage: efficient data sharing. With growing amounts of data, it becomes difficult to process data in different formats and from different sources. Data virtualization allows applications to access any dataset regardless of its format or location.

Read also:
Cloud computing vs Virtualization

Data Virtualization: Top Vendors

In the first quarter of 2015, Forrester listed the nine biggest data virtualization vendors worldwide. Furthermore, the research agency evaluated them according to 60 different criteria including strategy, current offerings, and market presence.

Forrester’s list of the top enterprise data virtualization vendors for Q1 2015 includes:

  • Oracle
  • Cisco Systems
  • Microsoft
  • SAS Institute
  • IBM
  • Denodo Technologies
  • Red Hat
  • Informatica
  • SAP

Forrester stated in 2015 that data virtualization vendors had significantly increased their cloud capabilities, scalability, and cybersecurity since the agency’s previous evaluation in 2012.

In its 2017 Market Guide for Data Virtualization, Gartner listed 22 data virtualization vendors. These vendors’ solutions offer diverse capabilities, although all of them support data virtualization technology.

  • Capsenta
  • Cisco
  • Data Virtuality (the University of Leipzig, Germany)
  • Denodo
  • IBM
  • Informatica
  • Information Builders
  • K2View
  • OpenLink Software
  • Oracle
  • Progress
  • Red Hat
  • Rocket Software
  • SAP
  • SAS
  • Stone Bond Technologies
  • Attunity
  • Cirro
  • Microsoft
  • Palantir
  • Talend
  • VirtDB

Let’s look at some representative data virtualization solutions and their general characteristics.

Existing Data Virtualization Tools

The data virtualization market is occupied by large software vendors such as Informatica, IBM, and Microsoft, as well as specialized vendors such as Denodo. The tools provided by large vendors cover nearly all possible tasks related to data virtualization. The software offered by smaller companies is mostly focused on advanced automation and improved integration of data sources.

Red Hat

JBoss Data Virtualization is a tool created by Red Hat. This solution is aimed at providing real-time access to data extracted from different sources, creating reusable data models and making them available for customers upon request. Red Hat’s solution is cluster-aware and provides numerous data security features such as SSL encryption and role-based access control.


Denodo

The Denodo data virtualization platform offers improved dynamic query optimization and provides services that handle data in various formats. It supports advanced caching and enhanced data processing techniques. The platform also ensures a high level of security by providing features such as pass-through authentication and granular data masking.


Delphix

Despite Gartner not including Delphix in its market guide, we’ve still decided to briefly cover this solution and note its main differences from the top vendors. In 2015, the Delphix startup raised $75 million in its last funding round to further improve its tool. The Delphix data virtualization solution captures data from corporate applications, masks sensitive information to ensure cybersecurity compliance, manages user access, and generates data copies for users. Its specialty is creating 30-day backups that don't exceed the size of the files on disk.

How to Create a Data Virtualization Tool

Data virtualization allows users to get a virtual view of data and access it in numerous formats with business intelligence (BI) tools or other applications. This is just a tiny part of what data virtualization solutions should be able to do, however. In this section, we’ll discuss what aspects technology vendors should consider before building data virtualization solutions.

Necessary Features for Data Virtualization Solutions

Abstracting data from sources and publishing it to multiple data consumers in real-time allows businesses to collaborate and function iteratively, thereby considerably reducing turnaround time for data requests. However, a good data virtualization solution has to provide users with more capabilities than this. Let’s consider the most important ones.

Connectivity with Various Data Sources

Any data virtualization software contains a connectivity layer. This layer allows the solution to extract data across resources. The more data types, database management systems, and file systems your solution supports, the more useful it will be.

Components that ensure access to various data sources include:

  • Adapters and connectors
  • Cloud data stores
  • Database infrastructures (such as Hadoop and NoSQL)
  • Mainframes
  • Data warehouses and data marts
  • Applications (BI tools, CRM, SaaS, and ERP). 

You should implement various adapters in your software. For this purpose, you can create your own or license existing components.
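A minimal sketch of such a connectivity layer: every physical source is wrapped in an adapter with one common interface, so the virtualization engine can read from any of them the same way. The class and method names here are hypothetical, and the two adapters are deliberately simplistic.

```python
# Adapter pattern for the connectivity layer: each backing store gets its own
# adapter, but all adapters expose the same read() interface.

from abc import ABC, abstractmethod

class SourceAdapter(ABC):
    @abstractmethod
    def read(self):
        """Return rows as a list of dicts, whatever the backing store is."""

class CsvAdapter(SourceAdapter):
    def __init__(self, text):
        self._text = text
    def read(self):
        header, *rows = [line.split(",") for line in self._text.splitlines()]
        return [dict(zip(header, row)) for row in rows]

class KeyValueAdapter(SourceAdapter):
    def __init__(self, store):
        self._store = store
    def read(self):
        return [{"key": k, "value": v} for k, v in self._store.items()]

adapters = [CsvAdapter("id,name\n1,Alice\n2,Bob"),
            KeyValueAdapter({"region": "EU"})]
for a in adapters:                    # the engine treats all sources alike
    print(a.read())
```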

Semantic Data Analysis

The most effective tools use a single interface and look at metadata to provide users with data they request. Your solution should contain analytics systems in order to save your customers time on structuring and analyzing large amounts of information.

Efficient Data Provisioning

Data provisioning is the process of making data available to users and applications, and doing it safely is a significant part of ensuring cybersecurity. Data security includes user authentication and enforcement of group and user privileges. Your solution should provide role-based and schema-level security so you can wisely manage access to data for geographically distributed users and data sources. Reliable data provisioning will protect your data from uncontrolled access and eliminate risks related to intellectual property and confidential data.
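A toy sketch of role-based, schema-level provisioning; the roles, schema names, and data are invented for illustration:

```python
# Role-based, schema-level provisioning: each role may see only certain
# schemas, and requests outside that set are refused.

PERMISSIONS = {
    "analyst":  {"sales", "marketing"},
    "engineer": {"sales", "marketing", "raw_events"},
    "guest":    set(),
}

def provision(role, schema, data_by_schema):
    """Return the requested dataset only if the role may access its schema."""
    if schema not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not read schema {schema!r}")
    return data_by_schema[schema]

data = {"sales": [101, 102], "raw_events": ["click", "view"]}
print(provision("analyst", "sales", data))        # [101, 102]
try:
    provision("analyst", "raw_events", data)      # refused for this role
except PermissionError as e:
    print(e)
```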

Read also:
Virtualization in Software Testing: Advantages and Disadvantages

Challenges in Data Virtualization Software Development

Although data virtualization offers numerous benefits, it comes with challenges too. According to a survey by Denodo, 46% of organizations that have implemented data virtualization solutions see their biggest challenge as adapting the software for departments besides IT. Of companies surveyed, 43% face particular issues with managing software performance. So what challenges can technology vendors face when they decide to build their own data virtualization solution?

Ensuring Adequate Speed and Responsiveness of the System

Business owners can have varying and dynamic data needs, and you should take this into account. Fortunately, data virtualization is flexible enough to deliver data in multiple modes depending on how it has to be represented. For example, pricing analysts may need real-time sales and turnover information during holiday sales, when a one-day delay is not acceptable. Highly optimized semantic-tier processes will make your software more effective, while query caching, process distribution, and the use of memory and processing grids will help ensure faster data delivery.
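Query caching, the simplest of these techniques, can be sketched like this. The "source" below is a stub with a call counter standing in for an expensive remote fetch; a production cache would also need invalidation and expiry.

```python
# Query caching in the virtualization tier: repeated queries are served from
# memory instead of hitting the slow underlying source again.

calls = {"count": 0}

def slow_source(query):
    calls["count"] += 1             # stands in for an expensive remote fetch
    return f"result of {query}"

cache = {}

def cached_query(query):
    if query not in cache:          # miss: go to the source once
        cache[query] = slow_source(query)
    return cache[query]             # hit: served from memory

cached_query("SELECT revenue")
cached_query("SELECT revenue")      # second call never reaches the source
print(calls["count"])               # 1
```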

Efficient Management of Shared Resources

Whether data is internal or external to your organization, stored in the cloud, in a big data source, or on a social media platform, your data virtualization solution should be able to access it, structure it, and make it conform to existing patterns so it’s easy to use. When a company uses shared data resources, it’s quite a challenging task to create a solution that can effectively manage them. That’s why you should implement data governance capabilities to ensure efficient data analysis and error tracking, especially when data is being pulled from a variety of sources.

Providing Tools for Migration from Legacy Systems

Data virtualization typically plays an instrumental role as an abstraction layer between old and new systems during migration of legacy systems. Therefore, your solution should contain tools for migrating from legacy systems. Users should be able to employ data virtualization for prototyping and integrate both kinds of systems when working with the parallel run architecture.


Data virtualization software development is time-consuming and requires deep expertise. Professionalism, qualifications, and long-term experience in general software development are necessary for creating enterprise-level solutions.

Furthermore, a deep knowledge and understanding of the needs of technology enterprises will allow you to build a useful tool to help organizations process data.

Data virtualization and cloud computing are among our specialties at Apriorit. We’ve helped various technology vendors develop advanced data processing solutions. Send us your request for proposal and we’ll get back to you and discuss what we can offer for your project.


This white paper describes a code protection technology for Linux applications based on the so-called nanomite approach, previously applied on Windows systems.

It is a modern anti-debugging method that can also be effectively applied for process anti-dumping.

Apriorit Code Protection for Linux is provided as a commercial SDK with various types of licensing.


Project Description

The project was written for 32-bit Linux applications, but the principles can easily be implemented for other operating systems, so further development is planned.

First, we will take a look at creating a custom debugger for Linux. After that, we will move on to the implementation of nanomites. Binutils and Perl are used to build the project.

We apply the combination of two techniques: Nanomites and Debug Blocker.

Nanomites are code segments containing key application logic, marked with specific markers in source files. The protector cuts these segments out of the protected program during packing. When unpacking, they are obfuscated, written to allocated memory, and replaced in the original code with jumps. A table of conditional and unconditional jumps is built; it contains not only nanomite jumps but also some nonexistent "trash" entries. Such "completeness" is a serious obstacle to recovering this table.

Debug Blocker implements parent process protection. The protected program is started as a child process, and the protector (the parent process) attaches to it as a debugger. Thus, a third party can debug only the parent process. Combined with the nanomite technique, Debug Blocker creates reliable protection for an application, making debugging and reversing it very complicated and time-consuming.

Read more about Nanomite Technology in our white paper Nanomite and Debug Blocker Technologies: Scheme, Pros, and Cons

Both techniques were successfully used in commercial Windows protectors. Apriorit Code Protection is the first product to implement them for Linux application protection.

General Idea

Apriorit Code Protection Scheme

Apriorit Code Protection includes 2 main components:

  1. Nanomites: a static library that contains the debugger process logic.
  2. Nanomites Debugger: a debugger executable compiled with the Nanomites library.

We also provide Nanomites Demo: a demo application protected by nanomites.

There’s also a script collection for adding the nanomites to an application and for creating nanomites tables.

Protected Application Creation Sequence

  1. The application is compiled with the -S flag to produce assembler listings.
  2. The assembler listings are analyzed with a Perl script. All jump and call instructions (e.g., jmp, jz, jne, call) are processed and replaced with instructionOffsetLabel(N): int 3.
  3. The user application, which now consists of the modified assembler listings, is compiled.
  4. A Perl script parses the compiled application and builds the table of nanomites.

Debugger Library Description

Our debugger is based on the ptrace (process trace) system call, which exists in several Unix-like systems, including Linux, FreeBSD, and Mac OS X. It allows tracing or debugging a selected process. We can say that ptrace provides full control over a process: we may change the application’s execution flow and display or change values in its memory and registers. It should be mentioned that ptrace grants no additional permissions: possible actions are limited by the permissions of the started process. Moreover, when a program with the setuid bit is traced, this bit doesn’t take effect, as privileges are not escalated.

After the demo application is processed with the scripts, it is no longer independent: if it is started without the debugger, a segmentation fault appears at once. From now on, the debugger starts the demo application. For this purpose, a child process is created in the debugger, and the parent process then attaches to it. All debugging events from the child process, including all jump events, are processed in a loop; the parent process analyzes the nanomite table and flag table to perform the correct action.

The Advantages of the Apriorit Solution Compared to Armadillo

Armadillo (also known as SoftwarePassport) is a commercial protector for Windows applications. It introduced the nanomite approach and also uses the Debug Blocker technique (protection by a parent process).

In Armadillo, the binary code is modified. When a jump instruction 2 to 5 bytes long is replaced with the shorter 1-byte int 3 (0xCC) instruction, some free space remains. Correspondingly, a nanomite can be restored by writing the original jump instruction back over the int 3.

In our approach, we change the code at the source level, so a nanomite is exactly 1 byte long. Correspondingly, it cannot be restored by writing the original instruction over it, and the code cannot be extended in place of the nanomite, as all relative jumps would break. Still, there is a way to restore our nanomites, for example the following.

A Way to Recover Apriorit Nanomites

A hacker can create an additional section in the executable file, then find the nanomite and obtain its jump instruction and jump address.

Then the restoration goes as follows:

Nanomite Recover

Such a solution is complex to implement. First, a disassembler engine is required for automation; second, the moved instructions may themselves contain jump instructions with relative addresses, which will require correction.

Learn more about Linux Anti-debugging SDK!
