When working with an outsourced project, a project manager (PM) needs to focus on dozens of issues simultaneously. They need to facilitate the project remotely, ensure efficient communication, and keep an eye on project resource use. Headhunting and keeping such an experienced and skilled employee in-house may be challenging and costly, especially for small and medium-sized businesses.
Hiring an outsourced PM as part of the development team is a great alternative to establishing in-house project management. An outsourced PM can improve team management, ensure a transparent development process, reduce your budget, and bring many more valuable benefits.
In this article, we showcase the benefits having a PM on the development side brings to a team and share our approach to project management. We also analyze common concerns about outsourcing project management and explain how we address them at Apriorit.
Outsourcing software development has become an integral part of IT projects since it provides companies with numerous benefits:
However, not everyone believes that hiring an outsourced project manager along with the development team can bring as many benefits as an in-house project manager. Customers usually have the following concerns about outsourcing project management:
How do we mitigate it? PMs at Apriorit do have more control over the development team than the customer does, which allows them to ensure that developers work to the best of their abilities. However, we always keep our customers updated on the project status and discuss critical decisions (development methodology, choice of technology, sprint goals, etc.) with them. Mostly, our PMs do this by producing daily or weekly status reports, facilitating regular online standups, and writing follow-up emails.
How do we mitigate it? First, at Apriorit we have strict internal coding standards that help us build software of the highest quality. Second, during the project discovery phase, our business analysts and PMs elicit customer requirements to define the product’s goals and functionality. Thus, we ensure that the final product will meet all of the customer’s needs, expectations, and standards.
How do we mitigate it? While working on projects, our PMs and developers follow security best practices: they sign non-disclosure agreements, use secure credentials to access customers’ resources, audit the security of data forms and APIs, etc. We can perform a security audit on demand or involve an authorized third-party organization to do so to prove that we’re working to the highest security standards.
How do we mitigate it? At Apriorit, we have extensive experience delivering projects of various sizes to customers in various industries and countries. We generally assign PMs that have relevant knowledge and experience for particular projects. Also, our PMs constantly exchange experience and knowledge, which helps them be prepared for any challenge. If one PM has never worked with a particular type of business, they can always ask for advice from colleagues.
How do we mitigate it? Our PMs negotiate specific time frames and communication channels convenient for both the customer’s team and the outsourced team. Vast experience working with customers from around the world has taught us to be flexible and adapt to any conditions.
How do we mitigate it? In our experience, hiring an outsourced PM saves a project’s budget in the long run. We’ve seen our PMs successfully mitigate risks of miscommunication, unclosed requirement gaps, and insufficient risk analysis. Also, good knowledge of the team allows our PMs to efficiently manage team resources and detect issues faster than a customer’s in-house PMs usually do.
As a software R&D outsourcing company, Apriorit has extensive experience providing project managers to our customers as part of our core offer. We believe that team management on the side of the outsourcing provider is beneficial for both the provider and the customer. Now let’s find out how exactly an outsourced PM can facilitate development.
A project manager is a leader who should be able to prevent, tackle, and resolve any issues without missing deadlines. Moreover, as a leader, they play the role of a mentor, teacher, facilitator, and problem-solver for the team.
Our customers give us plenty of positive feedback on the performance of our PMs. Based on their reviews, we can outline these benefits of outsourcing project management as a part of the development team:
On one of our projects, a customer requested hiring more developers to quickly implement a range of additional features. Instead, the PM optimized the workload of the project team and parallelized some of the existing tasks to make sure the team would be able to handle new features without involving additional specialists. In this way, the PM saved the customer’s budget while meeting the deadlines.
To ensure that our customers feel all these advantages of outsourcing project management, our project managers have developed their own approach to handling projects. Let’s see how they do it.
Project managers at Apriorit have created a workflow that allows them to bring any project to a successful conclusion regardless of its size, industry, and possible challenges.
A PM’s activities start by collecting and estimating software requirements provided by the customer or elicited by a business analyst, then creating a statement of work. This document is a product specification that describes the software functionality and implementation plan that the customer and outsourcing development company have agreed upon. The statement of work is useful for both parties: the customer uses it to understand how the product will be implemented and how it will operate, and the development team uses it to plan how to develop and deliver the product within a specified timeframe.
After that, the PM chooses a development methodology for the project. This choice is based on several factors:
At Apriorit, we prefer to use an Agile methodology for our projects as it allows for more flexibility and earlier delivery compared to other methodologies.
The next step is to choose relevant tools for several areas of the project:
Although we have preferred tools for each PM activity, we always discuss them with the customer and development team to make sure everyone is comfortable with the PM’s choice.
When the most suitable methodology and tools are agreed upon, the PM creates a communication plan that defines:
This document is a must for long projects that involve at least several developers and stakeholders.
It’s also up to a PM to organize cooperation between the customer’s team and Apriorit’s team. We often work on projects with distributed teams, where our developers integrate into or augment the customer’s team. In this case, our PMs pay extra attention to ensuring a smooth flow of information and processing of tasks. To do that, they consult with team leaders, figure out the skillsets of both teams, and distribute tasks according to team members’ skills.
If the project team is distributed between several time zones, it’s up to the PM to plan communication and use the overlapping time in the most efficient manner. In our projects, we arrange meetings to discuss the most important issues that need the customer’s attention. For all other questions, we use emails and status updates that the customer can read any time they want.
Once the cooperation scheme has been discussed and agreed on, the PM creates a project plan. It includes:
When all the preparations are finished and all the documents are signed, the development team starts working. From this point, the PM’s task is to coordinate the project team and control project progress. The PM manages risks during development, identifying and troubleshooting threats before they materialize.
Another important part of the PM’s job is ensuring that the customer receives a high-quality product. In our projects, we start testing products in the early development stages and conduct user acceptance testing when we finish implementing requirements from the specification. A PM coordinates the development and testing teams to make sure they produce software according to the client’s requirements and within schedule.
At Apriorit, PMs always add great value to our dedicated development teams. More than 18 years of experience in outsourcing software development have convinced us and our customers that having a project manager on the development side significantly benefits a project.
Experience managing hundreds of projects in many industries and communicating with customers from all over the world helps Apriorit project managers lead any project to success.
Over the years, we’ve formulated our own approach to outsourcing project management as a part of our services. Contact us to get a well-managed and skillful development team for your next project!
Cloud computing services are on the rise and keep evolving. But it can be complicated to keep up with all the new terms along with the differences in infrastructure deployment.
Single-page applications (SPAs) have been sweeping the web development market like a hurricane over the last few years. Facebook, Google, Twitter, PayPal, Netflix, and other IT companies choose SPAs for their projects. SPAs are prized for their high performance, responsiveness, and smooth user experience. But do they really represent a new stage of web development and can they really displace traditional multi-page applications (MPAs)?
Today, large enterprises, small and mid-sized businesses, and even startups all over the globe use outsourced engineering services to bring their software development projects to life. An outside team is often a good solution for optimizing project costs, supplementing available resources, acquiring rare expertise, and shortening product delivery terms. However, there are a number of less obvious yet important advantages of outsourcing engineering services.
Businesses that maintain large amounts of information are in a continuous search for new and more efficient methods of data management. This is exactly where data virtualization software comes in handy. So what is this innovative technology that a lot of people are talking about and how does it help us manage data? Let’s find out.
With the constantly increasing volume of information, data delivery has become a problem. This problem can be solved by data virtualization solutions. Surveys by data virtualization vendor Denodo show that only 11% of companies used data virtualization in 2011 but that this figure increased to 25% in 2015. So what is the reason for this growing use of data virtualization? In this article, we’ll cover the main aspects of data virtualization technology and the causes of its growth.
What is data virtualization? It’s a process of data management including querying, retrieving, modifying, and manipulating information in other ways without needing to know technical details such as source or format. Data virtualization uses virtualization technology to abstract data from its existing storage (a data silo) and presentation and provide a holistic view of that data regardless of the source.
The key features of data virtualization are:
Data virtualization provides a view of requested data in a local database or web service, and its aim is to process large amounts of data. Data virtualization software usually supports nearly any type of data including XML, flat files, SQL, Web services, MDX, unstructured data in NoSQL, and Hadoop databases.
How does data virtualization work? When a user submits a query, data virtualization software determines the optimal way to retrieve the requested data, taking into account its location. Then the data virtualization software takes the requested data, performs transformations, and returns it to the user. It’s worth mentioning that these tools don’t overload users with information such as the absolute path to the requested data or actions applied to retrieve it.
Data virtualization is an effective solution, especially for organizations that require a tool to rapidly manipulate data and have a limited budget for third-party consultancy and infrastructure development. Thanks to data virtualization, companies can have simplified and standardized access to data that’s retrieved from its original source in real time.
Furthermore, the original data sources remain secure since they’re accessed solely through integrated data views. Data virtualization can be used to manage corporate resources in order to increase operational efficiency and response times.
Benefits of data virtualization for companies include:
The data virtualization market is constantly growing. Companies that use data virtualization see benefits in cost savings on data integration processes and in the ability to connect shared data assets. Gartner predicts that 35% of enterprises worldwide will use this technology for their data integration processes by 2020. Let’s discuss the reasons for this increasing adoption.
Traditional data centers require focused data management, a stable network, and many system resources. All these components form a heavy system load and increase corporate expenses. Data virtualization allows companies to implement a simpler architecture in comparison with standard data warehouses. This approach leads to less data replication and, as a result, a smaller infrastructure workload.
Data virtualization is a more effective alternative to data federation approaches that require extraction, transformation, and loading (ETL) into physical data stores. With traditional federation, creating physical data centers is quite time-consuming and can take up to several months. Data virtualization tools instead use metadata extracted from the original data sources and allow changes to be made to data quickly, ensuring fast data aggregation and structuring.
Data virtualization unifies data by abstracting it from its location or structure. No matter where data is stored (in the cloud or on-site) and no matter if it’s structured or unstructured, you can retrieve it in one unified form. This increases the possibilities for further data processing and analysis.
Data virtualization allows both applications and users to find, read, and query data using metadata. Metadata-based querying significantly speeds up data search through virtual data services and allows you to retrieve requested information much faster than with a traditional semantic matching approach.
Data unification leads to another significant advantage: efficient data sharing. With growing amounts of data, it becomes difficult to process data of different formats and from different sources. Data virtualization allows applications to access any dataset regardless of its format or location.
In the first quarter of 2015, Forrester listed the nine biggest data virtualization vendors worldwide. Furthermore, the research agency evaluated them according to 60 different criteria including strategy, current offerings, and market presence.
Forrester’s list of the top enterprise data virtualization vendors for Q1 2015 includes:
Forrester stated in 2015 that data virtualization vendors had significantly increased their cloud capabilities, scalability, and cybersecurity since the agency’s previous evaluation in 2012.
In its 2017 Market Guide for Data Virtualization, Gartner listed 22 data virtualization vendors. These vendors’ solutions offer diverse capabilities, although all of them support data virtualization technology.
Let’s look at some representative data virtualization solutions and their general characteristics.
The data virtualization market is occupied by large software vendors such as Informatica, IBM, and Microsoft, as well as specialized vendors such as Denodo. The tools provided by large vendors cover nearly all possible tasks related to data virtualization. The software offered by smaller companies is mostly focused on advanced automation and improved integration of data sources.
JBoss Data Virtualization is a tool created by Red Hat. This solution is aimed at providing real-time access to data extracted from different sources, creating reusable data models and making them available for customers upon request. Red Hat’s solution is cluster-aware and provides numerous data security features such as SSL encryption and role-based access control.
The Denodo data virtualization platform offers improved dynamic query optimization and provides services that handle data in various formats. It supports advanced caching and enhanced data processing techniques. The platform also ensures a high level of security by providing features such as pass-through authentication and granular data masking.
Although Gartner didn’t include Delphix in its market guide, we’ve still decided to briefly cover this solution and note its main differences from the top vendors. In 2015, the Delphix startup raised $75 million in its latest funding round to further improve its tool. The Delphix data virtualization solution captures data from corporate applications, masks sensitive information to ensure cybersecurity compliance, manages user access, and generates data copies for users. Its specialty is creating 30-day backups that don't exceed the size of the files on disk.
Data virtualization allows users to get a virtual view of data and access it in numerous formats with business intelligence (BI) tools or other applications. This is just a tiny part of what data virtualization solutions should be able to do, however. In this section, we’ll discuss what aspects technology vendors should consider before building data virtualization solutions.
Abstracting data from sources and publishing it to multiple data consumers in real-time allows businesses to collaborate and function iteratively, thereby considerably reducing turnaround time for data requests. However, a good data virtualization solution has to provide users with more capabilities than this. Let’s consider the most important ones.
Any data virtualization software contains a connectivity layer. This layer allows the solution to extract data across resources. The more data types, database management systems, and file systems your solution supports, the more useful it will be.
Components that ensure access to various data sources include:
You should implement various adapters in your software. For this purpose, you can create your own or license existing components.
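A connectivity layer with pluggable adapters can be sketched as a small registry of function pointers. This is a minimal sketch under our own assumptions: the adapter names, the `read` signature, and the stub backends are all hypothetical, not part of any specific product.

```c
#include <stddef.h>
#include <string.h>

/* One adapter per source type; function pointers keep the core engine generic */
typedef struct {
    const char *type;                       /* "sql", "csv", "rest", ... */
    const char *(*read)(const char *path);
} Adapter;

/* Stub backends standing in for real connectors */
const char *read_sql(const char *p) { (void)p; return "row"; }
const char *read_csv(const char *p) { (void)p; return "line"; }

#define MAX_ADAPTERS 8
static Adapter registry[MAX_ADAPTERS];
static int registry_len = 0;

/* Register an adapter: one you wrote yourself or a licensed component
 * wrapped behind the same interface. Returns -1 if the registry is full. */
int register_adapter(const char *type, const char *(*read)(const char *)) {
    if (registry_len == MAX_ADAPTERS) return -1;
    registry[registry_len++] = (Adapter){ type, read };
    return 0;
}

/* The engine dispatches by source type, not by concrete implementation */
const char *read_source(const char *type, const char *path) {
    for (int i = 0; i < registry_len; i++)
        if (strcmp(registry[i].type, type) == 0)
            return registry[i].read(path);
    return NULL;   /* unsupported source type */
}
```

After calling `register_adapter("sql", read_sql)`, a query engine can call `read_source("sql", "db://orders")` without knowing anything about the backend; adding support for a new source type is just one more registration.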
The most effective tools use a single interface and look at metadata to provide users with data they request. Your solution should contain analytics systems in order to save your customers time on structuring and analyzing large amounts of information.
Safe data provisioning is a significant part of ensuring cybersecurity. Data provisioning is the process of making data available to users and applications. Data security includes user authentication and enforcing group and user privileges. Your solution should provide role-based and schema-level security so you can wisely manage access to data for geographically distributed users and data sources. Reliable data provisioning will protect your data from uncontrolled access and eliminate risks related to intellectual property or confidential data.
Although data virtualization offers numerous benefits, it comes with challenges too. According to a survey by Denodo, 46% of organizations that have implemented data virtualization solutions see their biggest challenge as adapting the software for departments besides IT. Of companies surveyed, 43% face particular issues with managing software performance. So what challenges can technology vendors face when they decide to build their own data virtualization solution?
Business owners can have varying, dynamic data needs, and you should take this into account. Fortunately, data virtualization is flexible enough to deliver data in multiple modes depending on how it has to be represented. For example, pricing analysts may need real-time sales and turnover information on certain holidays, when a one-day delay is not acceptable. Highly optimized semantic-tier processing will make your software more effective. Query caching, distributed processing, and the use of memory and processing grids will help ensure faster data delivery.
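Query caching is the simplest of those techniques to illustrate. Below is a minimal sketch of a fixed-size result cache with per-entry time-to-live; the structure, sizes, and the convention that a zero TTL means "deliver in real time, never cache" are our own assumptions for illustration.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

#define CACHE_SLOTS 4

typedef struct {
    char   key[32];
    char   value[32];
    time_t expires;        /* 0 = slot unused */
} CacheEntry;

static CacheEntry cache[CACHE_SLOTS];

/* Store a result with a time-to-live in seconds. A TTL of 0 means the
 * caller wants real-time delivery, so the result is never cached. */
void cache_put(const char *key, const char *value, int ttl_s) {
    if (ttl_s <= 0) return;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].expires <= time(NULL)) {   /* unused or expired slot */
            snprintf(cache[i].key, sizeof cache[i].key, "%s", key);
            snprintf(cache[i].value, sizeof cache[i].value, "%s", value);
            cache[i].expires = time(NULL) + ttl_s;
            return;
        }
    }
}

/* Return the cached value, or NULL if missing or expired, which forces
 * the virtualization layer to re-fetch from the original source. */
const char *cache_get(const char *key) {
    time_t now = time(NULL);
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].expires > now && strcmp(cache[i].key, key) == 0)
            return cache[i].value;
    return NULL;
}
```

In this scheme the pricing-analyst scenario maps to a zero TTL, while a report that tolerates a one-day delay could be cached with a TTL of 86400 seconds.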
Whether data is internal or external to your organization, stored in the cloud, in a big data source, or on a social media platform, your data virtualization solution should be able to access it, structure it, and make it conform to existing patterns so it’s easy to use. When a company uses shared data resources, it’s quite a challenging task to create a solution that can effectively manage them. That’s why you should implement data governance capabilities to ensure efficient data analysis and error tracking, especially when data is being pulled from a variety of sources.
Data virtualization typically plays an instrumental role as an abstraction layer between old and new systems during migration of legacy systems. Therefore, your solution should contain tools for migrating from legacy systems. Users should be able to employ data virtualization for prototyping and integrate both kinds of systems when working with the parallel run architecture.
Developing data virtualization software is time-consuming and requires deep expertise. Professionalism, qualifications, and long-term experience in general software development are necessary for creating enterprise-level solutions.
Furthermore, a deep knowledge and understanding of the needs of technology enterprises will allow you to build a useful tool to help organizations process data.
Data virtualization and cloud computing are among our specialties at Apriorit. We’ve helped various technology vendors develop advanced data processing solutions. Send us your request for proposal and we’ll get back to you and discuss what we can offer for your project.
Cloud computing and virtualization are two terms that people often encounter when looking to optimize and modernize their organization’s IT infrastructure. The terms are often used in conjunction with one another and sometimes, erroneously, even interchangeably. In reality, virtualization and cloud computing are two very different concepts, each with its own set of advantages and drawbacks, designed to tackle different challenges, although one is often used as part of the other.
As 2016 begins, we can read the traditional series of trend analyses and predictions published by industry experts after watching the IT sector, analyzing statistics, and conducting surveys. Global IT outsourcing trend analysis comes from CIO magazine, the KPMG Shared Services & Outsourcing Institute, Gartner, and others. Let’s try to analyze what these findings mean for software R&D service providers and what specific software development outsourcing trends we can mark out.
The wide popularity of agile methodology in current software development is hard to overestimate. The advent of agile techniques has allowed companies to save costs and greatly shorten time to market. However, one of the basic principles of agile methodology is the importance of face-to-face communication, which doesn’t mesh well with teams whose members are geographically dispersed. Managing agile distributed teams is always a struggle, but the reality is that most companies employ a distributed team in one form or another, either through outsourcing or simply because some people work from home or from a different city.
This white paper describes a code protection technology for Linux applications based on the so-called “nanomite” approach, previously applied on Windows systems.
It is a modern anti-debugging method that can also be effectively applied for process anti-dumping.
Apriorit Code Protection for Linux is provided as a commercial SDK with various types of licensing.
The project was written for 32-bit Linux applications, but the principles can easily be implemented for other operating systems, so further development is planned.
First, we will take a look at creating a custom debugger for Linux. After that, we will move on to the implementation of nanomites. Binutils and Perl are used for the compilation of the project.
We apply a combination of two techniques: Nanomites and Debug Blocker.
Nanomites are code segments containing key application logic, marked with specific markers in source files. The protector cuts these segments out of the protected program during packing. When unpacking, they are obfuscated, written to allocated memory, and replaced with jumps in the original code. A table of conditional and unconditional jumps is built; it contains not only nanomite jumps but also some non-existent “trash” entries. This makes recovering the table a serious challenge.
Debug Blocker implements protection by a parent process. The protected program is started as a child process, and the protector (the parent process) attaches to it as a debugger. Thus, a third party can debug only the parent process. Combined with nanomite technology, Debug Blocker creates reliable protection for an application, making its debugging and reversing very complicated and time-consuming.
Read more about Nanomite Technology in our white paper Nanomite and Debug Blocker Technologies: Scheme, Pros, and Cons
Both techniques were successfully used in commercial Windows protectors. Apriorit Code Protection is the first product to implement them for Linux application protection.
Apriorit Code Protection includes two main components:
We also provide Nanomites Demo: a demo application protected by nanomites.
There’s also a script collection for adding the nanomites to an application and for creating nanomites tables.
The application is compiled with the -S flag to produce an assembler listing;
The assembler listing is analyzed with a Perl script. All jump and call instructions (e.g., jmp, jz, jne, call) are processed and replaced with the instruction OffsetLabel(N): int 3;
After that, the user application, which consists of modified assembler listings, is compiled.
With the help of a Perl script, the compiled application is parsed and the table of nanomites is built.
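Conceptually, each table entry maps the address of an int 3 placeholder back to the original control-flow instruction, with fake entries mixed in. A simplified sketch of such a table and its lookup follows; the field names and layout are our own illustration, not the actual SDK format.

```c
#include <stddef.h>
#include <stdint.h>

/* One entry per replaced jump/call; "trash" entries point nowhere real
 * and exist only to obstruct recovery of the table. */
typedef struct {
    uintptr_t addr;        /* address of the int 3 placeholder */
    uintptr_t target;      /* where the original instruction transferred control */
    uint8_t   kind;        /* 0 = unconditional jump, 1 = conditional, 2 = call */
    uint8_t   trash;       /* 1 = fake entry inserted to confuse an attacker */
} Nanomite;

/* On each trap, the debugger looks up the faulting address to decide
 * where execution should actually continue. */
const Nanomite *find_nanomite(const Nanomite *table, size_t n, uintptr_t addr) {
    for (size_t i = 0; i < n; i++)
        if (!table[i].trash && table[i].addr == addr)
            return &table[i];
    return NULL;
}
```

An attacker dumping the table cannot tell real entries from trash without also reproducing the debugger's logic, which is what makes static recovery of the jumps expensive.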
Our debugger is based on the ptrace (process trace) system call, which exists in some Unix-like systems (including Linux, FreeBSD, and Mac OS X). It allows tracing or debugging a selected process. We can say that ptrace provides full control over a process: we can change the application’s execution flow and display or change values in memory and register states. It should be mentioned that ptrace grants no additional permissions: possible actions are limited by the permissions of the traced process. Moreover, when a program with the setuid bit is traced, this bit doesn’t take effect, as privileges are not escalated.
After the demo application is processed with the scripts, it is no longer independent: if it is started without a debugger, a segmentation fault occurs immediately. From then on, the debugger starts the demo application. For this purpose, a child process is created in the debugger, and the parent process attaches to it. All debugging events from the child process are processed in a loop, including all jump events; the parent process analyzes the nanomite table and the flag table to perform the correct action.
Armadillo (also known as SoftwarePassport) is a commercial protector developed for Windows application protection. It introduced the nanomite approach and also uses the Debug Blocker technology (protection by a parent process).
In Armadillo, the binary code is modified. That’s why, when a 2-5 byte long jump instruction is replaced with the shorter 1-byte int 3 (0xCC) instruction, some free space remains. Correspondingly, to restore a nanomite, we only need to write the original jump instruction over the int 3.
In our approach, we change the code at the source level. That’s why a nanomite is 1 byte long, and we won’t be able to restore it by writing the original instruction over it. We also cannot extend the code in place of the nanomite, as all relative jumps would break. But there is a way to restore our nanomites, for example the following.
A hacker can create an additional section in the executable file, then find the nanomite and obtain its jump instruction and jump address.
Then the restoration goes as follows:
Such a solution is complex to implement. First, a disassembler engine is required for automation; second, the moved instructions may themselves contain jump instructions with relative offsets, which will require correction.
Learn more about Linux Anti-debugging SDK!
In this white paper, we examine a modern anti-debugging method based on software nanomite technology. It’s also an efficient method of process anti-dumping.
This approach was first introduced in the Armadillo protector for Windows applications.