The importance of network traffic analysis is constantly increasing: novel network technologies hit the market as soon as they are developed, increasing the volume of data (including personal and sensitive information) transmitted over networks by innumerable network applications, many of which implement closed application-level protocols. Available network analysis tools typically offer no generic facilities for inspecting application protocols; usually only widespread protocols are supported.
Reuse of code fragments is common in software development. At the source-code level, a part of a program that performs a similar role may be copied with slight modifications. At the binary level, object files from libraries may be included at the linking stage into several executable files of a program.
Many hardware-based techniques have been developed to support increasing data flows: high-speed network channels and memory buses, high-frequency CPUs, hard disks with high data density and low access time. However, numerous unsolved problems remain on the software side of processing, analyzing, and storing data. This software must use hardware resources efficiently and also satisfy rigid requirements: support batch processing of huge data volumes with high throughput, provide reliable functioning on unreliable hardware, allow for good scaling, and provide efficient random data access. This project is aimed at creating a framework for data acquisition, filtering, analysis, and storage in real time on high-speed network channels. The framework will allow automation of a wide range of tasks related to high-speed data flows: classifying traffic, ensuring network security, analyzing social networks, and forecasting using big data.
The project goal is to create methods for solving program understanding problems that arise during the program lifecycle. The basic information for such methods is program structure, that is, program entities, relations between them, and their metrics. The methods will be used in the task of easing the back/forward porting of code changes between different versions of the given program.
The idea of the project is to build a solution for processing Big Data collected from numerical simulation of continuum mechanics problems.
The main aim of the project is to create software tools that allow more efficient use of computing resources in the cloud. The results are applied in the UniHUB system for hosting applications on virtual machines managed by OpenStack.
Many different methods are used to protect binary code from analysis; one of them is obfuscating transformations. Such transformations are usually applied by automatic obfuscators, which take source code or a binary file as input and produce an obfuscated executable program as output.
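As an illustration (a minimal, hypothetical sketch, not the transformation set of any particular obfuscator), one classic obfuscating transformation rewrites plain arithmetic into an equivalent mixed boolean-arithmetic (MBA) expression that is much harder to read:

```python
# Toy example of an obfuscating transformation: rewriting x + y as the
# equivalent mixed boolean-arithmetic expression (x ^ y) + 2 * (x & y).

def add_plain(x: int, y: int) -> int:
    return x + y

def add_obfuscated(x: int, y: int) -> int:
    # x ^ y sums the bits without carries; x & y marks carry positions,
    # which multiplying by 2 shifts left. The identity holds for all ints.
    return (x ^ y) + 2 * (x & y)

# The two functions are semantically identical, but the second one hides
# the simple addition from a reader of the (de)compiled code.
assert all(add_plain(a, b) == add_obfuscated(a, b)
           for a in range(-8, 8) for b in range(-8, 8))
```

A real obfuscator applies many such rewrites automatically, often nesting them, so that the original expression cannot be read off the disassembly.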
The project goal is to create system toolchain software that improves programmer productivity on distributed heterogeneous systems (typically with nodes having a couple of multicore CPUs and one or more accelerators such as GPUs). We will research tools for finding program bottlenecks and critical errors (including multithreading issues) and will evaluate new programming standards. We will also improve problem-specific parallel algorithms in sparse matrix libraries and in the OpenFOAM framework for CFD problems.
SharpChecker is a platform for static analysis of C# programs aimed at finding bugs. The tool contains both a code analyzer engine and components ready for integration into industrial development processes. SharpChecker can be used not only by programmers to fix errors in a project, but also by managers as another dynamic metric for evaluating product quality.
ISP Obfuscator is based on long-term research that ISP RAS started as early as 2002. The obfuscation technology has grown from basic research to industrial deployment and over these years has been covered in dozens of publications and two PhD theses. ISP Obfuscator integrates with compilers to make the transformations transparent for developers. At the moment two compiler infrastructures are supported: LLVM and GCC.
ISP RAS has developed the Svace static analysis tool, which satisfies all requirements for a production-quality analyzer. Svace supports the C/C++, Java, and C# programming languages (the C# analyzer can also be shipped separately, as it is implemented as a standalone tool), and it runs on Linux and Windows. Svace analyzes programs built for the Intel x86/x86-64 (Linux/Windows) and ARM/ARM64 architectures. Popular C/C++ compilers for Linux and Windows are supported, as well as a range of compilers for embedded systems.
QEMU is a full-system multi-target open source emulator. It is widely used for software cross-development. Many large companies (e.g., Google, Samsung, Oracle) prototype and emulate their hardware platforms and peripheral devices on QEMU. QEMU 2.9 emulates 20 different hardware platform families, including x86, PowerPC, SPARC, MIPS, and ARM.
Nowadays the task of network traffic analysis is of increasing relevance; the reasons are the improvement and deployment of new network technologies (VoIP, P2P, streaming video) and the emergence of numerous application-level protocols used by new network applications. Offline or online analysis is employed, depending on the particular analysis system and the problem being solved.
ISP Crusher is a toolset that combines various dynamic analysis approaches. It includes ISP Fuzzer, a fuzzing tool, and Sydr, an automatic test generation tool for complex programs. Two other ISP RAS analyzers, BinSide and Casr, will be included in Crusher within the next two years. Crusher allows organizing a development process that is fully compliant with GOST R 56939-2016 and other regulatory requirements of FSTEC of Russia.
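By way of illustration, the core loop of a mutational fuzzer can be sketched as follows. This is a toy sketch assuming a Python callable as the target; ISP Fuzzer's actual implementation works on real programs and is far more sophisticated.

```python
import random

def mutate(seed: bytes, max_flips: int = 4) -> bytes:
    """Randomly flip a few bytes of a seed input (a toy mutation strategy)."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        pos = random.randrange(len(data))
        data[pos] ^= random.randrange(1, 256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000) -> list:
    """Feed mutated inputs to `target`, collecting inputs that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception:
            crashes.append(sample)
    return crashes

# Hypothetical target: a parser that rejects inputs whose first byte is 0xFF.
def parser(data: bytes) -> None:
    if data[0] == 0xFF:
        raise ValueError("bad magic")

found = fuzz(parser, b"\x00hello")
```

Production fuzzers add coverage feedback, seed corpus management, and crash deduplication on top of this basic mutate-and-run loop.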
Casr creates automatic reports for crashes that happen during program testing or deployment. The tool works by analyzing Linux coredump files. The resulting reports contain the crash's severity and additional data that is helpful for pinpointing the cause of the error.
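A severity estimate of the kind such a report contains can be sketched as a heuristic over the fatal signal and the faulting address. The rules below are illustrative assumptions, not Casr's actual classification logic:

```python
# Hypothetical sketch of a crash-severity heuristic in the spirit of the
# reports Casr produces; the rules here are illustrative assumptions.
import signal
from typing import Optional

def classify_crash(sig: int, fault_addr: Optional[int]) -> str:
    """Map a fatal signal (and faulting address, if known) to a rough severity."""
    if sig in (signal.SIGSEGV, signal.SIGBUS):
        # Accesses near address 0 are usually plain null dereferences.
        if fault_addr is not None and fault_addr < 0x1000:
            return "NOT_EXPLOITABLE: null dereference"
        return "PROBABLY_EXPLOITABLE: invalid memory access"
    if sig == signal.SIGABRT:
        return "NOT_EXPLOITABLE: assertion failure or heap-check abort"
    if sig == signal.SIGFPE:
        return "NOT_EXPLOITABLE: arithmetic error"
    return "UNKNOWN"
```

A real analyzer extracts the signal, registers, and stack trace from the coredump itself and considers much more context (e.g., whether the crash occurs on a write or on the instruction pointer) before assigning severity.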
Software developers often face the problem of incorporating complex computations, data encryption and compression algorithms, and similar common functionality into their code. This is typically done by using standard libraries specializing in a group of tasks; these libraries are often distributed in binary code only. On the other hand, software maintenance is gradually becoming more and more important within the development cycle, and it includes the task of updating both the software's own code and external libraries. External libraries and auxiliary programs distributed in binary form need to conform to quality and security standards.
The idea of the project was to achieve a technological advance in the area of direct computational modeling of turbulence and the large eddy simulation method, as well as to find ways of using supercomputers effectively in industrial applications. Under the project, software implementing algorithms for numerical simulation of gas and fluid dynamics in industrial applications was developed on the basis of the OpenFOAM free software package. On the basis of this software, a method of using supercomputers for numerical modeling of gas- and hydrodynamics problems in industrial applications was developed.
Most existing tools for analyzing parallel programming libraries (MPI, OpenMP) and languages use low-level approaches to analyze the performance of parallel applications. There are many profiling tools and trace visualizers that produce tables and graphs with various statistics of the executed program. In most cases the developer has to look for bottlenecks and opportunities for performance improvement in these statistics and graphs manually. The amount of information the developer has to handle manually increases dramatically with the number of cores, the number of processes, and the problem size. Therefore, new methods of performance analysis that fully or partially process this output information will be more beneficial.
The idea of the project is to achieve a technological advance in developing an effective method for simulating unsteady near-field turbulent flows with the accuracy required by engineering applications, as well as in developing software for calculating the acoustic fields of near-field turbulent flows on hybrid-architecture supercomputers.
"Virtual supercomputer" software was developed in this project. The software complex is developed in free software model and is based on open source code components.
The project is aimed at the development of a software toolset for automated vulnerability detection and exploit construction. The toolset is designed to reveal vulnerabilities in the binary code of programs that operate over a network.
One of the widespread problems in binary code analysis is recovering the structure of incoming network packets or files read by a program. In the case of protected binary code, the cost of manual format recovery becomes prohibitively high. This project proposes to create an automated format recovery system that does not require its user to have specific knowledge about the target system software. Such a system will increase work efficiency and recovery accuracy.
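One ingredient of such recovery can be illustrated with a tiny, hypothetical sketch: comparing several captured packets of the same message type to separate constant bytes (likely structure, such as magic values and type tags) from variable bytes (likely payload). The real system would combine observations like this with dynamic analysis of the target binary.

```python
# Hypothetical sketch of one step in automatic format recovery:
# mark each byte offset as constant across all samples ('C', likely
# protocol structure) or variable ('V', likely payload).

def infer_field_mask(samples: list) -> str:
    """Return 'C' for offsets constant across all samples, 'V' otherwise."""
    length = min(len(s) for s in samples)
    mask = []
    for i in range(length):
        values = {s[i] for s in samples}
        mask.append("C" if len(values) == 1 else "V")
    return "".join(mask)

# Three captured packets sharing a fixed 5-byte header (illustrative data).
packets = [b"\x7fMSG\x00\x01ab", b"\x7fMSG\x00\x02cd", b"\x7fMSG\x00\x03ef"]
print(infer_field_mask(packets))  # CCCCCVVV
```

Grouping adjacent offsets with the same label then yields candidate field boundaries, which a subsequent analysis stage can refine.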
The project develops a programming model for distributed heterogeneous computing systems in which a single node consists of a multicore general-purpose computer (host machine) and one or several programmable logic devices (PLDs). The proposed model for programming heterogeneous systems combines the best approaches to creating high-level programming models with approaches that exploit accelerator capabilities with maximum efficiency through runtime libraries. At the high level, a programmer can describe a data-parallel algorithm, which can then be parameterized for a particular heterogeneous node.
A prototype of a web center for program analysis was developed under the project on the basis of software components of the UniHUB technological platform developed at ISP RAS, the computing infrastructure of the "University Cluster" program, and the Avalanche open program analysis package.
Research and development of a basis for a computation platform and an application programming interface (API) for automated numerical simulation of large-scale aerodynamic and hydrodynamic problems on petaflops supercomputers. Project start: 2011. Project end: 2012. Customer: the Ministry of Education and Science.
The project was aimed at creating an experimental platform for numerical simulation on top of the OpenFOAM library for heterogeneous computer systems with graphics processing units. The platform transfers the most resource-intensive computations to the graphics processing unit using CUDA technology and manages the interaction between the central and graphics processing units.
Dynamic and adaptive recompilation methods must be harnessed when designing a compilation system for general-purpose languages that takes into account the specifics of the target hardware and the most likely usage patterns. The LLVM infrastructure is a favorable environment in which to research these methods.
Work on the project addressed research into methods of access to high-performance resources and the development of an experimental hardware-software platform that provides access to high-performance resources as Web services.