How to conduct database computing performance testing?
Newly emerging database products often boast of being the “fastest”, yet they frequently fail to deliver satisfactory performance in real applications. Conducting computing performance testing in advance is therefore key to successful database selection. Many users are unfamiliar with performance testing, which can lead to misunderstandings. Here are some key considerations and suggestions:
1. Don't be lazy and blindly trust international standards.
There are international standards for database computing tests in the industry, for example TPCH. Its test cases are carefully designed, simple, and have a certain coverage (grouping, joins, and subqueries), so they do have reference value and authority. Many users do not want to design their own test cases and simply run the tests from these standards. However, most such standards aim to test SQL and only contain computations that are not overly complex when written in SQL. Real business is generally far more complex than these tests, and many computational scenarios are difficult or even impossible to implement in SQL. For example, the ubiquitous ordered computations and complex procedural computations can often only be implemented with stored procedures or external programming.
That is to say, the content of international standard tests is usually quite basic. A database that cannot even run the international standards well is certainly not a good one (in terms of computing performance), but running them well does not necessarily mean its performance is good. The international standards are only a necessary condition, and users should not trust them blindly. On top of these standards, users should search for and extract more test cases from their own actual scenarios. Conclusions drawn from testing both kinds of cases together will better withstand the test of practical applications.
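As a minimal sketch of the kind of ordered computation the standards leave out, consider finding the longest run of consecutive rising prices in a stock table. SQLite and the sample data here are illustrative stand-ins for whatever database is under test; the point is that the ordered part is done in external program code rather than in SQL.

```python
import sqlite3

# Illustrative price series; finding the longest run of consecutive
# rises is awkward in basic SQL but trivial in a procedural language.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (day INTEGER, price REAL)")
conn.executemany("INSERT INTO stock VALUES (?, ?)",
                 [(1, 10.0), (2, 10.5), (3, 10.7), (4, 10.2),
                  (5, 10.4), (6, 10.9), (7, 11.3), (8, 11.1)])

# The ordered computation happens outside SQL: scan rows in day order
# and track the longest streak of price increases.
rows = conn.execute("SELECT price FROM stock ORDER BY day").fetchall()
longest = streak = 0
prev = None
for (price,) in rows:
    streak = streak + 1 if prev is not None and price > prev else 0
    longest = max(longest, streak)
    prev = price

print(longest)  # prints 3
```

A test case like this exercises the database's (or its companion language's) ordered computing ability, which TPCH-style SQL benchmarks never touch.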
2. Design test cases around the application objective, reflecting the key features of the workload.
Test cases cannot replicate every computation in the project, nor can they be simplified to the point of losing the key features of the workload; they must be designed around the application objective of the database. If the database is to be used for offline batch computing, the testing focus is on the step-by-step computing ability of stored procedures and the performance of writing intermediate results to disk; concurrent computing performance matters little here. If the database is to be used for multidimensional analysis, a wide table is usually involved, so the test table should have enough fields, and multi-layer dimension tables and concurrent access performance should also be considered. If it is used for report backend computing, complex computations such as multi-table joins should be covered, and concurrency also matters. For fixed-point search, which locates a small number of target records in a large amount of data, the focus is on testing indexes and high-concurrency performance.
Sometimes the database must balance multiple objectives and serve several kinds of tasks at once; then multiple testing priorities need to be considered. In fact, no database can excel in every aspect, and various tests are needed to clarify where each candidate's advantages lie.
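For the scenarios above where concurrency matters, a small harness can measure query latency under several simultaneous clients. The sketch below uses SQLite, a tiny fact table, and a thread count chosen purely for illustration; a real test would point each client at the candidate database with realistic data volumes.

```python
import os
import sqlite3
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

# Build a small shared database file (illustrative schema and size).
db_path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE fact (id INTEGER, grp INTEGER, val REAL)")
conn.executemany("INSERT INTO fact VALUES (?, ?, ?)",
                 [(i, i % 10, i * 0.5) for i in range(10000)])
conn.commit()
conn.close()

def one_client(_):
    # Each concurrent client uses its own connection, as real
    # report-backend or multidimensional-analysis clients would.
    c = sqlite3.connect(db_path)
    t0 = time.perf_counter()
    c.execute("SELECT grp, SUM(val) FROM fact GROUP BY grp").fetchall()
    c.close()
    return time.perf_counter() - t0

with ThreadPoolExecutor(max_workers=8) as pool:
    latencies = list(pool.map(one_client, range(8)))

print(f"avg latency under 8 clients: {sum(latencies)/len(latencies):.4f}s")
```

Varying the worker count and recording the latency distribution (not just the average) shows how the candidate degrades as concurrency grows.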
3. Whether peripheral preparation time counts toward computing time depends on the scenario.
Some databases require certain preparation and peripheral work, including data import and export, pre-aggregation and redundant data for multidimensional analysis, and index building and updating for fixed-point search. Preparation time is generally long. For online query tasks such as multidimensional analysis, report backends, and fixed-point search, the queries are repeated many times, so a long preparation time does not affect query performance and need not be counted toward computing time. Offline batch tasks, however, run only once, so any preparation that must be completed before each run has to be added to the task's execution time.
Peripheral preparation takes various forms. Some technologies (such as SPL) require certain preparation work (e.g., storing data in order), which appears to add workload, but on the other hand do not require data import and export; all such costs should be accounted for uniformly during testing.
In addition, whether certain peripheral preparation is even permitted depends on actual project requirements. For example, preloading data into memory can improve computing performance, but if memory is tight in the production environment, this peripheral task may have to be disabled.
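The bookkeeping itself is simple: time the preparation phase and the query phase separately, then decide per scenario which to report. A sketch, again using SQLite and an invented table as stand-ins, where index building plays the role of peripheral preparation:

```python
import sqlite3
import time

# Preparation phase: load data and build the index (peripheral work).
t0 = time.perf_counter()
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, cust INTEGER, amt REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.0) for i in range(50000)])
conn.execute("CREATE INDEX idx_cust ON orders(cust)")
prep_time = time.perf_counter() - t0

# Query phase: the computation that repeats in an online scenario.
t0 = time.perf_counter()
total = conn.execute(
    "SELECT SUM(amt) FROM orders WHERE cust = 7").fetchone()[0]
query_time = time.perf_counter() - t0

# Online query task: report query_time alone (preparation amortizes).
# Offline batch task: report prep_time + query_time.
print(f"prep: {prep_time:.4f}s  query: {query_time:.6f}s  sum: {total}")
```

Keeping the two timings separate lets the same measurement serve both the online and the offline accounting rules.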
4. Match the hardware environment to the testing tasks.
The hardware configuration of the testing environment should be specified together with the test cases; otherwise the test results may deviate significantly.
During testing, if memory is larger than the total data volume, all data will be loaded into memory for computation, making it impossible to test the database's external storage computing ability.
Even if the total data volume exceeds memory capacity, it still depends on whether the data is stored in columnar format. With columnar storage, if only a few columns participate in the computation, the accessed data may still fit entirely in memory. In that case, enough columns must participate in the computation to actually measure the database's external storage computing capability.
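A back-of-envelope calculation makes the columnar effect concrete. All figures below (row count, field width, memory size) are assumed for illustration only:

```python
import math

# Assumed test parameters, for illustration only.
rows          = 1_000_000_000   # 1 billion rows
total_cols    = 100             # columns in the wide table
bytes_per_col = 8               # assume 8-byte fields

total_bytes = rows * total_cols * bytes_per_col   # ~745 GiB on disk
active_cols = 2                                   # columns the query touches
working_set = rows * active_cols * bytes_per_col  # ~15 GiB actually read

memory_bytes = 64 * 2**30                         # 64 GiB test server
fits_in_memory = working_set < memory_bytes       # columnar data stays cached

# Minimum number of columns a query must touch so the working set
# exceeds memory and external storage ability is really exercised:
min_cols = math.ceil(memory_bytes / (rows * bytes_per_col))
print(fits_in_memory, min_cols)  # prints True 9
```

So although the table far exceeds the 64 GiB of RAM, a two-column query never touches disk after warm-up; under these assumptions a query must involve at least 9 columns to genuinely test external storage computing.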
Using virtual machines to limit memory capacity is a convenient and effective way to test with a relatively small amount of data, reducing the testing workload.
In actual business, a server may run multiple tasks, and the test case's computations often cannot monopolize all hardware resources. The test server's hardware resources should therefore only be equivalent to the idle portion of the actual server.
Some users' production environment is already a private cloud; the test should then use common cloud host (virtual machine) specifications so that users can easily build the same environment.
Some databases are sensitive to storage type: they may run well on solid-state drives (SSDs) but poorly on ordinary mechanical hard drives. Users should consider whether their actual environment suits the target database, or whether it can be adapted.
5. Single-machine testing is preferred.
The distributed computing metrics claimed by many database vendors look attractive, so users often consider building a cluster environment from the start to test the database's computing capability. In fact, in most cases single-machine testing is enough to reveal computational performance issues. It is therefore recommended to test on a single machine first and consider cluster testing later, which greatly reduces testing cost.
A database that runs fast on a single machine will not necessarily run fast on a cluster, but a database that cannot run fast on a single machine will certainly not run fast on a cluster; like the international standards, this is a necessary condition. Testing on a single machine first eliminates unsuitable candidates at a lower cost.
6. Read vendors' test reports carefully.
Some vendors release reports merely claiming to be N times faster, without mentioning the scenario or environment. Such reports are irresponsible and have little practical reference value. The key requirement for a test report is reproducibility: someone else should be able to repeat the test. TPCH is reproducible, which is why it carries a certain authority.
Therefore, when reading test reports made by others, pay attention to whether the test data (or a program for generating it) is provided, whether the computation code is listed, and whether the hardware environment is clearly described. With these, anyone can reproduce the test.
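The data-generation half of reproducibility comes down to determinism: a generator driven by a fixed random seed produces identical test data on every machine. The sketch below is a hypothetical example (file name, schema, and value ranges are all invented for illustration):

```python
import csv
import random

def generate_orders(path, n_rows, seed=42):
    """Write a deterministic test dataset: the fixed seed means anyone
    running this script gets byte-identical data, which is the
    precondition for a reproducible test report."""
    rng = random.Random(seed)   # private, seeded random stream
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["id", "customer", "amount"])
        for i in range(n_rows):
            w.writerow([i,
                        rng.randrange(1000),
                        round(rng.uniform(1, 500), 2)])

generate_orders("orders_test.csv", 1000)
```

Publishing such a script (plus the computation code and hardware details) is far more useful than shipping the data files themselves, especially at large scale.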
The SPL performance test reports listed below state clear application objectives and key operational features for each set of test items, and provide the data scale, hardware configuration, test data (or a data-generating program), computation code, and test results, so they can be fully reproduced. Moreover, the hardware configurations in these tests serve the testing tasks and can effectively measure external storage computing, multi-table join, and time-ordered computing abilities. They can serve as important references for users designing their own test items.
SPL computing performance test series: TPCH
SPL computing performance test series: associate tables and wide table
SPL computing performance test series: multi-index aggregating
SPL computing performance test series: position association
SPL computing performance test series: funnel analysis
SPL computing performance test series: in-group accumulation
SPL Official Website 👉 https://www.scudata.com
SPL Feedback and Help 👉 https://www.reddit.com/r/esProc_SPL
SPL Learning Material 👉 https://c.scudata.com
SPL Source Code and Package 👉 https://github.com/SPLWare/esProc
Discord 👉 https://discord.gg/cFTcUNs7
Youtube 👉 https://www.youtube.com/@esProc_SPL