-------------------------
Affected Roles: System Owner • System Administrator
Related DW Spectrum VMS Apps: DW Spectrum Server
Difficulty: Medium
-------------------------
Overview
DW Spectrum 4.0 introduces a new video analytics feature called object detection. Plugins for version 3.2 could inject analytics events into DW Spectrum VMS with some basic attributes: date, time, and text tags. Version 4.0 additionally allows plugins to provide data about objects detected in the picture. An object is primarily a sequence of rectangle coordinates and a set of custom tags. The object database is searchable by area of interest (as with motion detection), by the types and tags provided by the plugin, by date/time, and by the camera where the object appeared.
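To make the data model concrete, here is a minimal sketch in Python of the kind of record a plugin might produce for one detected object. The field names are illustrative assumptions, not the actual plugin SDK structures.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class BoundingBox:
        # Normalized rectangle on the video frame (coordinates in the 0.0-1.0 range).
        x: float
        y: float
        width: float
        height: float

    @dataclass
    class DetectedObject:
        object_type: str                  # type reported by the plugin, e.g. "car"
        camera_id: str                    # camera where the object appeared
        start_time_ms: int                # first appearance, UTC milliseconds
        end_time_ms: int                  # last appearance, UTC milliseconds
        track: List[BoundingBox] = field(default_factory=list)  # one rectangle per sample
        tags: Dict[str, str] = field(default_factory=dict)      # custom plugin tags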
Object metadata is stored partially in an SQL-based database (the file object_detection.sqlite in the storage root directory); the rest of the metadata is stored in a proprietary database (files in the subdirectory archive/metadata/<camera_mac>/YYYY/MM/analytics_detailed_*.bin).
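For capacity planning it can be handy to measure how much space this metadata already occupies. Below is a minimal sketch, assuming a Linux server and a hypothetical storage root path, that sums the size of both parts described above.

    import glob
    import os

    # STORAGE_ROOT is a placeholder; substitute your actual storage root.
    STORAGE_ROOT = "/opt/dw/storage"

    # Proprietary part: archive/metadata/<camera_mac>/YYYY/MM/analytics_detailed_*.bin
    pattern = os.path.join(STORAGE_ROOT, "archive", "metadata",
                           "*", "*", "*", "analytics_detailed_*.bin")
    total_bytes = sum(os.path.getsize(p) for p in glob.glob(pattern))

    # SQL part: object_detection.sqlite in the storage root.
    sqlite_path = os.path.join(STORAGE_ROOT, "object_detection.sqlite")
    if os.path.exists(sqlite_path):
        total_bytes += os.path.getsize(sqlite_path)

    print("Analytics metadata footprint: %.1f MB" % (total_bytes / 2**20))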
Users can choose which storage should be used for metadata by going to the Server Settings -> Storage Management tab and clicking “Use to store analytics data”.
Since the VMS runs in a particular network and hardware environment, there are capacity considerations regarding object storage and access. Let us consider several common usage scenarios. To gather the figures below, we ran the VMS server with a stub plugin in a virtualized environment with the following specification.
Host CPU: Intel Core i7-6800K, 3.4 GHz, 6 cores, 15 MB cache, VT-x and VT-d enabled
Host chipset: Intel C610
Host memory: 32 GB, DRAM frequency 1066.5 MHz
Host HDD: WDC WD40EFRX-68N32N0
Host OS: Windows 10 Home 10.0.18362 N/A Build 18362
Guest CPU: 6 cores
Guest memory: 4 GB
Guest OS: Ubuntu 18.04 LTS
VMS version: 4.0.0
Creating objects
Object creation is initiated by the analytics plugin and may involve intense CPU usage, depending on how the plugin is implemented. The stub plugin adds almost no CPU overhead, but real systems have to be planned with the plugin's CPU and memory load in mind, even though these concerns are not directly related to the VMS itself.
Object creation implies write operations; the amount of data written depends on how long the object appears in the camera stream: the longer the object is visible, the more data is stored. For objects with an average duration of 3.3 seconds, it takes about 26 KB to store the metadata on the drive. This amounts to roughly 7 kbit/s of stream overhead, which is insignificant compared to the video stream itself.
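A back-of-the-envelope sketch of the resulting storage growth, using the ~26 KB-per-object figure above; the objects-per-hour rate is an assumed workload, not a measured value.

    # bytes_per_object comes from the measurement above (~26 KB per object);
    # objects_per_hour is an assumption - adjust it to your scene.
    bytes_per_object = 26 * 1024
    objects_per_hour = 500

    daily_bytes = bytes_per_object * objects_per_hour * 24
    print("Metadata growth: %.0f MB per camera per day" % (daily_bytes / 2**20))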
Searching the object database
This scenario is more complicated, since a search may cause read requests across all of the System's metadata.
Searching over 2000 objects that belong to one camera within a 1-hour time frame causes the VMS server to read about 3 MB of data from the analytics storage. The database indices usually allow for better performance, but for simplicity let us assume that the I/O volume grows linearly with the time frame span, the average object intensity, and the camera count. For example, if the object database is stored on a general-purpose HDD with a random-access read speed of 60 MB/s and the acceptable search latency is 1500 ms, this gives a maximum search time frame of 30 hours.
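The same arithmetic, written out as a short sketch; the 3 MB-per-camera-hour figure comes from the measurement above, while the drive speed and latency budget are the assumptions from the example.

    # Measured above: ~3 MB read per camera per hour of time frame (2000 objects).
    mb_per_camera_hour = 3.0
    # Assumptions from the example: HDD random-read speed and latency budget.
    read_speed_mb_s = 60.0
    latency_budget_s = 1.5

    readable_mb = read_speed_mb_s * latency_budget_s      # 90 MB within the budget
    max_hours = readable_mb / mb_per_camera_hour          # 30 camera-hours
    print("Maximum search time frame: %.0f hours for a single camera" % max_hours)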
CPU performance does not affect this scenario significantly. In contrast, available RAM may give a performance boost for searches with partly repeated criteria, as the OS caches recently read data. This usage scenario also gains from using SSDs to store object metadata. Due to their high latency and usually unstable throughput, remote drives (CIFS or NFS) do not perform well in this usage scenario.
Searching the object database in a system of merged servers
If a camera was moved from one server in a System to another, its metadata is stored on more than one server. In that case, the server to which the client is connected interrogates all other servers for metadata that fits the filter.
Evaluating performance here is an extremely complex problem, since it depends on how many servers are involved, how scattered the metadata is, and what the network performance between the servers is. As in the previous scenario, the most probable bottleneck is the random-read I/O throughput of the metadata storage. In addition, if a remote server replies with tens of thousands of objects and the communication goes over a congested connection, the network itself may become an issue, as the sketch below illustrates.
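As a rough illustration of the network side, this sketch estimates the transfer time for a large metadata reply; both the serialized per-object size and the link speed are assumptions.

    # Both values below are assumptions for illustration only.
    objects_in_reply = 20_000          # objects returned by a remote server
    bytes_per_object_on_wire = 500     # serialized size of one object record
    link_speed_mbit_s = 10             # congested inter-server link

    transfer_s = (objects_in_reply * bytes_per_object_on_wire * 8
                  / (link_speed_mbit_s * 1e6))
    print("Estimated reply transfer time: %.1f s" % transfer_s)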
Conclusions
Ensure you have enough CPU capacity to run object detection.
Searching through longer time frames, more cameras, or cameras with a more intensive object stream requires more storage throughput. Consider using SSDs for analytics storage, and avoid using remote drives for it.
Ensure you have stable network connectivity between servers to prevent false failover triggers and enhance throughput and responsiveness.