SoleraTec's Phoenix EXP is the ultimate data management tool, providing automated digital asset management for Windows, Mac, and Linux environments. The Data Services Service is the engine that manages all the data within your Information Repository.
From data ingest all the way to storage, Phoenix EXP stands out among data management systems because your data assets remain under complete control and quality is preserved. The Data Services Service delivers the peace of mind that data and video stored in the Information Repository, whether from yesterday or from five years ago, can be searched and exported to any desktop on the network at any time.
Through a Data Service Policy, the Data Services Service migrates, replicates, and purges data between Vaults, manages data and video overall, and supports file movement to other Vaults on the network.
The migration, replication, and purging of data are key to Phoenix EXP because these functions make the solution highly scalable. Many types of storage devices may be used with Phoenix EXP and can be added as you need them, which makes this multi-tiered architecture cost effective and well suited to long-term retention.
While the Data Services Service manages the data between Vaults within the Information Repository, the Locator Service tracks which Vaults on the network are active.
The product architecture proves advantageous with the following:
• Load Balancing: You can mark a specific storage pool as part of the destination criteria for jobs. Clients locate the Vault(s) that contain the designated storage pool and store the defined files to them. If multiple Vaults have media belonging to the storage pool defined for the job, clients pick the one with the most available free space. This keeps the entire Information Repository load balanced.

• Complete and Accurate Stored Files: As a job runs, the completeness and accuracy of the stored files are constantly verified. Most other products that perform data verification read data back from the media piece and compare it to the original data on the client computer, a method that is resource intensive and time consuming. Phoenix EXP instead performs a cyclic redundancy check (CRC) as the files are written to storage media.

If an error occurs while a job is running that prevents files from being processed, clients add those files to an internal retry list. At the end of the job, the client retries the files that failed on the first pass. If a file fails again during the retry session, the failure is logged to a retry file and included in the Information Repository's error logs. When a subsequent job starts, the system looks for the retry file, and the files slated for retry are added with first priority to the list of files to be captured.
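The load-balancing rule described above, picking the Vault in the designated storage pool with the most free space, can be sketched as follows. This is an illustrative sketch only; the Vault structure and field names are hypothetical and not part of the Phoenix EXP API.

```python
# Sketch of the load-balancing rule: among Vaults whose media belong
# to the job's storage pool, pick the one with the most free space.
# All names here are hypothetical, not the actual Phoenix EXP API.
from dataclasses import dataclass, field

@dataclass
class Vault:
    name: str
    storage_pools: set          # storage pools with media on this Vault
    free_space: int             # available free space, in bytes

def pick_vault(vaults, pool):
    """Return the Vault in the given storage pool with the most free space."""
    candidates = [v for v in vaults if pool in v.storage_pools]
    if not candidates:
        return None             # no Vault on the network serves this pool
    return max(candidates, key=lambda v: v.free_space)

vaults = [
    Vault("vault-a", {"video"}, 500),
    Vault("vault-b", {"video", "archive"}, 900),
    Vault("vault-c", {"archive"}, 1200),
]
print(pick_vault(vaults, "video").name)  # → vault-b
```

Because every client applies the same rule against the active Vaults reported by the Locator Service, new jobs naturally flow toward the emptiest eligible Vault.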
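The retry behavior described above, first pass, in-job retry, and a retry file that the next job captures with first priority, can be summarized in a short sketch. The function and variable names are illustrative assumptions, not the product's internals.

```python
# Sketch of the retry behavior described above; all names are
# hypothetical, not part of the Phoenix EXP implementation.

def run_job(files, store, retry_log):
    """Process files, retrying failures once; return files still failing.

    retry_log holds files left over from a previous job's retry file;
    they are captured with first priority. store(f) returns True on success.
    """
    # Files from the previous job's retry file go first.
    pending = list(retry_log) + [f for f in files if f not in retry_log]

    retry_list = []
    for f in pending:               # first pass
        if not store(f):
            retry_list.append(f)    # add to the internal retry list

    failed = []
    for f in retry_list:            # end-of-job retry session
        if not store(f):
            failed.append(f)        # logged to the retry file / error logs

    return failed                   # picked up by the next job's retry_log
```

For example, with a store function that always rejects a file named "bad", `run_job(["a", "bad", "b"], store, [])` returns `["bad"]`, and the next job would process "bad" before any of its own files.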
Workflow - Data Service Policy
An integral part of data management is working with Data Service Policies. The following steps provide an overview of the data management workflow:
1. From your suite of product applications, open the Data Service Policy application (DataSvcPolicies.exe). For further explanation of how to launch applications on different operating systems, see Getting Started.
2. If a policy does not open by default, create one. A policy must be created and opened before you can set parameters and initialize the process.
3. From the Data Service Policy window, select the Source tab to identify the Vault, media, or storage pool from which to migrate, replicate, or purge files.
4. From the Data Service Policy window, select the Criteria tab to designate which files to migrate, replicate, or purge. Within this tab, you can also add file and directory filters.
5. From the Data Service Policy window, select the Destination tab to specify where files are to be migrated or replicated.
6. From the Data Service Policy window, select the Scheduling tab to define job occurrences. By default, Data Service Policies are not scheduled to run.
7. From the Data Service Policy window, select the Advanced tab to set options such as pre-processing and post-processing commands, log output location, connection sharing, and throttling.
8. Click the Save icon to start the policy.
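The tabs in the steps above each contribute one part of the finished policy, which can be summarized as a simple data structure. The field names and values below are illustrative assumptions only, not Phoenix EXP's actual policy format.

```python
# Illustrative summary of the policy fields set in the tabs above.
# Every field name and value here is hypothetical, not the actual
# Phoenix EXP policy file format.
policy = {
    "source":      {"vault": "Vault-1",                  # Source tab
                    "storage_pool": "ingest"},
    "criteria":    {"older_than_days": 30,               # Criteria tab,
                    "file_filter": "*.mp4"},             # incl. file filters
    "destination": {"storage_pool": "long-term-archive"},# Destination tab
    "scheduling":  {"run_at": "02:00",                   # Scheduling tab;
                    "repeat": "daily"},                  # none by default
    "advanced":    {"pre_cmd": None, "post_cmd": None,   # Advanced tab
                    "log_dir": "/var/log/policy",
                    "throttle_mbps": 50},
}
print(policy["destination"]["storage_pool"])  # → long-term-archive
```

Saving the policy (step 8) is what hands this definition to the Data Services Service for execution.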