Technical Overview


To use Phoenix EXP effectively, it helps to understand its components and how they interoperate to create a federated Information Repository. Phoenix EXP's balanced, extended client/server design provides advantages over technologies that rely on a single client/server, client, or server component to drive and manage operations.

In a typical client/server model, clients are hard-coded to specific servers and cannot be changed without manual intervention. Our extended client/server architecture includes a Locator Service, which dynamically locates and uses any storage resource available on the network as needed. This design offers advantages over a traditional client/server model, including:

Automatic fail-over - When more than one Vault is available, clients divide their use among the available Vaults. Should the current Vault fail, clients fail over to another Vault.
Dynamic resource allocation - Any network computer with Phoenix EXP installed can access storage resources on the network. An Information Repository can comprise many storage devices, based on whatever storage media and drive technology you want. Configurable parameters direct stored data to different storage devices based on storage performance and longevity requirements. For more information see Locator Service.
Load balancing - This is the distribution of jobs and data throughout the Information Repository. Phoenix EXP queries each Vault to find the best match for processing requirements, storage resources, and free space, then connects to that Vault and copies files to it. The network remains load balanced.
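The fail-over and load-balancing behavior above can be sketched as a simple selection routine. This is a hypothetical illustration only, assuming the Locator Service hands a client a list of candidate Vaults with an availability flag and a free-space figure; the names Vault and pick_vault are illustrative, not the actual Phoenix EXP API.

```python
# Hypothetical client-side Vault selection with fail-over.
# Assumes the Locator Service supplies candidate Vaults (illustrative shape).
from dataclasses import dataclass

@dataclass
class Vault:
    name: str
    free_bytes: int
    is_up: bool

def pick_vault(vaults):
    """Prefer the live Vault with the most free space;
    skip (fail over past) any Vault that is down."""
    candidates = [v for v in vaults if v.is_up]
    if not candidates:
        raise RuntimeError("no Vault available in the Information Repository")
    return max(candidates, key=lambda v: v.free_bytes)

vaults = [
    Vault("vault-a", 500 * 2**30, is_up=False),  # failed: skipped over
    Vault("vault-b", 200 * 2**30, is_up=True),
    Vault("vault-c", 800 * 2**30, is_up=True),   # most free space wins
]
print(pick_vault(vaults).name)  # prints "vault-c"
```

A real client would also weigh processing requirements and media type, as described above; free space alone keeps the sketch short.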


The Vault concept makes the SoleraTec product unique and powerful. A Vault can be thought of as a storage container. On the server side, a Vault is comprised of a central processing unit (CPU), Phoenix EXP software, and storage. Because the architecture scales by adding Vaults, each Vault requires its own product key.

The Vault(s) on the network make up the Information Repository to manage all your data storage needs. The Vaults have attributes associated with them, such as licensing, media, and files, and are easy to upgrade. They are decentralized, making the Information Repository a distributed solution across your network - but overall data management remains centralized and under complete control. Phoenix EXP is truly a multi-tiered storage and digital asset management solution. For more information see Vault Admin.


Phoenix EXP includes server components and client components.

During the installation process you can opt to install the server or the client, but note that clients do not function unless a Phoenix EXP server has been installed on the network. For more information see Installation.


The server contains all the applications found in the client component, plus Vaults and administrative tools.

Server Vault applications are:

Vault Service - The Vault service manages the Vault(s) on the network. The Vault uses storage space in a hierarchical manner, where each level is unique and features its own access control list and other parameters. This allows for load balancing, automatic fail-over, and granular targeting of media for specific uses and across multiple devices.
Data Services Service - Use to create policies for complete data lifecycle management in the Information Repository. This service provides an easy way to migrate, replicate, or purge data between Vaults and ensure that data is preserved or maintained on media within a storage hierarchy.
Target and Process Service - This service ingests data into the Information Repository; as data is ingested, context and content metadata is extracted and cataloged in databases.
Add Remove Vault Wizard - Use to add or remove a hard disk Vault, shadow hard disk Vault, standalone data tape Vault, or a data tape library Vault.

With the server set up, you can optionally select the following server administration tools:

Vault Admin - Prepare and manage media in the Information Repository Vaults.
Data Service Policies - Create data services policies to manage the data lifecycle within the Information Repository. The data services policies can classify, collate, and consolidate data automatically. Easily migrate, replicate, or purge data between Vaults and ensure that data is preserved or maintained on media within a storage hierarchy.
Target and Process Policies - Create policies to ingest data automatically. As data is ingested into the Information Repository, context and content metadata is extracted from the data and cataloged in databases.
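A data services policy pairs match criteria with a lifecycle action and a target tier. The sketch below illustrates that idea only; the field names (match, action, target_vault, and so on) are assumptions, not the real policy schema, which you build in Data Service Policies.

```python
# Hypothetical shape of a data-services policy: match criteria plus a
# lifecycle action within the storage hierarchy. Field names are
# illustrative, not the actual Phoenix EXP policy schema.
from datetime import timedelta

policy = {
    "name": "age-out-video",
    "match": {"file_type": "video", "older_than": timedelta(days=90)},
    "action": "migrate",          # or "replicate" / "purge"
    "source_vault": "hard-disk",
    "target_vault": "data-tape",  # next tier in the storage hierarchy
}

def applies(policy, file_type, age):
    """Does this policy's match criteria select a file of the given
    type and age?"""
    m = policy["match"]
    return file_type == m["file_type"] and age > m["older_than"]
```

For example, applies(policy, "video", timedelta(days=120)) is true, so a 120-day-old video would be migrated from the hard disk tier to data tape under this illustrative policy.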

Tasks the server manages include: running the Locator Service; finding and connecting to available Vaults; receiving, locating, and storing content and metadata to and from Vaults; and SHA-256 checksum generation (digital fingerprinting).
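The digital fingerprinting step can be illustrated with the standard Python hashlib library. This is a generic sketch of computing a SHA-256 checksum over a file in chunks; Phoenix EXP's own implementation is internal to the product.

```python
# Generic SHA-256 digital fingerprint of a file. Chunked reading keeps
# memory use flat even for very large assets.
import hashlib

def fingerprint(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read 1 MiB at a time until EOF (b"" sentinel).
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing a stored fingerprint against a freshly computed one is how a system can validate that a copied or retrieved file is bit-for-bit identical to the original.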


The client contains all of the applications found in the server component except Vaults and administrative tools. It can run on any computer that is not running Phoenix EXP servers.

Client functionality runs from the executables in your suite of product applications. Each application plays a role in your complete digital asset management solution. All Phoenix EXP applications are easy to use, and include many features to customize job parameters and schedules. The clients run independently of the servers and Vaults.

Client applications are:

Locator Service - The Locator is an essential communication component required to run any Phoenix EXP application. It enables dynamic resource allocation and keeps the Information Repository current regardless of how large the environment is.
Network Activity - Use to monitor all data, media, and vault operations within the Information Repository in real-time.
Search and Retrieve - Locate data within the Information Repository. Files are located quickly, regardless of how many times they have been replicated or migrated, or where they now reside. The files can be restored to the original location or exported to any other computer on the network.
Target and Process - Ingest data into the Information Repository. As data is ingested into the Information Repository, context and content metadata is extracted from the data and cataloged in databases.
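The ingest-time cataloging described above can be sketched as follows. This is a hypothetical illustration: context metadata (name, size, modification time) is gathered from a file and added to a catalog so it can be searched later. The field names and the catalog list are assumptions, not the actual Phoenix EXP databases or schema.

```python
# Hypothetical sketch of ingest-time context-metadata extraction.
# The catalog list stands in for the repository's metadata databases.
import os
import time

def extract_context_metadata(path):
    st = os.stat(path)
    return {
        "name": os.path.basename(path),
        "size_bytes": st.st_size,
        "modified": time.strftime("%Y-%m-%d", time.gmtime(st.st_mtime)),
    }

catalog = []  # illustrative stand-in for the metadata databases

def ingest(path):
    """Catalog a file's context metadata as it enters the repository."""
    catalog.append(extract_context_metadata(path))
```

A real ingest pipeline would also extract content metadata (for example, text or media properties inside the file), which is beyond this sketch.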

Tasks the client applications manage include: searching digital assets for capture; ingesting and moving assets within the Information Repository; communicating with servers for storage resources; digital fingerprint validation; error and activity logging; and file location and retrieval.

Administrative Control

Along with the Phoenix EXP applications, a suite of command line executables is available for managing the Information Repository. When a windowed session is not available, the command line executables can be used to script and run Phoenix EXP. Use the command line interface only if you are an advanced user of Phoenix EXP and are familiar with command line environments.

Access to any Phoenix EXP functionality can be controlled fully by the system administrator. Access to sensitive activities within the Information Repository, such as installing, opening, securing, and de-securing Vaults, requires administrator level access. This allows administrators to determine who has permission to use management functionality.

Administrators can limit access to functionality as follows:

Client access for users.
Protecting clients. By default, policies can be modified only by administrators. When changing rights, exercise caution: services that use policy files normally run at an elevated privilege level and have access to data to which the typical user may not have rights.
Setting rights on media. If a user does not have read access to media, the user cannot search or retrieve from that media. If a user does not have write access to media, the user cannot ingest data to that media.
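The media rights rules above can be sketched as a simple access check. This is a hypothetical illustration; the ACL shape and function names are assumptions, not Phoenix EXP's actual permission model.

```python
# Hypothetical media-level access checks: read access gates
# search/retrieve, write access gates ingest. The ACL mapping
# (media -> user -> rights) is illustrative only.
media_acl = {
    "tape-01": {"alice": {"read", "write"}, "bob": {"read"}},
}

def can_retrieve(user, media):
    """Without read access to the media, no search/retrieve is granted."""
    return "read" in media_acl.get(media, {}).get(user, set())

def can_ingest(user, media):
    """Without write access to the media, the user cannot ingest to it."""
    return "write" in media_acl.get(media, {}).get(user, set())
```

Here bob can retrieve from tape-01 but cannot ingest to it, while an unlisted user can do neither.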

As with the setting of rights, most of the functionality that is set by default in Phoenix EXP can be modified to meet your unique information lifecycle management (ILM) requirements.


Phoenix EXP provides resources for incorporating pre-processing or post-processing scripts into storage, migration, replication, and purge policies. It also provides internal APIs to link the storage management engine to other front-end or back-end applications.

Command Lines

Your suite of product applications includes application executable files and dynamic link library (.dll) files. Notice that some of the executable names end in "Cl" - for example, VaultAdminCl.exe.

The "Cl" stands for command line. If you run the "Cl" version of an application executable, the application runs in a terminal. This functionality is intended for advanced users doing scripting or diagnostic work within the application.

In addition, some services run continuously in the background. The command line arguments you can work with are application startup arguments that advanced users can use to perform diagnostics on the program.

For more information see Command Line Arguments.