Clearpace Releases NParchive

Clearpace has released NParchive, its new archive data store. NParchive enables IT teams to reduce storage consumption and guarantee long-term, immediate access to structured data that must be archived. This is a task for which RDBMSs and data warehouses are neither designed nor well suited, as they are optimised for transactional processing or high-performance analysis.

The NParchive software compresses structured data from databases, data warehouses or log files into archives that are 95 per cent smaller than the data source, time-proofing the data and ensuring immutability. The NParchive data store that is created can be hosted on any storage architecture and remains fully searchable by native queries.

Research by storage analyst firm ESG highlights the growing pressure on IT managers to provide long-term access to structured data. Its recent report states that in 2005 none of the survey respondents had been required to provide structured data in response to discovery requests; by 2007 the figure had risen to 57 per cent. Typically, retrieving archived data means restoring it from tape, sometimes reconstructing old hardware configurations, and almost always redeploying developers who should be working on revenue-generating initiatives.

Clearpace has designed NParchive specifically to make meeting this growing need simpler and less costly. The software creates an immutable data store: data can be added incrementally to the archive but never changed. The store is enhanced with MD5 fingerprinting, which enables tamper detection and integration with CAS (content-addressable storage) devices.
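
The release does not describe NParchive's internals, so the following is only a minimal sketch of how MD5 fingerprinting can support tamper detection: a digest recorded when a segment is archived is later recomputed and compared. The file path and function names are hypothetical, not Clearpace's API.

    import hashlib

    def md5_fingerprint(path, chunk_size=1 << 20):
        """Return the hex MD5 digest of the file at `path`, read in 1 MB chunks."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_segment(path, recorded_digest):
        """True if the archived segment still matches the digest recorded at archive time."""
        return md5_fingerprint(path) == recorded_digest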

NParchive time-proofs the archive by tracking and managing changes to the schemas of archived information through a table versioning mechanism. In addition, powerful Query-Point-In-Time capabilities allow information to be retrieved as it existed at a specific historic date and time.
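
As an illustration of the point-in-time idea (a conceptual sketch, not NParchive's actual query interface), each archived row version can carry validity timestamps, and a query "as of" a given moment returns only the versions that were current then. The sample rows and field names are assumptions.

    from datetime import datetime

    # Two versions of the same record; valid_to is None for the current version.
    rows = [
        {"id": 1, "price": 10.0, "valid_from": datetime(2006, 1, 1), "valid_to": datetime(2007, 6, 1)},
        {"id": 1, "price": 12.0, "valid_from": datetime(2007, 6, 1), "valid_to": None},
    ]

    def as_of(rows, when):
        """Return the row versions that were in effect at the moment `when`."""
        return [r for r in rows
                if r["valid_from"] <= when and (r["valid_to"] is None or when < r["valid_to"])]

    print(as_of(rows, datetime(2007, 1, 1)))   # returns the 10.0 version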

With native queries to speed access, and automated expiry and shredding for data disposal, NParchive reduces the development and management time required to maintain and use a structured data archive. This increases the value of the archive for business analysis and cuts the cost of responding to discovery requests.
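
How expiry works in the product is not specified in the release; the sketch below simply assumes an age-based retention rule and splits archived rows into those to keep and those due for disposal. The seven-year window and field names are illustrative only.

    from datetime import datetime, timedelta

    RETENTION = timedelta(days=7 * 365)   # assumed seven-year retention rule

    def expire(rows, now=None):
        """Partition archived rows into (keep, dispose) by age against the retention rule."""
        now = now or datetime.utcnow()
        keep, dispose = [], []
        for row in rows:
            (dispose if now - row["archived_at"] > RETENTION else keep).append(row)
        return keep, dispose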

Storage consumption is growing almost exponentially, at a time when CIOs can no longer address the symptoms - slowing applications, growing power usage, rising storage costs - simply by investing in more capacity. As well as simplifying compliance and reducing its cost, NParchive delivers significant savings on recurring storage costs.

Each terabyte of data in a production database is typically replicated numerous times in hot standby and development copies, so that four to six copies of the production database exist. As a result, typical fully loaded annual storage costs equate to around $150,000 per terabyte of production data. In most databases roughly 80 per cent of the stored data is inactive, so significant expenditure goes on holding data unnecessarily in tier 1 storage. By moving the inactive data into NParchive, these costs can be reduced by 70 per cent.
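
The quoted figures can be sanity-checked with simple arithmetic; every number below comes from the paragraph above, not from measurement.

    # Figures from the paragraph above: roughly 4-6 copies of each production TB,
    # a fully loaded annual cost of about $150,000 per production TB, ~80% of the
    # data inactive, and a quoted 70% cost reduction once that data is archived.
    cost_per_tb_year = 150_000
    inactive_share = 0.80
    reduction = 0.70

    savings = cost_per_tb_year * reduction
    print(f"About {inactive_share:.0%} of each production TB is inactive; "
          f"the quoted reduction saves roughly ${savings:,.0f} per TB per year")   # ~$105,000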

NParchive delivers these savings by achieving 20x-30x compression of data using a range of sophisticated pattern de-duplication techniques. The data is stored in partitioned tree-like structures that enable highly optimised queries using standard tools or native SQL without the need to restore the original data.
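
The pattern de-duplication techniques themselves are not detailed in the release. As one simplified example of the general idea (not Clearpace's algorithm), dictionary encoding replaces repeated column values with small integer codes, which is enough to see why highly repetitive structured data compresses by such large factors.

    def dictionary_encode(values):
        """Replace repeated values with integer codes plus a lookup dictionary."""
        dictionary, codes = {}, []
        for v in values:
            codes.append(dictionary.setdefault(v, len(dictionary)))
        return dictionary, codes

    column = ["LONDON", "LONDON", "PARIS", "LONDON", "PARIS"] * 1000
    dictionary, codes = dictionary_encode(column)
    print(len(dictionary), "distinct values stand in for", len(codes), "stored rows")   # 2 vs 5000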


