
THE IMPORTANCE OF A DECLARATIVE APPROACH IN SCALING OUT DATA MANAGEMENT SYSTEMS





Admin | Tuesday, Dec. 04, 2018


JOHANNESBURG, South Africa – Companies in all industries are grappling with how to manage and effectively gain insights from the masses of data flowing through their various systems. In fact, data management has become a focal point for boardroom discussions, as data becomes the currency of our digital era.

Taking a traditional approach to backing up and protecting one’s mission-critical data may have worked in the past, but it’s not a scalable strategy for a future in which data volumes will continue growing exponentially.

A recent white paper by Rubrik explores the concept of ‘declarative’ systems (in contrast to the ‘imperative’ systems of yesteryear). Rubrik is a cloud data management company named by Gartner as a Visionary in its 2017 Magic Quadrant for Data Centre Backup & Recovery Solutions, distributed in sub-Saharan Africa by value-added distributor Networks Unlimited Africa.

“Effectively dealing with immense and diverse data volumes means we must embrace declarative principles,” says Anton Jacobsz, managing director at Networks Unlimited. “In the realm of data management, this means that administrators should simply define the desired end states (for the likes of recovery point objectives, retention, replication, and archival).

“From there, one’s data management software should have all the intelligence to translate these business goals into the technical commands that must be relayed to each system, to ultimately make good on these goals. And of course, all of this must happen in real time, at scale, and with perfect accuracy.”
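
The white paper does not publish Rubrik's actual policy syntax, but the declarative idea can be illustrated with a minimal sketch in Python. Everything below (the SlaPolicy class and its field names) is hypothetical; the point is simply that the administrator states desired outcomes rather than scripting the steps to reach them.

```python
from dataclasses import dataclass

# Hypothetical illustration only; these names are not Rubrik's API.
# The administrator declares the desired end state, and the data
# management engine decides which backup, replication and archival
# jobs must run to satisfy it.

@dataclass
class SlaPolicy:
    name: str
    rpo_hours: int            # maximum acceptable data loss window
    retention_days: int       # how long recovery points are kept
    replication_target: str   # secondary site or cloud region
    archive_after_days: int   # when data moves to cheaper archival storage

gold = SlaPolicy(
    name="gold",
    rpo_hours=4,
    retention_days=90,
    replication_target="dr-site-cpt",
    archive_after_days=30,
)

# In a declarative model, this object is effectively the whole admin
# interface: no per-system backup scripts and no manual scheduling.
```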


Symphony conductor

Jacobsz says that declarative approaches are a far cry from the traditional era – where isolated silos of software, servers, backup appliances and metadata databases were all managed independently by administrators. “This clunky, manual way of working has today become a recipe for disaster as data management demands real-time, always-on attention.”

The white paper positions Rubrik’s intelligence layer – known as Cerebro – as the autonomous conductor standing on a podium before thousands of critical systems all eager to be protected or restored as part of the data lifecycle management symphony.

It’s a fitting analogy: the orchestration involves moving data from the original point of ingest and propagating and replicating it across all the relevant locations. At the same time, the conductor must also handle the demands of compression, deduplication, and ultimately the archival and expiration of data once it reaches end-of-life.

The white paper goes on to explain that Cerebro’s intelligence can seamlessly connect the data sources (such as VMware, Microsoft or Oracle), to the various storage layers that one may have (such as Rubrik’s own Atlas platform, or external cloud platforms like AWS or Google Cloud).


“This abstracts the complexity away from the database administrators, so that everything works according to truly declarative principles,” adds Jacobsz.
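
How that abstraction layer is built is not detailed in the white paper. As a rough sketch of the general pattern, an orchestration layer can talk to any data source and any storage target through narrow interfaces, so administrators never script either side directly. The DataSource, StorageTarget and protect names below are illustrative, not Rubrik's API.

```python
from typing import Iterable, Protocol

# Generic sketch of source/target abstraction, not Cerebro's internals.

class DataSource(Protocol):
    def snapshot(self) -> Iterable[bytes]: ...      # e.g. a VMware VM or Oracle DB

class StorageTarget(Protocol):
    def write(self, chunks: Iterable[bytes]) -> None: ...   # e.g. local platform or public cloud

def protect(source: DataSource, target: StorageTarget) -> None:
    """Move a snapshot from any source to any target in the same way."""
    target.write(source.snapshot())

# Tiny in-memory example so the sketch actually runs.
class InMemorySource:
    def snapshot(self) -> Iterable[bytes]:
        return [b"block-1", b"block-2"]

class PrintTarget:
    def write(self, chunks: Iterable[bytes]) -> None:
        for chunk in chunks:
            print(f"stored {len(chunk)} bytes")

protect(InMemorySource(), PrintTarget())
```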

Fully autonomous

So, just how is it possible for data management systems to effectively ‘run themselves’ in this declarative manner?

Jacobsz says that Cerebro incorporates a Distributed Task Framework: an engine that assigns and executes tasks across clusters of data, automatically balancing loads and resources as needed. This ensures that predefined service level agreements are always upheld.
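
The article does not describe the framework's internals, but the load-balancing idea can be sketched with a toy dispatcher that always hands the next task to the least-loaded node. The Scheduler class below is purely illustrative and assumes a simple count of tasks assigned per node.

```python
import heapq
from typing import Callable

# Toy sketch of load-based task dispatch; the real Distributed Task
# Framework is not documented in the article, so details are assumptions.

class Scheduler:
    def __init__(self, nodes: list[str]) -> None:
        # min-heap of (tasks assigned so far, node), so the least-loaded node pops first
        self._heap = [(0, node) for node in nodes]
        heapq.heapify(self._heap)

    def submit(self, task: Callable[[str], None]) -> str:
        count, node = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (count + 1, node))
        task(node)   # in a real cluster this would dispatch over the network
        return node

sched = Scheduler(["node-a", "node-b", "node-c"])
assigned = sched.submit(lambda node: print(f"backup of vm-042 dispatched to {node}"))
```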

The tough task of deduplication is elegantly handled by Cerebro’s application awareness, which enables smart data reduction decisions (deduplication, compression and incremental backups) that preserve performance levels for Instant Recovery and Live Mount operations. This is very different from traditional, brute-force deduplication services, which were built primarily to ingest tons of data but had little idea what data they were ingesting.
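
For comparison, the generic content-hash style of deduplication can be sketched in a few lines: chunk the incoming stream, fingerprint each chunk, and store only chunks never seen before. The chunk size and function names below are arbitrary choices for illustration; Cerebro's application-aware approach is described as going further by deciding when to deduplicate, compress or take incrementals based on what the data actually is.

```python
import hashlib

# Minimal sketch of content-hash deduplication with fixed-size chunks.
# This illustrates the general technique, not Cerebro's algorithm.

CHUNK = 4 * 1024 * 1024     # 4 MiB chunks, an arbitrary example size
seen: set[str] = set()      # fingerprints of chunks already stored

def ingest(data: bytes) -> int:
    """Return how many bytes actually need to be written after dedup."""
    new_bytes = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:      # only store chunks we have never seen
            seen.add(digest)
            new_bytes += len(chunk)
    return new_bytes

payload = b"a" * CHUNK * 2    # two identical chunks
print(ingest(payload))        # stores one unique chunk: 4194304 bytes
print(ingest(payload))        # nothing new to store: 0 bytes
```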

Jacobsz adds that, throughout the entire data lifecycle, Cerebro enforces a strict SLA verification process, confirming that every SLA is met as data traverses environments. The results are visualised in live dashboards that give IT teams confidence that data is being managed in a secure and compliant way.
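
The verification logic itself is not spelled out in the white paper. A minimal sketch of the kind of check such a dashboard might surface, assuming an SLA counts as met when the newest recovery point is younger than its declared RPO, could look like this.

```python
from datetime import datetime, timedelta, timezone

# Illustrative compliance check only, not Rubrik's verification logic.
# An SLA is treated as met if the most recent recovery point is younger
# than the declared recovery point objective (RPO).

def sla_met(last_recovery_point: datetime, rpo_hours: int) -> bool:
    age = datetime.now(timezone.utc) - last_recovery_point
    return age <= timedelta(hours=rpo_hours)

# Example: a snapshot taken two hours ago satisfies a 4-hour RPO.
snapshot_time = datetime.now(timezone.utc) - timedelta(hours=2)
print(sla_met(snapshot_time, rpo_hours=4))   # True
```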

From mayhem to order

“Increasingly, we’re seeing the nature of data morphing,” notes Jacobsz. “In the past, it was typically about static records that just needed to be locked away somewhere safe. Physical media and servers worked in the era when data was constant and simple. Today, however, data is fluid and ever-changing. New insights and new data analytics technologies are rising to the fore – but to capture the advantages of these powerful new tools, modern cloud-native platforms are the only solution.

“Managing data means getting comfortable with this state of never-ending flux, and finding ways to generate insights and commit to data-driven business decisions, despite one’s data constantly morphing from day to day.

“Real-time management tools like Cerebro bring some order to this mayhem of modern data management, and give business leaders the confidence to know their data is securely stored, always available, and can be rendered in a variety of different formats,” Jacobsz concludes.
