
PART 1 – Houston, we have a problem! And it is called capacity!

The capacity problem… As you may have gathered, I am a bit of a film/TV junkie, and this week I remember seeing some clips from Apollo 13. I enjoy settling down and watching a film or TV programme no matter what the genre; it is my personal escapism. Unfortunately for me, when I watch a film I am still thinking about technology and work, which leads to me creating these blogs. Is this a disease or an obsession? I don’t know, but I enjoy what I do and I hope it isn’t contagious!

This blog is the first in a three-part series addressing why businesses are struggling to keep up with their storage systems in three areas – capacity, performance and manageability – and how each of these is changing. This post covers the first of them: capacity.

The evolution of data and its impact on storage

A number of customers I have spoken to over the past few years have been interested in better ways to predict data growth, and in working out what size and type of storage system (NAS/SAN) they require. They predictably keep buying shelves which are consumed almost instantly. Sometimes this is down to how the system has been configured or tiered, but generally the underlying question is:

“How can I size and predict my storage system accurately, and will it still be current in 5 years?” This is surely an impossible question to answer; re-phrased, with the pace of technology change we can no longer predict that far into the future. If we think back 5 years almost to the month, Apple had just launched the WiFi-enabled iPad 1 – scary, isn’t it?
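To illustrate why a 5-year prediction is so fragile, here is a back-of-the-envelope sketch in Python. The starting capacity and the growth rates are entirely made up; the point is how quickly a small difference in the assumed annual growth rate compounds over a long sizing horizon.

```python
# Back-of-the-envelope: how a small difference in the assumed annual
# growth rate compounds over a 5-year sizing horizon.
# All figures are illustrative, not taken from any real environment.

start_tb = 50  # hypothetical capacity consumed today (TB)
years = 5

for annual_growth in (0.20, 0.30, 0.40):  # plausible-but-uncertain growth rates
    projected = start_tb * (1 + annual_growth) ** years
    print(f"{annual_growth:.0%} growth/yr -> {projected:.0f} TB after {years} years")

# 20% growth/yr -> 124 TB after 5 years
# 30% growth/yr -> 186 TB after 5 years
# 40% growth/yr -> 269 TB after 5 years
```

A 20-point difference in the assumed growth rate more than doubles the 5-year answer, which is exactly why the question above is so hard to answer honestly.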

The big problem is that data consumption is itself a moving target: the way we use data, whether personal or corporate, has evolved in both how we consume it and how we generate it. Most recreational or workplace tasks now involve some form of data capture or backend system. For example, I am a keen cyclist, and over the past few years I have moved from tracking my route, speed and distance via Google Maps (or guessing) to an automated tracking system using Strava or my Garmin Forerunner watch. These systems monitor everything about my journey from the moment I start the timer, and of course all that data needs to be stored somewhere accessible!

You may have transactional data, machine data, unstructured data and “Big Data” (cringe), but the real question is how you are going to structure it next with regards to capacity, performance, tiered disks, maximum workloads and throughput.

A major consideration around storage and capacity is backup and retention periods: the world of pain that is having to keep data forever, which is now an expectation within some organisations from the next-generation workforce. How can you afford to keep this data backed up and retained for months, years, maybe decades? If you consider the internet, this could mean storing data for centuries. Imagine the storage in use, but more importantly the access!
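To put some rough numbers on that, here is a simple sketch (again in Python, with invented figures) of how a retention policy multiplies the storage that a single data set consumes, assuming plain full backups on a grandfather-father-son style schedule.

```python
# How a retention policy multiplies the storage one data set consumes.
# Assumes simple full backups on a grandfather-father-son style schedule;
# every figure here is hypothetical.

primary_tb = 10  # size of the data set being protected (TB)
dailies, weeklies, monthlies, yearlies = 14, 8, 12, 7  # copies retained

copies = dailies + weeklies + monthlies + yearlies
backup_tb = primary_tb * copies
print(f"{copies} retained copies -> {backup_tb} TB of backup storage "
      f"for a {primary_tb} TB data set (before dedupe/compression)")
# 41 retained copies -> 410 TB of backup storage for a 10 TB data set ...
```

Deduplication and incremental backups soften this considerably in practice, but the underlying multiplier is why “keep everything forever” is such an expensive expectation.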

The scalable future

My background was typically within the EUC space of the IT industry, but I found myself drawn to data centre design, virtualisation and storage, as all of these are utilised within an EUC solution; this gave me a natural career progression to becoming a solutions architect.

Since my career started, I have utilised EMC, HP, Dell, IBM and NetApp storage systems in some shape or form, but one solution that I keep going back to is the NetApp FAS range, especially with its clustered Data ONTAP (cDOT) operating system that was released last year.

At a high level, cDOT allows NetApp to cluster its FAS storage systems. This gives three major benefits over the previous Data ONTAP operating system, but the key one for this first blog is seamless scalability: the new operating system supports a scale-up and scale-out methodology for storage, with the capability to move data to new disks for growth with no disruption. In theory this allows IT leaders to invest in a storage solution that can grow to sizes that may not have been possible to them before, with minimal impact on manageability and BAU functionality. The cDOT approach is available across the entire NetApp FAS range, which means smaller businesses that anticipate rapid growth can benefit, plan and estimate for the future too!

Exploring storage alternatives

IT leaders are also exploring alternative solutions that allow scalability to be funded from OPEX rather than CAPEX.

NetApp offer an On Demand Advantage model to assist IT leaders with this. It is a pay-as-you-grow approach: you pay an OPEX charge on the base system you require, with additional shelves and capacity provided from day one that you pay for on an on-demand pence-per-Gigabyte basis. If you utilise the extended system capacity for a longer period of time, NetApp will adjust your monthly cost to accommodate that growth. This is favourable for businesses that expect rapid data growth, or that would like to offer a corporate Dropbox-style solution that may require an unknown amount of capacity.
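The mechanics of a model like this are simple enough to sketch. The base charge, included capacity and pence-per-GB rate below are all hypothetical (not NetApp’s actual pricing), purely to show how the monthly bill tracks consumption:

```python
# Sketch of the pay-as-you-grow idea: a fixed OPEX charge for the base
# system, plus an on-demand rate for capacity consumed above it.
# The rate and capacities are invented for illustration only.

base_monthly_gbp = 2000  # hypothetical monthly charge for the base system
included_tb = 100        # capacity covered by the base charge (TB)
pence_per_gb = 3         # hypothetical on-demand rate

def monthly_cost(consumed_tb: float) -> float:
    """Base charge plus pence-per-GB for capacity above the included amount."""
    extra_gb = max(consumed_tb - included_tb, 0) * 1000  # TB -> GB (decimal)
    return base_monthly_gbp + extra_gb * pence_per_gb / 100  # pence -> pounds

for tb in (80, 120, 160):
    print(f"{tb} TB consumed -> £{monthly_cost(tb):,.0f}/month")

# 80 TB consumed -> £2,000/month
# 120 TB consumed -> £2,600/month
# 160 TB consumed -> £3,800/month
```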

Another alternative is cloud-based storage: again an OPEX model, but one that lets you consume capacity as a service, like electricity or water. The benefit of a cloud-based model is that you request what you need and the provider generally manages the backup functionality, maintenance, lifecycle and performance. The only problem with this model, in my opinion, is that most cloud-based storage solutions are geared towards providing capacity rather than performance, unless you are looking at a DRaaS or IaaS approach.

Conclusion

My conclusion is straightforward: nobody knows how quickly your data will grow. Yes, you can predict based on past years if you have been capturing that data, but what new system(s) or processes might be implemented that increase or decrease that capacity requirement in the next 3, 6 or 12 months? DO NOT size for 5 years. Purchase a system that gives you what you need now plus 20-30% growth, but that allows you to scale in the future with no massive overhaul (a rough worked example follows below). My other opinion is that if you are worried, or cannot get a good indication of growth, then implement a storage-as-a-service model, whether that is an on-demand advantage model or a cloud-based offering. For example, backups can take up a considerable amount of capacity and staff time (especially if they fail), so why not palm this off to a Backup-as-a-Service provider? Move that operational risk and testing to someone else, freeing up time and capacity.
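As arithmetic, that rule of thumb looks something like this; the current footprint, growth estimate and headroom figure are illustrative only:

```python
# The sizing rule of thumb as arithmetic: buy for today's footprint plus
# expected 12-month growth, with 20-30% headroom on top, and rely on
# scale-out beyond that. All numbers are illustrative.

current_tb = 50
growth_12mo = 0.25  # estimated growth over the next 12 months
headroom = 0.30     # safety margin on top of the 12-month projection

buy_tb = current_tb * (1 + growth_12mo) * (1 + headroom)
print(f"Initial purchase target: ~{buy_tb:.0f} TB usable")
# Initial purchase target: ~81 TB usable
```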

The next blog in this series will cover my views on performance sizing and planning for the future.



