This content comes from the vRetreat day 2018.
Zerto released its first product in 2011: a software-based, hypervisor-level replication product to replicate VMs between different VMware versions. The product has since evolved from a hypervisor replication product into a broader "IT Resilience platform" by adding support for more hypervisors and multi-cloud targets.
Most traditional DR platforms are, in most cases, about planning for unplanned scenarios. Now, with the whole plethora of digital transformation, planned elements such as migrations, mergers and acquisitions are becoming more common than the unplanned ones. Resilience in the Zerto framework is broken down into two areas, unplanned and planned events:

Unplanned:
- User errors
- Infrastructure failures
- Security & ransomware
- Natural disasters

Planned:
- Mergers & acquisitions
- Move to the cloud
- Datacenter consolidation
- Maintenance & upgrades

In its simplest form, this is basically reactive versus proactive responses.
The Zerto resilience platform is based on three pillars: continuous data protection, orchestration & automation, and analytics & control.
The core of the Zerto platform is continuous data protection done efficiently. It is basically a way to replicate data from A to B, or from A to both B and C.
Anything older than 30 days is delivered by the long-term retention block.
Overlaying the continuous data protection, Zerto provides orchestration and automation to let you deliver multi-cloud workload mobility without disruption. Surrounding the entire platform is analytics and control.
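To make the continuous data protection pillar concrete, here is a toy sketch (my own illustration, not Zerto code): every write at site A is fanned out to one or more targets and appended to a journal, rather than being copied in periodic snapshots. The `Site` class and block/journal layout are assumptions for illustration only.

```python
# Conceptual sketch only -- not Zerto code. It illustrates continuous data
# protection: each write at site A is streamed to one or more targets
# (A->B, or A->B and C) and also recorded in a journal of changes.
import time

class Site:
    """A replication target: current replica state plus a change journal."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}     # block_id -> data (the up-to-date replica)
        self.journal = []    # (timestamp, block_id, data) change history

    def apply(self, ts, block_id, data):
        self.journal.append((ts, block_id, data))
        self.blocks[block_id] = data

def replicate_write(targets, block_id, data):
    """Fan a single write out to every target site."""
    ts = time.time()
    for t in targets:
        t.apply(ts, block_id, data)

site_b, site_c = Site("B"), Site("C")
replicate_write([site_b, site_c], block_id=7, data=b"v1")
replicate_write([site_b, site_c], block_id=7, data=b"v2")

print(site_b.blocks[7])      # latest replica holds b"v2"
print(len(site_c.journal))   # journal retains both versions: 2
```

The journal is what distinguishes this from plain mirroring: both versions of block 7 remain recoverable, which is what enables the point-in-time recovery described later.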
Multi-Cloud, Hybrid Cloud
Zerto supports and promotes multi-cloud capabilities, as well as traditional on-premises and hybrid deployments.
With the 6.0 release, there is a single platform for all routes of delivery: hybrid, multi-cloud, private and public.
The following sections outline what each of the high-level components of ZVR 6.0 provides:
Any2any mobility – move in and out of platforms, migrate, and become mobile across the various routes of delivery.
- Azure to Azure, fail-back from AWS, public-to-public cloud replication, private-to-private, private-to-public, etc.
- Any2any: Azure region-to-region intra-cloud replication is now supported, including within Germany, or Germany to China, whilst adhering to the relevant governance.
- How does Zerto use the Azure APIs?
- For a VM running in Azure, Zerto takes an Azure snapshot, syncs it with a target, and writes all subsequent changes into a journal. This is then forwarded/replicated to a ZCA (Zerto Cloud Appliance). On premises you have the ZVM, VRA and VBA components; in the cloud a single ZCA removes this complexity.
- In Azure, block blobs are used for the journals.
- Page blobs are used for the change/replica data.
- Azure vNet peering is utilised for Azure-to-Azure replication. NOTE: some locations cannot talk to each other; for example, Germany does not support vNet peering with all regions.
- In AWS this is different from Azure, due to the lack of access to only the changed blocks.
- AWS loves data as long as it stays in AWS. Getting data out of AWS is great fun.
- Originally Zerto could automate failover to AWS but failback was manual; going forward you will be able to orchestrate failback out of AWS.
- How does it work with AWS?
- AWS has Elastic Block Store (EBS), so to integrate with AWS, Zerto released the zASA and zSAT components. These work together to allow a delta sync and replication to the ZCA – again, so that only the changes are replicated!
- The zASA and zSAT are spun up and down as required, to help prevent bill shock within AWS.
- S3 is used for storage in AWS to keep costs low.
- Only on-demand instances are supported in AWS.
- For Azure to AWS you need to pre-install your AWS components; moving to Azure is fine.
- With Azure you can have one-to-many replication. Coming from AWS it is not one-to-many, just one-to-one.
- Zerto can assist you to adopt a multi-cloud strategy.
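Since AWS did not expose a changed-blocks API at the time, the delta sync described above has to discover the changes itself. Purely as an illustration (this is not Zerto's implementation; the function names and block layout are mine), a delta sync can hash blocks on each side and ship only the blocks that differ:

```python
# Illustrative delta sync, assuming no changed-block tracking API exists
# (as with EBS here): hash every block on the source, compare against the
# hashes of the last-synced replica, and ship only the differing blocks.
import hashlib

def block_hashes(blocks):
    """Map each block id to a SHA-256 digest of its contents."""
    return {i: hashlib.sha256(b).hexdigest() for i, b in blocks.items()}

def delta_sync(source, replica):
    """Return only the blocks that changed since the replica was taken."""
    src, dst = block_hashes(source), block_hashes(replica)
    return {i: source[i] for i in source if dst.get(i) != src[i]}

replica = {0: b"boot", 1: b"data-v1", 2: b"logs"}
source  = {0: b"boot", 1: b"data-v2", 2: b"logs", 3: b"new"}

changes = delta_sync(source, replica)
print(sorted(changes))   # only blocks 1 and 3 are shipped: [1, 3]

replica.update(changes)  # applying the delta brings the replica up to date
```

The point of the sketch is the cost model: only two of four blocks cross the wire, which is why shipping "the changes only" matters so much when you pay for egress traffic.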
Zerto Analytics
- Allows you to see your entire Zerto estate across multiple clouds: private, public and hybrid. Single-pane-of-glass management, based on HTML5.
- A new feature is network analysis – understanding the impact of replication on the network and the bandwidth requirements. It also shows a 30-day network history, letting you plan, forecast, and see the rate of change and bandwidth consumption. This is key, especially when using public cloud resources, as in some instances you will be paying for ingress and egress traffic!
- You can also monitor IO utilisation on the platform, for environments that may charge based on transactions, or where disk performance impact matters.
- Zerto now utilises single sign-on, but this is currently not MFA-aware.
- The Zerto Analytics app is a read-only version, so that your son can't click failover on your behalf by mistake!
- Zerto currently doesn't do predictive forecasting based on the analytics. But watch this space…
- Zerto doesn't do costing analysis today either, but watch this space…
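As a back-of-envelope example of how the 30-day network history could feed capacity planning (the calculation below is mine, not a Zerto feature, and the sample figures are made up):

```python
# Hypothetical planning calculation over a 30-day history of daily
# replicated gigabytes -- the kind of data the network history exposes.
# The average rate sizes steady-state bandwidth; the peak day tells you
# whether the link can keep replication from falling behind.
daily_gb = [40, 42, 38, 55, 41] * 6    # stand-in for 30 days of samples

avg_gb = sum(daily_gb) / len(daily_gb)
peak_gb = max(daily_gb)

def gb_per_day_to_mbit_s(gb):
    """Convert GB/day to a sustained Mbit/s requirement."""
    return gb * 8 * 1000 / 86400       # 1 GB = 8000 Mbit; 86400 s/day

print(round(gb_per_day_to_mbit_s(avg_gb), 1))    # 4.0 Mbit/s average
print(round(gb_per_day_to_mbit_s(peak_gb), 1))   # 5.1 Mbit/s on the peak day
```

Multiplying the average daily gigabytes by a per-GB egress price would give the kind of costing analysis the notes say Zerto didn't yet offer.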
JFLR (Journal File Level Recovery)
- The architecture now supports Linux as well as Windows!
- Journal overview
- Zerto creates checkpoints every 5 seconds. You can recover a file created one minute ago but lost 10 seconds ago. This might assist you in recovering from ransomware, perhaps?
- Maybe use Zerto for snapshot functionality that isn't fully possible in vSAN, replicating the functionality of SAN-based snapshots.
- The 2TB file system size limitation has been removed.
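A toy model of journal file-level recovery (conceptual only; Zerto's actual journal format is not public): with checkpoints every 5 seconds, recovery means replaying the journal up to the last checkpoint before the bad event, such as a ransomware encryption.

```python
# Toy model of journal file-level recovery: the journal records file
# versions with timestamps; recovery returns the newest version at or
# before the chosen point in time. The 5-second spacing mirrors Zerto's
# checkpoint interval; the file contents are invented for illustration.
journal = [
    (0,  "report.txt", "draft"),
    (5,  "report.txt", "final"),
    (10, "report.txt", "ENCRYPTED-BY-RANSOMWARE"),
]

def recover(journal, path, at_time):
    """Return the newest version of `path` at or before `at_time`."""
    version = None
    for ts, name, data in journal:
        if name == path and ts <= at_time:
            version = data
    return version

# Roll back to the checkpoint just before the ransomware hit at t=10.
print(recover(journal, "report.txt", at_time=5))   # -> final
```

Because checkpoints are only seconds apart, the recovery point can sit just before the corruption, which is exactly the ransomware use case mentioned above.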
Continued scalability – grow and shrink with your needs.
- Supports 10,000 VMs within each ZVM/vCenter pair; the same applies to Hyper-V, etc.
Cloud service providers can now upgrade customers' nodes without the customers having to be involved. This is delivered centrally, as part of a potential managed service.
Release cycles are typically six months.
Zerto are trying to be as open as possible, so people can link their existing orchestration layers or analytics tools into the Zerto platform. The APIs are proven internally first: for example, the analytics API was tested by Zerto and used as part of the product build, and once they were happy with its stability and consistency, they released it as a public API.
Licensing is based on two different editions, on a per-VM model:
- ZVR (one-to-one)
- ZECE (multi)
If you can make ZertoCON, it is worth looking at attending.
Overall, I personally like the Zerto platform, especially due to the open framework they try to work to, but also due to their vision that some customers may need on-premises coverage, multi-cloud, etc. For me, this covers a lot of the bases someone would look at to deliver resiliency into their datacenter services.
Many thanks to @zerto for this session.