Tag Archives: Backup mode

SAP HANA Backup with Veeam

Hi,
my colleague and friend Tom Sightler created a toolset to back up SAP HANA with Veeam Backup & Replication. He documented everything in the Veeam Forum:
https://forums.veeam.com/veeam-backup-replication-f2/sap-b1-hana-support-t32514.html

Basically it follows the same approach that storage systems like NetApp use for backing up HANA: you configure pre- and post-job scripts in Veeam that make HANA aware of the Veeam backups. Log file handling is included as well (how much backup data do you want to keep on the HANA system itself?).
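To give a rough idea of what such a pre-job script does, here is a minimal Python sketch (my own illustration, not Tom's toolset) that triggers a HANA data backup to the local filesystem via hdbsql before the Veeam job runs; the instance number, the hdbuserstore key and the backup path are placeholders you would have to adapt.

#!/usr/bin/env python3
"""Minimal pre-job sketch: write a consistent SAP HANA data backup to the
local filesystem via hdbsql, so the backup files are picked up by the
following image-level Veeam backup. Illustration only, not Tom's toolset;
instance number, user store key and backup prefix are placeholders."""

import datetime
import subprocess

INSTANCE = "00"            # HANA instance number (placeholder)
USERKEY = "VEEAMBACKUP"    # hdbuserstore key with backup privileges (placeholder)
BACKUP_PREFIX = "/hana/backup/data/veeam_" + datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

def run_hdbsql(sql: str) -> None:
    """Run one statement through hdbsql and fail the job on errors."""
    subprocess.run(["hdbsql", "-i", INSTANCE, "-U", USERKEY, sql], check=True)

if __name__ == "__main__":
    # Write a consistent HANA data backup to the local filesystem.
    run_hdbsql(f"BACKUP DATA USING FILE ('{BACKUP_PREFIX}')")

A matching post-job script would then take care of the log file handling mentioned above, for example by removing older backup files and catalog entries once the Veeam backup has completed.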

In case of a DB restore, you go to HANA Studio and can access the backup data on the HANA system directly. If you need older versions, you can restore them with the Veeam File-Level Recovery wizard, or more conveniently with the Veeam Enterprise Manager self-service file restore, and then hit the rescan button in the HANA Studio restore wizard. The restored backup files are detected and you can proceed with the restore.

 

CU andy

Interview with Anton Gostev about “Agentless” Backup

Hi everybody,

as you might know, Veeam does not install backup agents on the VMs to create application-aware and file-system-consistent backups. Instead, Veeam looks into the VM and its applications and registers and starts a matching runtime process that allows application-aware backups.

We recently had an internal discussion about this topic, and Anton Gostev, Vice President of Product Management at Veeam Software, allowed me to share his thoughts and the ideas behind Veeam's unique approach.

Andreas Neufert: “Let's talk first about the definition of agents. According to http://en.wikipedia.org/wiki/Software_agent, an agent is an installed piece of software that stays on the server. Veeam instead registers (installs), starts, and unregisters (uninstalls) its runtime environment only for the duration of job processing. Anton, why do you think this is better than installed agents?”

Anton Gostev: “All the problems that cause the issue known as “agent management hell” are brought about by the persistency requirement
… (of the agents in other solutions) …

– Need to constantly deploy agents to newly appearing VMs
– Need to update agents on all VMs
– Need to babysit agents on all VMs to ensure reliability (make sure they behave correctly in the long run – memory leaks, conflicts with other software, etc.)
An auto-injected temporary process addresses all of these issues, and the server stays clean of 3rd-party code 99.9% of the time.”

Andreas Neufert: “I think we have all been at the point where we needed to install a security patch in an application and had to wait until the backup vendor released a compatible backup agent version. I can also remember having to reboot all servers because of a new version of such an agent (before I joined Veeam). But what happens if the application server/VM is down?”

Anton Gostev: “… Our architecture addresses the following two issues …
– A persistent agent (or in-guest process) requires the VM to be running at the time of backup in order to function. But no VM is running 100% of the time – some can be shut down! We are equally impacted; however, the major difference is that we do not REQUIRE the in-guest process to be operating at the time of backup (all item-level recoveries are still possible, they just require a few extra steps). This is NOT the case with legacy agent-based architectures: a shut-down VM means no item-level recoveries from the corresponding restore point.
– Legacy agent-based architectures require network connectivity from the backup server to the guest OS – rarely available, especially in secure or public cloud environments. We are not impacted, because we can fail over to network-less interaction with our in-guest process. This is NOT the case with legacy agent-based architectures: for them it means no application-aware backup and no item-level recoveries from the corresponding restore point.”

Andreas Neufert: “Everyone who operates a DMZ knows the problem: you isolate the whole DMZ from your normal internal network, but the VMs need a network connection to the backup server, which also holds data from other systems. So the Veeam approach can bring additional security to the DMZ environment. Thank you, Anton!”

Thanks for reading. Please send me comments if you want more interviews on this blog.

Cheers… Andy

vCenter connection limitation and backup in big environments

Hi Team,

Update from 2019-05-20: For some years now, the SOAP modifications in vCenter described below have no longer been needed, as Veeam caches all required vCenter information in RAM, which drastically reduces the vCenter connection count during the backup window. See the Broker Service note here: https://helpcenter.veeam.com/docs/backup/vsphere/backup_server.html?ver=95u4

My friend and workmate Pascal Di Marco ran into some VMware connection limitations while backing up 4,000 VMs in a very short backup window.

If you run a lot of parallel backup jobs that use the VMware VADP backup API, you can run into two limitations: the vCenter SOAP connection limit and the NFC buffer size limit on the ESXi side.

All backup vendors that use VMware VADP embed the VMware VDDK in their product, which provides standard API calls and helps to read and write data. So all backup vendors have to deal with the VDDK's own vCenter and ESXi connection count in addition to their own connections. The number of VDDK connections varies from VDDK version to version.

So if you try to back up thousands of VMs in a very short time frame, you can hit these limitations.

In case you hit that limit, you can increase the vCenter SOAP connection limit from 500 to 1000 as described in VMware KB 2004663: http://kb.vmware.com/kb/2004663
EDIT: In vCenter Server 6.0, the vpxd.cfg file is located at C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx
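To illustrate the kind of change that KB describes, here is a minimal Python sketch that patches vpxd.cfg. It assumes the session limit lives at config/vmacore/soap/maxSessionCount, which matches my reading of KB 2004663 – please double-check the exact element name against the KB and keep a copy of the original file before changing anything.

import xml.etree.ElementTree as ET

# Path on a Windows-based vCenter Server 6.0 (adjust for your version/appliance)
VPXD_CFG = r"C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\vpxd.cfg"

tree = ET.parse(VPXD_CFG)
root = tree.getroot()  # the <config> root element of vpxd.cfg

# Walk (or create) the <vmacore><soap> section and set the session limit.
# Element path is an assumption based on KB 2004663 - verify before use.
vmacore = root.find("vmacore")
if vmacore is None:
    vmacore = ET.SubElement(root, "vmacore")
soap = vmacore.find("soap")
if soap is None:
    soap = ET.SubElement(vmacore, "soap")
limit = soap.find("maxSessionCount")
if limit is None:
    limit = ET.SubElement(soap, "maxSessionCount")
limit.text = "1000"  # raise the SOAP session limit from 500 to 1000

tree.write(VPXD_CFG, xml_declaration=True, encoding="utf-8")
print("vpxd.cfg updated - restart the vCenter Server service to apply")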

You can also optimize the ESXi network (NBD) performance by increasing the NFC buffer size from 16384 to 32768 and reducing the cache flush interval from 30 s to 20 s, as described in VMware KB 2052302: http://kb.vmware.com/kb/2052302

Link: PernixData + Veeam Scripts for Direct SAN processing

Hi everybody…

My friend and workmate Preben created some cool scripts to use the VMware VADP Direct SAN mode together with PernixData write caching.

The problem here is that PernixData acknowledges writes in the cache and destages them to the array later, so not all data is on disk when a VADP-based backup runs in Direct SAN mode. The provided scripts simply disable write caching for the duration of the backup.

You can find the post here:
http://poulpreben.com/veeam-direct-san-backups-and-pernixdata-fvp/

Prioritisation of Veeam Backup & Replication Proxy Modes from my field experience.

Update 1: 23.05.2016 => Veeam Backup & Replication v9 + new best practices.
Hi everybody,
I just want to share a short prioritized list of Veeam Backup & Replication proxy modes, because I have received so many questions about this in the past.
VMware Backup from Fibre Channel Block Storage:
Priority 1:
For the most common VMs (90%) I would use Veeam's Direct Storage (Direct SAN) mode for backup and HotAdd (implement virtual proxies) for restore, for the best performance.
For the biggest VMs (10%) with high change rates, use the Veeam storage integration (Backup from Storage Snapshots) to optimize the VMware snapshot commit process. This feature is available for HP 3PAR StoreServ, HP StoreVirtual incl. VSA, NetApp ONTAP systems and EMC VNX(e); Nimble will follow this year. If you do not have this feature, use the standard processing described above.
As Direct SAN needs Fibre Channel access and FC passthrough to a VM is not really supported, you need physical Veeam proxy servers.
Priority 2:
If you want to use a virtual-only infrastructure, go with 10 GbE interfaces on the VMkernel ports and 10 GbE Veeam proxy servers, and use the Veeam Network (NBD) mode. This mode is limited to a maximum throughput of 40% of the VMkernel interface (across multiple parallel streams). You can use HotAdd for faster restores.
Priority 3:
Use HotAdd if you want to go with virtual proxies and there is only a 1 GbE network.
What you should not do:
Avoid HotAdd backup processing in big environments. By VMware's design it puts extra load on vCenter and significantly increases the chance that VMware loses track of its own snapshots (orphaned snapshots). Also by VMware's design, VM stuns can happen at snapshot commit. If you really want to go with it, consider ESXi-bound Veeam proxies with a special Veeam registry setting; ask Veeam Support or an SE for the design and the registry key.
VMware Backup from iSCSI Block Storage:
The priority list is the same as for FC block storage above.
As it is iSCSI, you can use virtual Direct Storage (Direct SAN) proxy servers, which should be priority 1 if you want to go with virtual Veeam proxies. However, physical servers reduce the load on your VMware hosts significantly.

VMware Backup from NFS (File) Datastores:

Priority 1:
For the most common VMs (90%) I would use Veeam's Direct Storage (the new Veeam Direct NFS) mode for both backup and restore. Direct NFS is the fastest restore method within Veeam, as it was written from scratch by Veeam and does not leverage the VMware VDDK.

For the biggest VMs with high change rates, use the Veeam storage integration (Backup from Storage Snapshots) to optimize the VMware snapshot commit process. This feature is available for NetApp ONTAP systems and EMC VNX(e) (HP 3PAR and StoreVirtual do not have an NFS option); Nimble will follow this year. If you do not have this feature, use the standard processing described above.
You can use virtual or physical servers for processing. However, physical servers offload the backup load from your hosts.
Priority 2 (or better said, “no priority”):
As there is no downside to using the Direct NFS method, I highly recommend using it. However, if you need another backup method, go with 10 GbE interfaces on the VMkernel ports and Veeam proxy servers in Network (NBD) mode. This mode is limited to a maximum throughput of 40% of the VMkernel interface. You can use Direct NFS or HotAdd for faster restores.
What you should not do (under no circumstances!):
Avoid HotAdd backup processing in ANY NFS environment. By VMware's design it puts extra load on vCenter and significantly increases the chance that VMware loses track of its own snapshots (orphaned snapshots). Also by VMware's design, VM stuns WILL happen at snapshot commit, especially within Linux VMs. If you really want to go with it, consider ESXi-bound Veeam proxies with a special Veeam registry setting; ask Veeam Support or an SE for the design and the registry key.
 

The Veeam Backup & Replication proxy mode auto-detection process works like this:

It will check Direct Storage mode (Direct NFS/Direct SAN) first, then it will try HotAdd (Virtual Appliance mode), and then it will fall back to NBD (Network mode).
 
So if you want to use 10 GbE NBD mode instead of HotAdd as the default, you have to select it manually under Veeam Backup & Replication – Backup Infrastructure – Proxy settings.
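To make that selection order easier to picture, here is a small illustrative Python sketch. It is not Veeam's actual code; the proxy dictionaries and the capability checks are hypothetical placeholders standing in for the real probes the product performs.

# Illustrative sketch of the proxy transport selection order described above.
# NOT Veeam's actual implementation; the check functions are placeholders.

def can_use_direct_storage(proxy, datastore):
    # e.g. the proxy sees the LUN (Direct SAN) or can mount the NFS export (Direct NFS)
    return datastore in proxy.get("direct_access", [])

def can_use_hotadd(proxy, datastore):
    # e.g. the proxy is a VM running on a host that can reach the datastore
    return proxy.get("is_virtual", False) and datastore in proxy.get("host_datastores", [])

def select_transport_mode(proxy, datastore):
    """Pick the transport mode in the documented priority order."""
    if can_use_direct_storage(proxy, datastore):
        return "Direct Storage (Direct SAN / Direct NFS)"
    if can_use_hotadd(proxy, datastore):
        return "HotAdd (Virtual Appliance)"
    return "NBD (Network)"  # always available as the fallback

if __name__ == "__main__":
    physical_proxy = {"direct_access": ["FC-LUN-01"], "is_virtual": False}
    virtual_proxy = {"is_virtual": True, "host_datastores": ["NFS-DS-01"]}
    print(select_transport_mode(physical_proxy, "FC-LUN-01"))  # Direct Storage
    print(select_transport_mode(virtual_proxy, "NFS-DS-01"))   # HotAdd
    print(select_transport_mode(virtual_proxy, "FC-LUN-01"))   # NBD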