From time to time, Veeam users in the forums ask for help with unexpected performance in their backup operations. Especially when it comes to I/O-intensive operations like Reversed Incremental, or the transform operation performed during synthetic fulls or the new forever forward incremental, people see their storage arrays running at low speed and backup jobs taking a long time.

I just published a dedicated white paper where I explain the different backup modes in Veeam, how they have very different I/O profiles, and how the choice of one method over another can have a great impact on final performance.

Another huge factor, often overlooked, is the stripe size of the underlying storage. Many storage arrays use a default stripe size of around 32 or 64KB. This is because they have to be a general-purpose solution, able to manage different workloads at the same time: datastores for VMware volumes, NFS or SMB shares, archive volumes. Since in many situations the different volumes are still carved out of the same RAID pool, that pool has a single stripe size value.

As you can read in the aforementioned white paper, however, Veeam uses a different and specific block value. The default, listed in the interface as "Local Storage" under Storage optimization, is 1024KB. The other values are 8MB for the 16TB+ option, 512KB for the LAN target, and 256KB for the WAN target.
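To make the mismatch concrete, here is a minimal sketch (in Python, with illustrative labels of my own) that lists the Veeam block sizes mentioned above and computes how many array stripes a single backup block spans for a typical 64KB stripe size:

```python
# Veeam "Storage optimization" settings and their block sizes in KB,
# as listed in the text above. The label strings here are illustrative,
# not the exact UI wording.
VEEAM_BLOCK_SIZES_KB = {
    "16TB+ option": 8 * 1024,   # 8MB
    "Local Storage": 1024,      # default
    "LAN target": 512,
    "WAN target": 256,
}

# A common array default stripe size, per the text (32 or 64KB).
ARRAY_STRIPE_KB = 64

for label, block_kb in VEEAM_BLOCK_SIZES_KB.items():
    stripes = block_kb // ARRAY_STRIPE_KB
    print(f"{label}: {block_kb}KB block spans {stripes} stripes of {ARRAY_STRIPE_KB}KB")
```

The point of the arithmetic: every Veeam block is many times larger than the array stripe, so a single block write touches many stripes, and a stripe size tuned for general-purpose workloads is not necessarily optimal for these large sequential backup I/Os.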