Posted by Intent Media 02 Dec

RAID0 and stripe sizes

This will hopefully be the first in a series of posts about a recent project we undertook: improving our build cycle from over an hour and a half to 30 minutes.

The project touched the full vertical build stack – hardware and hardware virtualization, CI configuration and software. Every improvement in every slice of this vertical contributed to a speed increase of some kind.

Today, the topic is RAID0 and disk stripe size… Why? The answer is simple: your hardware could be a speed bottleneck.

Let’s start off with a couple of definitions:

Q. What is RAID0?

A. “RAID 0 offers striping with no parity or mirroring. Striping means data is “split” evenly across two or more disks. For example, in a two-disk RAID 0 set up, the first, third, fifth (and so on) blocks of data would be written to the first hard disk and the second, fourth, sixth (and so on) blocks would be written to the second hard disk. RAID 0 offers very fast write times because the data is split and written to several disks in parallel. Reads are also very fast in RAID 0. In ideal scenarios, the transfer speed of the array is the transfer speed of all the disks added together, and limited only by the speed of the RAID controller.” – http://www.diffen.com/difference/RAID_0_vs_RAID_1
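The round-robin block placement described in the quote can be sketched in a few lines of Python. This is purely illustrative – real controllers stripe raw sectors, not Python lists:

```python
def stripe_layout(num_blocks, num_disks):
    """Which disk each logical block lands on under RAID0's round-robin striping."""
    return [block % num_disks for block in range(num_blocks)]

# Two-disk RAID0: the first, third, fifth blocks go to disk 0, the rest to disk 1.
print(stripe_layout(6, 2))  # [0, 1, 0, 1, 0, 1]
```

Because consecutive blocks land on different disks, a large sequential transfer can be serviced by all members in parallel – which is where the quoted speed claim comes from.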


The article above explains the performance gains that RAID0 offers – it’s pretty light reading…

Q. What is the Stripe Size?

A. “A stripe is the smallest chunk of data within a RAID array that can be addressed. People often also refer to this as granularity or block size… The stripe size also defines the amount of storage capacity that will at least be occupied on a RAID partition when you write a file. For example, if you selected a 64 kB stripe size and you store a 2 kB text file, this file will occupy 64 kB.” – http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS,1735-4.html
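The space-occupancy rule from the quote is just rounding up to the next stripe boundary – a quick sketch:

```python
import math

def occupied_kb(file_kb, stripe_kb=64):
    """Smallest multiple of the stripe size that can hold the file."""
    return math.ceil(file_kb / stripe_kb) * stripe_kb

print(occupied_kb(2))    # 64: the 2 kB text file from the quote occupies a full stripe
print(occupied_kb(130))  # 192: three 64 kB stripes
```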


The stripe size article has many great tidbits – in particular the performance benchmark graphs.

So now that we understand RAID0 and stripe sizes, let’s dive in…

Our current CI bare metal machines are very similarly specced: 32GB of RAM and 2 x 128GB HDDs in RAID0. We use OpenStack to manage our virtualization, and each bare metal machine is normally virtualized into 4 separate VMs.

As part of this project, we added a new bare metal machine with almost identical specs. We were pretty excited to add 4 VMs to the CI build cluster – until, under load, the new VMs turned out to be twice as slow as the existing ones.

We were unsure of the cause, and we clutched at straws for a little while until we bit the bullet and installed Munin. Munin proved invaluable in pinpointing the likely cause – a big thumbs up. So what did Munin tell us? Our bottleneck was IO.
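Munin graphs kernel counters over time; for a one-off look at the same signal, the iowait counter can be read straight from /proc/stat (a Linux-only sketch – a value that climbs quickly under load points at an IO bottleneck):

```python
# The aggregate "cpu" line in /proc/stat lists, in jiffies:
# user nice system idle iowait irq softirq ...
with open("/proc/stat") as f:
    fields = f.readline().split()

assert fields[0] == "cpu"
print("iowait jiffies since boot:", int(fields[5]))
```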

“But the specs of the hardware are the same as the other machines, which are performant?!” – This statement was almost true. As it turned out, the HDDs were a newer model than those in the existing machines. The specs were slightly different, but that difference turned out not to matter.

What did we try?

– Benchmarking the new RAID array against an existing one – slower

– Benchmarking each individual HDD – almost no difference (despite the different HDD models)
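The post doesn’t record which benchmark tool we used, but a crude sequential-write check along these lines gets you most of the way (the path and sizes are placeholders – point it at a file on the mount you want to measure; the fsync is a rough stand-in for what dd’s oflag=direct does):

```python
import os
import time

def write_throughput_mb_s(path, total_mb=64, block_kb=1024):
    """Crude sequential-write benchmark: MB/s for one large streaming write."""
    block = b"\0" * (block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so the page cache doesn't flatter us
    elapsed = time.perf_counter() - start
    os.unlink(path)
    return total_mb / elapsed

# Run once per array and compare, e.g.:
# print(write_throughput_mb_s("/mnt/raid0/bench.bin"))
```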

Clearly, the problem was with the RAID array… At this point we brushed up on our RAID and stripe size knowledge. We were headed in the right direction – checking the stripe size for the new machine, we found that it was set to 256kB.

Almost all of the work that our machines do involves many small reads/writes, so a stripe size of 256kB seemed a little out of the ordinary… A clue in our hunt for the IO bottleneck.

Our RAID setup is hardware RAID (we won’t discuss the difference between hardware and software RAID here…), which meant that in order to change the stripe size we had to rebuild the machine – the configuration option lives in the BIOS, and once it’s set, it’s set.

So we set it to 128kB, rebuilt the machine and ran our benchmarks once more – BOOM! The culprit was found…

We tried several other stripe sizes – 4kB, 32kB and 64kB – and eventually settled on 64kB as it was the most performant. We found no real difference between the sizes at or below 64kB, but a big difference between 64kB and 256kB.
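One way to picture how stripe size interacts with the IO pattern is to count how many member disks a single contiguous request spans. This is a sketch of the geometry only – real performance also depends on queue depth, seek patterns and the controller:

```python
def disks_touched(offset_kb, size_kb, stripe_kb, num_disks=2):
    """How many member disks one contiguous request spans in a RAID0 array."""
    first = offset_kb // stripe_kb
    last = (offset_kb + size_kb - 1) // stripe_kb
    return len({stripe % num_disks for stripe in range(first, last + 1)})

print(disks_touched(0, 128, stripe_kb=64))   # 2: the request spans both disks
print(disks_touched(0, 128, stripe_kb=256))  # 1: it fits inside a single stripe
```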

So, if you’re planning on setting up a machine with a RAID0 array – and there are many reasons why you would want to – remember that the purpose of the machine is very important, and so is the stripe size. 🙂

Adrian CB
Software Engineer
