Discovering Hard Disk Physical Geometry through Microbenchmarking
Modern hard drives store an incredible amount of data in a small space, and are still the default choice for high-capacity (though not highest-performance) storage. While hard drives have been around since the 1950s, the current 3.5″ form factor (actually 4″ wide) appeared in the early 1980s. Since then, the capacity of a 3.5″ drive has increased by about 10⁶ times (from 10 MB to about 10 TB), sequential throughput by about 10³ times, and access times by about 10¹ times. Although the basic concept of spinning magnetic disks accessed by a movable stack of disk heads has not changed, hard drives have become much more complex to enable the increased density and performance. Early drives had tens of thousands of sectors arranged in hundreds of concentric tracks (sparse enough for a stepper motor to accurately position the disk head), while current drives have billions of sectors (with thousands of defects), packed into hundreds of thousands of tracks spaced tens of nanometers apart.
Beyond just the high-level performance (throughput and seek time) measurements, which drive characteristics can be characterized using microbenchmarks? I had originally set out to detect the number of platters in a disk without opening it up, but on modern disks this simple-sounding task requires measuring several other properties before inferring the count of recording surfaces. Characterizing disk drive geometry has been done in the past [1, 2], and the algorithms I used aren't very different. However, older algorithms often make assumptions that are no longer true on modern drives. For example, the Skippy [2] algorithm (a fast algorithm to measure the number of surfaces, cylinder switch times, and head switch times) no longer works on modern drives, because it assumes one particular ordering of tracks onto multiple platters that is no longer used on modern disks (that multiple head switches occur before a seek to the next cylinder).
Hard disk drives store data on a stack of rotating magnetic disks. Data is written in concentric tracks. A stack of read/write heads moves (radially) across the disks to position the head over the desired track. There are typically two heads per platter (one for each side), and the entire stack of heads moves together as a single unit. Reading data involves moving the disk head to the desired track (a seek), waiting until the beginning of the desired data passes under the disk head, then continuing to read sequentially until either all of the requested data is read, or the end of the track is reached, at which point the head must be moved to the next track. A hard drive's "geometry" describes how data is organized into platters, tracks, and sectors. Historically, this was described using three numbers: Cylinders (number of concentric rings from outside to inside), Heads (number of recording surfaces, or the number of tracks per cylinder), and Sectors per track, leading to the well-known acronym CHS. The capacity of a hard drive in sectors is simply C×H×S. Today, C and S are variable and only H is still constant. The number of tracks is not necessarily the same on each recording surface, and the number of sectors per track varies across the disk (more sectors in the longer outer tracks than the inner tracks).
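The C×H×S arithmetic is trivial but worth making concrete. A minimal sketch (the geometry below is the classic ATA-era 8.4 GB CHS limit, not a drive from this article):

```python
# Historical CHS capacity: cylinders × heads × sectors/track × bytes/sector.
def chs_capacity_bytes(cylinders: int, heads: int, sectors_per_track: int,
                       sector_size: int = 512) -> int:
    return cylinders * heads * sectors_per_track * sector_size

# The classic ATA CHS limit: 16383 cylinders, 16 heads, 63 sectors/track.
print(chs_capacity_bytes(16383, 16, 63))  # 8455200768, i.e. ~8.4 GB
```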
This article describes several microbenchmarks that try to extract the physical geometry of hard disk drives, and a few other related measurements. These measurements include rotation period, the physical location (angle and radius) of every sector, track boundaries, skew, seek time, and some observations of defective sectors. I use these microbenchmarks to characterize a variety of hard drives from 45 MB (1989) to 5 TB (2015). There is no attempt to characterize other important performance factors such as caching. The remainder of this article begins with a background on hard drive geometry. It then describes the collection of microbenchmarks, starting with a basic read access time measurement and building towards increasingly complex algorithms. The second part of the article presents microbenchmark measurements for each of the 17 drives that were tested.
Summary
- Background: Hard drives consist of spinning disks and a stack of heads. Data is organized into recording surfaces (2 sides per platter), tracks, and sectors. "Cylinders" no longer exist in drives newer than around 2000.
- What can be measured: RPM, angular position of every sector, and seek times, by timing specific sequences of sector reads. These basic measurement methods can then be used to find track boundaries, how tracks are arranged on a surface, and the number of surfaces.
- Access time: Of the drives I tested, a full-stroke seek takes 1.3 to 3.6 revolutions.
- Heads accelerate slowly: Only a few tracks are reachable within the first revolution.
- Short stroking gives limited reduction in seek times because even short seeks take a relatively long time.
- Seek time is non-trivial to measure.
- A seek time plot can be used to observe automatic acoustic management (AAM). AAM slows down long-distance seeks to reduce noise, but not short-distance seeks.
- Track boundaries can be found by searching the disk for track skew. Newer disks use different densities (track sizes) on each surface.
- Track density and bit density can be estimated by knowing track count and size. On the newest drive I have, the average track pitch is 80 nm and an average bit is 17 nm in length.
- Combining the seek profile and track sizes together usually reveals the track layout.
- There is a large variety of track layouts. Old drives had "cylinders" (multiple head switches occur before a seek to the next track), but new drives use groups of adjacent tracks before changing heads.
- Track skew can be measured using track boundary information.
- There is more than one type of skew. A cylinder, serpentine, or zone change usually uses a bigger skew than adjacent tracks do.
- Track skew is usually constant from beginning to end of the drive, but not on the Maxtor 7405AV. Track skew is usually the same on every recording surface, but not on the Seagate ST1.
- Combining the above tools can find and visualize defective sectors. Most disks have holes of defective sectors, while some skip over entire tracks.
- Microbenchmarking is hard. Despite much effort, my algorithms don't work flawlessly.
- Measurement results for 17 hard drive models from 45 MB to 5 TB are on Page 2.
Background: Hard drive geometry
Sectors, tracks, and cylinders. A cylinder is the collection of all tracks at the same radius (6 tracks per cylinder shown here, on both sides of three platters).
As seen by software, a hard drive appears as a big block of sectors, traditionally 512 bytes each (now 4,096 bytes), with little information about the physical location of the sectors. For example, a 300 GB drive might have 585,937,500 512-byte sectors, numbered 0 through 585,937,499. Some early hard drive interfaces required the drive controller on the host computer to know the physical layout of the disk (because the controller sent commands to move the disk head). IDE hard drives (Integrated Drive Electronics, mid 1980s) finally integrated the disk controller into the drive. The integrated disk controller translates logical (as seen by software) sector numbers into physical locations, which presents a simple block-of-sectors interface to the host computer while allowing much more complex physical layouts. Unfortunately, for software compatibility reasons, the sector number was still encoded as a CHS triplet (three numbers, but unrelated to the true number of cylinders, heads, or sectors/track of the drive) for many more years, until LBA (logical block addressing, encoding a sector number as one number) became common.
While logically just a big block of sectors, sectors, tracks, and heads (or recording surfaces) still physically exist. There is just no easy way for software to learn about them. This section gives some basic definitions for these physical features. Many readers may already be familiar with these.
Sector
Data is stored in blocks of equal size. A sector is the smallest unit of data that can be read or written to a disk. 512-byte sectors have been standard since the 1980s, while new drives (around 2011) use 4096-byte sectors (branded as Advanced Format).
Sectors have additional metadata that is also written to the disk surface (such as the sector number and error correction codes). On drives using embedded servo (all non-ancient drives), there are also servo patterns on the disk that are used to position the disk head. All of this occupies space on the disk surface but is invisible to the host.
Tracks
A track is a circle of consecutive sectors located on one disk surface along one revolution of the disk. Reading sectors within a track is done by having the head follow the track while the disk rotates. Crossing a track boundary requires moving the disk head to the next track (a track-to-track seek) or switching to a different head to read a track from a different disk surface (a head switch). Prior to zone bit recording, every track on a disk was the same size (number of sectors). Zone bit recording packs more sectors into the physically longer outer tracks and fewer into the shorter inner tracks. Because one track (regardless of length) is read per revolution, hard drives have higher throughput near the beginning of the drive. Track size can also vary between recording surfaces, because recording surface and head quality vary even within a single drive.
Hard drives (like floppy disks) use concentric circular tracks, unlike CDs and DVDs which use a single spiral track.
Cylinders
A cylinder is the collection of tracks on multiple surfaces that are located at the same radius (if a track is a circle, then a stack of circles of the same diameter forms a cylinder). On older drives, tracks on different surfaces were aligned so that accessing tracks within the same cylinder only required switching heads (a faster electrical operation) but not moving the heads (a slower mechanical operation). Cylinders are no longer meaningful on modern drives. With increased track density, tracks on different recording surfaces are not aligned well enough to form cylinders, and a head switch requires a larger head movement than moving to an adjacent track on the same surface, which makes a head switch slower than moving to an adjacent track.
Zones

Disk throughput benchmark. About 20 zones are visible. Outer tracks have higher throughput than inner tracks, and throughput is constant within a zone.
Because the outer tracks are physically longer, more sectors can be packed into outer tracks than inner tracks. For simplicity, adjacent tracks are grouped into zones where every track within a zone has the same track size (number of sectors per track). Zone bit recording gives the familiar throughput curve where outer tracks have higher throughput than inner tracks, with throughput decreasing in discrete steps.
Traditionally, zones were thought of as groups of cylinders, where all tracks on all surfaces in the zone had the same size. Because track size on modern drives can vary between recording surfaces, on modern drives it only makes sense to think of a zone as a group of adjacent tracks on the same recording surface.
Track Skew

Left: No skew. Every track starts at the same angular position. There is one wasted revolution after every track because it takes non-zero time to move to the next track. Right: Skew of 72° (1/5 revolution). After finishing a track, only 1/5 of a revolution is wasted if the head can move to the next track within that time.
Sequential accesses read a track over one full revolution, move the head to the next track, then read for another revolution. If all tracks started at the same angular position, the non-zero time to move to the next track would cause the head to miss the beginning of the next track by the time the head arrives, causing almost one full revolution to be wasted. To mitigate this, each track is skewed so that it starts somewhat later than the previous track, giving the disk head enough time to move from the end of the previous track to the start of the next track before the first sector of the next track appears under the head. With a proper choice of track skew, only a fraction of a rotation is wasted between tracks. The drives I tested had skews from 6% to 36% of a rotation, implying that sequential accesses actually spend 74–94% of the time reading data.
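The 74–94% figure follows directly from the skew: each track costs one revolution of reading plus a skew-sized gap. A quick sketch of the arithmetic (not the measurement code used in this article):

```python
# Fraction of time spent reading data during a sequential scan, assuming each
# track takes one revolution to read plus `skew_fraction` revolutions lost
# moving between tracks.
def sequential_read_fraction(skew_fraction: float) -> float:
    return 1.0 / (1.0 + skew_fraction)

for skew in (0.06, 0.36):  # the range of skews measured in this article
    print(f"skew {skew:.0%}: reading {sequential_read_fraction(skew):.1%} of the time")
```

A 6% skew gives about 94% of the time reading; a 36% skew gives about 74%.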
Track layouts
Track layouts. Traditional "head-first" layouts arrange tracks into cylinders so that sequential accesses will switch heads and access all surfaces before moving the heads to the next cylinder. Newer "seek-first" layouts first move the head to the adjacent track for some distance before switching to the next surface. There are many more possible variations that are not shown.
If a hard drive only has one recording surface, there are two reasonable ways to arrange tracks onto the surface: outside to inside, or inside to outside. When a drive has multiple recording surfaces, there are many more ways to arrange the tracks onto surfaces. Prior work has given names to some of these arrangements [3, 1], but there are more layouts in common use. I will attempt to extend the taxonomy to cover more of the common layouts.
After placing the first track somewhere on (the outside diameter of) the disk, there are two obvious strategies for placing subsequent tracks. Subsequent tracks could first be placed on different recording surfaces at the same radius (cylinder) before seeking to the next radius, or a group of tracks could first be placed on the same recording surface before switching surfaces. Traditionally (old drives), switching heads was faster than moving the heads, so the first option was chosen (consecutive tracks would switch heads first before changing radius). On newer drives where head switches are slower than seeking to an adjacent track, groups of consecutive tracks are placed on the same surface before moving to the next surface.
This choice leads to two classes of track layouts, which I will call "head-first" (multiple head switches before a seek) and "seek-first" (multiple seeks before a head switch) layouts.
There are two head-first layouts in common use. One layout cycles through all surfaces in the same order for each cylinder (forward, also called traditional), while the other reverses the order of surfaces for each cylinder (alternating, also called cylinder serpentine). By symmetry, there are two similar layouts that order cylinders from inside to outside, but I have not seen these used.
The seek-first track layouts arrange groups of tracks on the same surface before switching heads. The group size is typically much smaller than the entire surface, usually hundreds to thousands of tracks, and the group size is usually fairly regular. This group of tracks is sometimes also called a serpentine, derived from the name given to one of these layouts ("surface serpentine").
Because of the use of serpentines (as opposed to filling an entire surface before changing surfaces), there are four common seek-first layouts. First, tracks within serpentines can always be ordered outside to inside (forward seek direction), or the order can be reversed on alternating surfaces (alternating seek direction: outside to inside on one surface, inside to outside on the next). There could also be layouts that use a reverse seek direction, but these were never observed. Second, similar to head-first layouts, the order in which surfaces are used can repeat for each group of serpentines (forward surface order), or the order can reverse after each group (alternating surface order). These two properties combine to create four options. Two of these four were given names in prior work: alternating-forward is also called surface serpentine, and forward-forward is also called hybrid serpentine.
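To make the taxonomy concrete, here is a sketch of one of the four seek-first layouts (forward-forward) as a mapping from logical track number to physical location. The surface count and serpentine size below are made up for illustration; real drives vary both:

```python
# Map a logical track number to (surface, physical track on that surface) for
# a seek-first "forward-forward" layout: groups of SERP tracks fill one surface
# before switching to the next, surfaces are always used in the same order, and
# tracks always run outside to inside. Parameters are hypothetical.
SURFACES = 4
SERP = 1000  # tracks per serpentine

def forward_forward(logical_track: int) -> tuple:
    group, offset = divmod(logical_track, SERP)
    band, surface = divmod(group, SURFACES)   # radial band, then surface within it
    return (surface, band * SERP + offset)

print(forward_forward(0))     # (0, 0)
print(forward_forward(1000))  # (1, 0): head switch after one serpentine
print(forward_forward(4000))  # (0, 1000): back to surface 0, next radial band
```

The alternating variants would reverse the track order or the surface order on every other group; only the `divmod` bookkeeping changes.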
This classification is not exhaustive. For example, the Western Digital S25 uses a seek-first layout with forward seek direction, but the serpentine size and the sequence of surfaces do not follow a regular pattern. This layout cannot be classified into any of the six layouts described above.
Prior Work
There have been many previous attempts at characterizing the properties of hard drives using microbenchmarks. This article makes another attempt because hard drives have changed over the years, and there has not been an attempt to make detailed measurements of so many drives spanning so many years.
Many of the prior works had practical applications in mind (as opposed to pure curiosity), and were willing to trade measurement speed for a small loss in accuracy. This article is the first I know of that attempts to make detailed measurements such as the angular position of every sector, or the size and skew of every track on a drive. Although the algorithms are not fundamentally different, prior work likely avoided exploring these because these measurements can take many hours and there is little practical benefit for applications such as performance modelling or improving data layouts in filesystems.
Many of the earlier works relied on the SCSI command set to gather information [4, 5]. This included using the SCSI Send Diagnostic command to translate logical block addresses to physical locations, and using the Seek command to measure seek (not access) times. These methods do not generalize to ATA drives, and rely on drives to implement these commands, to implement them correctly, and to provide information that is accurate. The existence and accuracy of these commands are not guaranteed because they are not essential to the operation of a drive, which only needs to read and write. Microbenchmarks that only use reads or writes are harder to create, but avoid many of these limitations.
Some attempts at "black-box" microbenchmarks resulted in algorithms that worked on disks of their time, but no longer work on modern drives [2]. The Skippy algorithm in particular assumed that drives used a traditional head-first track layout.
The algorithms used here are most similar to those by Gim and Won [1]. They correctly pointed out that earlier algorithms often did not work with new seek-first track layouts, and described algorithms that did. However, their algorithms were optimized somewhat more for speed at the expense of accuracy, and were not quite robust enough to handle the large variety of strange behaviours in some drives. I saw more unusual behaviour due to the wider collection of drives tested: 17 drives from 1989 to 2015 vs. 4 drives from around 2006–2007.
What’s measurable?

Access time measurement: Starting at A and then reading B involves a seek, settling onto B's track, then waiting for sector B to rotate under the disk head. The access time from A to B is determined by the angle between the two sectors, plus zero or more full rotations. Measuring access time gives an accurate measurement of the angle between two sectors, but not a direct measurement of the seek time.
Without internal access to the drive, measurements can only be made by sending commands and timing their response time. The basic tool I use is measuring the delay between two sector reads, with the disk cache disabled so the timings reflect the physical properties of the disk and not the cache. I did not use writes, because writes destroy data on the drive, and there is less certainty about the exact time at which writes actually reach the disk media.
The time between reading two sectors depends on the distance between the sectors. After completing the first sector read (A), the time until completing the second read is the time it takes to move the disk head to the second sector's track, wait until the second sector arrives under the disk head, then read the sector (B). An important observation is that this access time is the time to rotate the disk by the angle between sectors A and B, plus zero or more full revolutions. This means that if I know the disk's revolution time (which is easily measured), I can get a high-accuracy measurement of the angle between any two sectors (to a few microseconds, or about 0.1°). This also means that it is difficult to isolate the seek time component of the access time, because seek time affects access time only by determining how many extra revolutions are needed.
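The modulo trick can be sketched in a few lines. Here the delay and revolution time are given as inputs; on real hardware they would come from timing uncached reads:

```python
# Angle between two sectors, recovered from the measured delay between reads.
# Seek time only adds whole revolutions, so it disappears under the modulo.
def sector_angle_degrees(delay_s: float, revolution_s: float) -> float:
    return (delay_s % revolution_s) / revolution_s * 360.0

rev = 60.0 / 7200  # revolution time of a 7200 RPM drive, ~8.33 ms
# A delay of two full revolutions plus a quarter revolution maps to 90°.
print(sector_angle_degrees(2 * rev + rev / 4, rev))  # ~90.0
```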
This fundamental tool of measuring the angular position of sectors can be used to build many other measurements.
- RPM: Repeatedly read the same sector (the angle is exactly 360°). This takes exactly one revolution of the disk per read.
- Angular position: By reading two sectors, the time between the two reads (modulo the revolution time) gives the angular position of the second sector relative to the first sector. This allows mapping the angular position of every sector on the disk, seeing where tracks start and end, and observing the amount of skew between tracks (the difference in starting angular position of adjacent tracks).
- Track size: Since track skew occurs at track boundaries, we can use skew to find track boundaries, the size of each track, and the location of zones (groups of tracks with the same size). Unexpected changes in track size can also indicate tracks that have defective sectors.
- Seek time: The delay (access time) between reading one sector and another sector is the sum of the seek (including settling) time and the delay waiting until the desired sector appears under the head (rotational latency). To measure seek time, search nearby sectors to find the sector that gives a local minimum of access time. This sector is the one that appears under the head immediately after the head arrives at the target track (if the sector arrived any sooner, the head would miss the sector and would wait another full revolution). For this target sector, the rotational latency is near zero, making the access time approximately equal to the seek time.
- Seek profile: A plot of the seek time from a reference point (usually sector 0) to every track on the disk is a seek profile or seek curve. If we assume that a larger head movement takes longer, a seek profile can reveal the arrangement of tracks on the surface, and also the number of recording surfaces.
- Number of surfaces: Finally! Inferring the number of surfaces essentially requires knowing the track layout.
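The seek-time idea above can be illustrated with a toy model. Everything here is synthetic (a hypothetical 7200 RPM drive with a uniform 100-sector track and a fixed 2 ms seek); the point is only that scanning target sectors for the minimum access time recovers the seek time:

```python
# Among nearby target sectors, the one whose access time is a local minimum
# arrives just after the head settles, so its access time approximates the
# seek time. The drive model below is entirely synthetic.
REV = 60.0 / 7200          # revolution time of a hypothetical 7200 RPM drive
SECTORS_PER_TRACK = 100
SEEK = 0.002               # hypothetical seek + settle time to the target track

def access_time(target_sector: int) -> float:
    angle = (target_sector % SECTORS_PER_TRACK) / SECTORS_PER_TRACK  # revolutions
    wait = (angle * REV - SEEK) % REV        # rotational latency after the seek
    return SEEK + wait

best = min(range(SECTORS_PER_TRACK), key=access_time)
print(round(access_time(best) * 1000, 3), "ms")  # close to the 2 ms seek time
```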
Results Summary
The table below summarizes the measurements of the 17 hard drive models I tested. The later sections go into more detail on each metric, how it is measured, and general observations about each metric. Detailed results for each drive are presented on page 2.
The Toshiba DT01ACA300 and P300 are likely identical drives with different branding, so measurements across these four give some sense of how much variability there is between drives of the same model.
Model | Capacity | RPM | Sector size (bytes) | Sectors/track | Tracks | Surfaces | Skew (°, rev.) | Max. seek (ms) | Avg. track pitch (nm) | Track layout
---|---|---|---|---|---|---|---|---|---|---
Seagate ST-157A 3.5″, stepper | 44.7 MB | 3602 | 512 | 26 | 3,360 | 6 | 0 | 63 | 40,000 | F or A
Maxtor 7405AV 3.5″ | 405 MB | 3548 | 512 | 123 – 66 | 7,998 | 3 | 93° – 100° | 30.5 | 10,000 | AF or AA
Seagate Medalist ST51270A 3.5″ | 1.28 GB | 5371 | 512 | 145 – 76 | 21,599 | 4 | 129° | 22.2 | 5200 | F
Seagate Medalist Pro ST39140A 3.5″ | 9.1 GB | 7209 | 512 | 297 – 169 | 72,048 | 8 | 85° | 18.3 | 3100 | F or A
Samsung SV0432D 3.5″ | 4.3 GB | 5399 | 512 | 403 – 231 | 24,460 | 2 | 60° (7/42 rev) | 19.4 | 2300 | F
Seagate ST1 ST650211CF 1″ | 5 GB | 3606.5 | 512 | 335 – 183 | 37,782 | 2 | 109.3° (17/56 rev), 116.7° (18/56 rev) | 26.3 | 280¹ | AF
Toshiba MK8034GSX 2.5″ | 80 GB | 5399.8 | 512 | 891 – 429 | 219,635 | 3 | 51.8° (19/132 rev) | 19.1 | 230 | AF
Hitachi Deskstar 7K80 3.5″ | 82 GB | 7201 | 512 | 1170 – 573 | 176,275 | 2 | 67.3° (3/16 − 0.0004 rev) | 25.7 | 310 | A
Western Digital Caviar SE16 WD2500KS 3.5″ | 250 GB | 7204 | 512 | 1116 – 630 | 520,449 | 6 | 40° (2/18 rev) | 17.4 | 320 | AF
Seagate Barracuda 7200.9 ST3160811AS 3.5″ | 160 GB | 7203 | 512 | 1452 – 638 | 281,786 | 2 | 55.7° | 24.1 | 200 | FA
Seagate Barracuda 7200.11 ST3320613AS 3.5″ | 320 GB | 7204 | 512 | 2464 – 1200 | 329,853 | 2 | 61.3° (1/6 + 0.004 rev) | 19.9 | 170 | AA
Western Digital S25 WD3000BKHG 2.5″ | 300 GB | 9999.6 | 512 | 1926 – 1117 | 384,085 | 3 | 24.4° (1/15 + 0.0009 rev) | 8.1 | 100 | F*
Seagate Cheetah 15K.7 ST3450857SS 3.5″ | 450 GB | 15047.7 | 512 | 1800 – 1028 | 595,848 | 6 | 23.8° (1/15 − 0.0006 rev) | 7.3 | 150² | AA
Samsung SpinPoint F3 HD103SJ 3.5″ | 1 TB | 7247.1 | 512 | 2937 – 1546 | 854,135 | 4 | 72° (1/5 rev) | 17.3 | 130³ | FF
Hitachi 7K1000.C 3.5″ | 1 TB | 7199.8 | 512 | 2673 – 1350 | 914,873 | 4 | 53.3° (4/27 rev) | 15.2 | 120 | AF
Toshiba DT01ACA300/P300 3.5″ | 3 TB | 7219.1 / 7200.3 / 7200.0 / 7200.0 | 4096 | 473 – 211 / 464 – 204 / 485 – 207 / 464 – 216 | 2,075,497 / ~2,098,859 / 2,070,465 / 2,076,586 | 6 | 37.2° (3/29 rev) | 21.0 / 21.1 / 20.9 / 20.9 | 80 | FF
Toshiba X300 HDWE150 3.5″ | 5 TB | 7199.6 | 4096 | 500 – 229 | 3,279,583 | 10 | 22.5° (1/16 − 0.000008 rev) | 14.6 | 85 | AF
¹ The Seagate ST1 series manual says 105,000 TPI (tracks per inch) max, which is 242 nm per track.
² The Seagate Cheetah 15K.7 manual says 165,000 TPI, which is 154 nm per track.
³ The HD103SJ manual says 245k TPI, which is 104 nm per track. The HD103SJ was rebranded as ST1000DM005 after Seagate acquired Samsung's hard drive business in 2011.
Rotational Speed (RPM)
The disk rotational speed is measured by repeatedly reading the same sector (sector 0). If the measurement is of a spinning hard disk, sector 0 will be read exactly once per revolution. This test can also be used to verify that disk caching is disabled. If caching is enabled, the reads will hit in the cache and occur much more quickly.
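Extracting the RPM from such a test is simple; the sketch below uses synthetic timestamps in place of real timed reads of sector 0:

```python
import statistics

# Each uncached read of the same sector completes one revolution apart, so the
# median inter-read delay is the revolution period. Timestamps here are
# synthetic; on real hardware they would come from timing reads of sector 0.
def rpm_from_timestamps(timestamps_s):
    deltas = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    return 60.0 / statistics.median(deltas)

period = 60.0 / 7200                      # pretend the drive spins at 7200 RPM
reads = [i * period for i in range(50)]   # ideal: one read per revolution
print(round(rpm_from_timestamps(reads)))  # 7200
```

The median (rather than the mean) discards the occasional delayed read that misses a revolution.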
Implementation Details
The cache on the Maxtor 7405AV cannot be completely disabled, so repeated reads of the same sector are serviced by the cache. Fortunately, reading a different sector is not cached on this disk, even when that sector immediately follows the first read. To compensate, the algorithm actually alternates between reading sector 0 and sector 1.
Sector Angular Position
The angle between any two sectors can be measured by accessing the two sectors and measuring the time between the responses of the two requests. This method usually has an accuracy of a few microseconds (around 0.1°), though this depends on the drive. Any seek time (or random variation in seek time) only affects whether the second sector can be read in the same revolution, or whether extra full revolutions are needed, so we can completely remove the impact of seek time by taking the remainder after dividing the time interval by the revolution time. A map of the angular position of every sector on (a region of) the disk can be made by using a fixed sector (sector 0) as a reference point. However, since it takes tens to hundreds of measurements per sector, it is only practical to do this for small regions of the disk. Angular position plots are a powerful tool for zooming in to examine other features (such as defect management).
The plots above are polar plots of the angular position (time increases in the counterclockwise direction) and track number of each sector, with the color of each point showing the sector's position within the track (dark is the start of a track, while yellow is near the end of a track). The color scheme was chosen to distinguish the beginning and end of each track, which makes the track skew clearly visible. The method for finding the track number of each sector is discussed in the section on finding track boundaries. These plots use track boundary information and a hole in the center to make the plot look nicer, but neither is strictly necessary. If track boundary information is unavailable, using the sector number as the radial coordinate results in a similar-looking graph, but with one big spiral instead of concentric circles and a gap (the track skew) at the end of each track. The same data can be plotted on cartesian axes, which can sometimes be easier to read than the polar plots here.
The first plot shows tracks 824 to 844 on the Toshiba X300. It uses a 22.5° track skew: the beginning of each track is about 1/16th of a revolution later (counterclockwise) than the beginning of the previous track. This allows some time (1/16th of a rotation) to move the head and settle onto the next track after finishing reading one track before beginning to read the next. For sequential accesses, the disk would read one track of data every (1 + 1/16) revolutions. We look at track skew in more detail in a later section.
This plot also shows a head/surface change at track 834, which is visible here as an unusually large skew. The plot shows tracks 833 and 834 as adjacent in logical track numbers, but they are actually on different surfaces and likely require a bigger seek compared to moving to an adjacent track on the same surface, which motivates the larger skew at serpentine boundaries.
The second plot is from a Seagate ST1, a 1″ hard drive in a CompactFlash form factor. Compared to the desktop and server hard drives, it has a slow 3600 RPM yet still uses a large 109.3° track skew. The much larger skew gives a visually different pattern in the angular position plot. One interesting observation is that the ST1 seems to have exactly one sector of empty space at the end of every track. I don't have any guesses about what this space is for.
The third plot is from a Seagate ST-157A, a 44.7 MB hard drive from 1989 that uses a stepper motor actuator. It has zero track skew (or perhaps more accurately, 360° of skew)! Not using any track skew likely reduces the complexity of the drive controller (which was a bigger concern in 1989). However, the adjacent-track seek time is fairly large (8 ms specified, measured to be 10.8 ms including < 2.5 ms controller overhead), so the track skew would likely need to be more than 75% of a revolution (270°) anyway, and the potential performance improvement from using a non-zero skew is fairly small (10–15% sequential throughput). Also interesting is that the gap between sectors 25 and 0 appears slightly larger than between other adjacent sectors.
Implementation Details
--angpos reference,start,step,end,error (e.g., --angpos 0,0,1,10000,10)
Reference, start, step, and end are sector numbers. Error is the maximum standard error of the mean (standard deviation divided by the square root of the number of samples), in microseconds. Using the specified reference sector, report the angular position of every step-th sector from start (inclusive) to end (exclusive), sampling enough that the standard error of the mean is less than error.
The algorithm averages multiple samples to improve the accuracy of the measurement, taking samples until the standard error falls below a user-specified parameter. Accuracy and runtime can be traded off by changing the desired precision. There is also a speed optimization. The basic algorithm would take at most one measurement per revolution, depending on how far apart the two sectors are, by visiting the source and target sectors and returning to the source sector within one revolution. This implementation instead spends two revolutions per iteration: it visits the source sector, then tries to take as many samples as possible (of the target and later sectors), then returns to the source sector two revolutions later to begin the next iteration. This can often take 10-15 samples over two revolutions, a 5-7× speedup over the basic algorithm. Allowing more revolutions per iteration would further improve efficiency (with diminishing returns), but the accuracy can drop because any error in the revolution time estimate is multiplied by the number of revolutions.
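The stopping rule can be sketched as follows. This is a simplified version without the two-revolution batching; the function name and parameters are illustrative, not the tool's actual code:

```python
import math

def mean_until_precise(sample, max_stderr_us, min_samples=4, max_samples=100000):
    """Average repeated measurements until the standard error of the mean
    (sample standard deviation / sqrt(n)) drops below max_stderr_us.
    sample() is a callable returning one timing measurement in microseconds."""
    xs = []
    while len(xs) < max_samples:
        xs.append(sample())
        n = len(xs)
        if n >= min_samples:
            mean = sum(xs) / n
            var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
            if math.sqrt(var / n) < max_stderr_us:            # stderr of mean
                break
    return sum(xs) / len(xs)
```

Tightening max_stderr_us improves accuracy roughly quadratically in runtime, since the standard error shrinks with the square root of the sample count.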
Access Time
Access time is the time to move the head and read a sector, including both seek time and rotational latency. This algorithm measures the access time between a pair of sectors. It is very similar to the algorithm for measuring angular position, except that the measured time is not divided by the revolution time. I usually fix sector 0 as the reference sector, then measure the access time from the reference sector to other sectors located around the disk. This algorithm isn't particularly novel, but it generates interesting plots when the data is plotted in polar coordinates. These plots are very similar to the angular position plots (the data point locations are actually the same), but with a different color scheme. They visualize what fraction of the disk area is reachable within a given amount of time.
Above are two access time plots from the same Samsung SpinPoint F3 1TB hard drive, showing the access time from a reference sector to points spread over the entire drive (tracks 0 to 854133). These plots measured one data point every few thousand sectors of the 1.95-billion-sector disk. In the first plot, the reference (starting) sector is sector 0 at the beginning of the drive, which is located at angle = 0 (far right) of the plot. The second plot uses sector 976762583 (the middle of the drive by sector count), with the reference sector likewise plotted at angle 0.
The color swirl visually shows that the access time is mostly determined by the angular position of the target sector, while seek times determine the number of extra revolutions required to reach the target sector. Starting from the reference sector 0 (right (angle = 0), outermost track (track 0)), the plot shows that the disk head can reach only a tiny group of outer tracks within half a rotation, about 19% of tracks after one full revolution, and the innermost track at just over two revolutions. In the worst case, some sectors on the innermost track need to wait one more revolution of rotational latency after the head arrives before being accessed, so the worst-case access time is ~3.09 revolutions.
In the second plot, where the seek starts from the middle of the disk (by sector number, not by diameter), we see a similar pattern. Starting from the middle of the disk, the furthest seek is roughly half the distance compared to the first plot, so the worst-case access time is lower. The access time to the innermost track is slightly longer (~2.47 revolutions) than to the outermost track (~2.42 revolutions) because the middle of the disk by sector count is closer to the outside by physical distance (track number). The outer tracks store more data due to zone bit recording.
Implementation Details
--access reference,start,step,end,error (e.g., --access 0,0,1,-1,30)
Reference, start, step, and end are sector numbers. Error is the maximum standard error in microseconds. Using the specified reference sector, report the access time to every step-th sector from start (inclusive) to end (exclusive), sampling enough that the standard error of the mean is less than error.
The current implementation sends the second read request only after the first one completes, so the measurement includes OS and disk controller latency. If the intent is to measure only the mechanical performance of the drive, to get a more accurate measurement of short-distance fast accesses it may be worth trying to send both requests at the same time, so both reads can be performed without first sending the read data back to the benchmark program. This would increase the benchmark complexity (requiring asynchronous I/O and tracking the response time of each request independently), and may need to account for reordering done by the OS scheduler or disk controller (SATA NCQ or SCSI TCQ).
Because the absolute time (not the time modulo the revolution time) is measured, the algorithm cannot take multiple samples per revolution as done for the angular position measurement, and is limited to at most one sample per revolution. As a result, measuring access time takes several times longer than measuring angular position.
Short stroking
Looking at the two access-time plots above illustrates why short stroking the disk gives only a modest reduction in access time. Short stroking is the practice of using only a small portion of a hard drive, keeping data confined to a small (usually outer) region of the drive to reduce the worst-case seek distance and avoid the inner tracks that have lower transfer rates (inner tracks have fewer sectors per revolution). A significant portion of the seek time is spent in the beginning and ending phases of the seek even for short seeks, as the head needs time to accelerate, decelerate, and settle. Halving the seek distance only reduced the worst-case access time by 22% (from 3.09 to 2.42 revolutions, or 25.6 ms to 20.0 ms). Short stroking doesn't actually change seek times, but removes some longer-distance seeks from being included in the average.
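As a sanity check on these numbers, access times in revolutions can be converted to milliseconds at the drive's nominal 7200 RPM (the measured values quoted above use the drive's actual, slightly different, revolution time):

```python
def revolutions_to_ms(revs, rpm=7200):
    """Convert an access time measured in disk revolutions to milliseconds.
    One revolution takes 60000/rpm milliseconds."""
    return revs * 60000.0 / rpm

full_stroke = revolutions_to_ms(3.09)          # ~25.8 ms (measured: 25.6 ms)
half_stroke = revolutions_to_ms(2.42)          # ~20.2 ms (measured: 20.0 ms)
reduction = 1.0 - half_stroke / full_stroke    # ~22%
```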
For completeness, I measured the average access time for random accesses when those accesses are restricted to some fraction of the disk. The graph below plots the number of reads per second (IOPS), which is the reciprocal of the access time when there are no parallel accesses (no queueing). IOPS increases when a smaller fraction of the disk is used.

Short stroking a hard drive gives modest performance improvements by sacrificing a large amount of storage space.
The plot shows two scenarios: one where the usable region is at the beginning of the disk (the usual case when short stroking, because the beginning of the disk is faster), and one where the usable region is at the end of the disk (the slowest region). Because seek time is highly non-linear with distance (short seeks still have a high cost), the improvement in random IOPS is small unless only a very tiny fraction of the disk is used. For example, when the beginning of the disk is used, IOPS improves by about 20% when half the disk is used, 55% when 10% of the disk is used, and doubles when only 0.07% of the disk is used.
Now that flash-based solid-state drives are common, short stroking a hard drive for performance makes little sense. As of July 2019, a low-end SSD is about 3 to 4 times more expensive per unit capacity than a hard drive, but offers 500 times higher random IOPS and double the sequential throughput.
Implementation Details
--random-access start,end,iterations (e.g., --random-access 0,-1,5000)
Start and end are sector numbers. This measures the average access time for doing iterations random accesses within the region between start (inclusive) and end (exclusive).
The algorithm reads random sectors. It measures access time, not seek time. Access time includes seek time, rotational latency (on average, half the rotation period), and controller/software overhead (typically a few hundred microseconds).
Seek Time
The seek time is the time it takes for the disk head to move to the destination track, not including the rotational latency of waiting for the target sector to rotate under the disk head. I included head settling time in the seek time because settling is really just the final low-speed fine-tuning portion of the seek, and there is no way to measure settling time separately from the seek. Seek time gives information about the radial location of a given track on the disk. Seeks over a longer physical distance usually take longer than seeks over a shorter physical distance, so seek time can be used to determine which tracks are physically closer to a reference location. I usually use sector 0 at the outer edge as the reference sector.
While it is easy to measure access time, measuring seek time requires finding the sector on the target track that has the lowest access time (near-zero rotational latency). Nisha et al. [2] describe an algorithm that measures the access times between one sector and many sectors near the seek target, then chooses the minimum. My algorithm improves on its speed by using a logarithmic search for the sector with the minimum access time, starting with large steps that decrease exponentially (the runtime is logarithmic in the number of sectors per track rather than linear). This becomes particularly important when disks have many sectors per track: I tested disks with up to 2937 sectors per track, while Nisha et al. only tested up to 356.
The seek time profile is usually plotted with seek time on the vertical axis and seek distance (number of tracks) on the horizontal axis. The diagram to the right shows an example from a Toshiba P300 3 TB drive. The seek time increases roughly linearly with distance for long seeks (when the seek time is dominated by the head moving at a high constant speed) and is non-linear for short seeks (dominated by acceleration, deceleration, and settling time).
Another way to think about the seek time curve is that it is the boundary separating color regions in the access time plot. The seek time is the access time of the set of sectors that have near-zero rotational latency: those sectors sitting on the boundary between "arriving immediately after the head arrives" and "arriving slightly after the head arrives and needing an extra revolution". To demonstrate this, the two polar plots to the right show the seek profile and the access time data points for the same drive on the same polar axes for comparison. The seek profile is usually more useful when plotted in cartesian coordinates.

Western Digital S25: The seek profile is unusual, indicating an odd mapping between track numbers and physical locations. This mapping seems quite irregular.
The two plots on the right are examples of slightly more complicated seek time profiles. The seek time plots for the Samsung F3 (HD103SJ) and WD S25 (WD3000BKHG) are not smooth even when zoomed out. These patterns indicate that tracks are not strictly placed from the outer diameter to the inner diameter; instead, some later logical tracks are physically closer to the outside (lower seek time) than some earlier tracks (higher seek time). These patterns give information about the track layout, which we'll examine in the track layout section.
Implementation Details

The access time of a sector depends on its angular position. The algorithm for finding seek time searches nearby earlier sectors to find a local-minimum sector with near-zero rotational latency.
--seek-track reference,start,step,end (e.g., --seek-track 0,0,1,-1)
Reference, start, step, and end are sector numbers.
The seek time is the access time of the sector on the target track that has the lowest access time (near-zero rotational latency). This algorithm starts by measuring the access time (including the rotational latency) to an arbitrary target sector. It then measures the access time to an earlier sector, chosen to be likely on the same track but with less rotational latency. This process is repeated until the access time suddenly increases, which occurs when the target sector arrives too soon under the head and we need to wait one more full revolution. The step size is then halved and the search repeated until the sector with minimum access time is found. There is one common case where this search fails: if the optimal sector actually falls within a track skew (labeled "Not seek time" at sector 102345 in the diagram). When this happens, the search concludes that the current track's first sector has the lowest access time, but that sector actually has non-zero rotational latency, because there is a physical gap between the current track's first sector and the previous track's last sector (the track skew). This implementation tries to mitigate the problem by searching two adjacent tracks before reporting the seek time.
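The shrinking-step search can be sketched against a simulated drive as follows. Here access_time is a hypothetical measurement callable returning times in revolutions, and the sketch assumes the probed sectors all stay on the target's track, i.e., it ignores the skew-gap failure case described above:

```python
def find_min_access(access_time, target, sectors_per_track):
    """Find the sector near `target` with minimal access time (near-zero
    rotational latency), stepping backward with exponentially shrinking steps.
    The number of probes is logarithmic in sectors_per_track."""
    best = target
    best_t = access_time(target)
    step = sectors_per_track // 2
    while step >= 1:
        cand = best - step
        t = access_time(cand)
        if t < best_t:
            best, best_t = cand, t  # less rotational latency: keep moving back
        # else: stepped past the minimum (that sector needs an extra revolution)
        step //= 2
    return best, best_t
```

In the test below, the simulated track has 1000 sectors and its zero-latency sector is 637; the search finds it with ~10 probes instead of up to 1000.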
Like the access time measurement, the seek time measurement sends read requests serially and thus includes the OS and disk controller overhead.
Both the SCSI and ATA command sets include a Seek command that could give a more direct measurement of seek time. I have not tried to use it. The command is "obsolete" in SCSI, and I don't know its status in ATA, nor how reliably these commands are implemented across hard drives. (Example: the Cheetah 15K.7 manual claims that both the Seek and Seek Extended commands are implemented.)
Acoustic Management
Automatic Acoustic Management (AAM) is a feature that lets the user reduce audible noise from the hard drive in exchange for lower performance. Since most of the noise comes from moving the disk head during seeks, AAM mainly slows down seeks (increasing seek time) to reduce noise. This feature appeared in ATA-6 (2002) [6], but doesn't seem to be implemented in many (newer?) drives.
Using the ability to measure seek time, we can see how AAM quiet mode affects seeks. The diagrams show a seek profile in both fast (default, in blue) and quiet (in orange) modes. For the WD SE16, quiet mode increased the maximum seek time by 55%, from 17.4 ms to 27.1 ms. For the Samsung F3, the increase is 21% (17.3 ms to 21.0 ms). If we zoom in to the beginning of the disk, we see that the seek times of short seeks (below 2.2 ms for the WD S25, 2.0 ms for the Samsung F3) are unaffected. This makes sense: short seeks are quiet even at full power, and adjacent-track seeks can't be allowed to get slower, because if an adjacent-track seek takes longer than the track skew, sequential throughput is halved.
Track Boundaries
Finding track boundaries means finding the sector number of the first sector of every track. Designing an algorithm to find them reliably is surprisingly difficult. A track is a group of consecutive sectors spanning about one revolution of the disk, separated from neighbouring tracks by the track skew. The algorithm identifies track boundaries by looking for the track skew (an unusually large change in angular position between two adjacent/nearby sectors). I'll leave discussion of the many ways this algorithm can fail to the Implementation Details section below.

Toshiba MK8034GSX disk throughput benchmark. Throughput is proportional to track size (if track skew is constant).
The number of sectors per track is proportional to the sequential throughput of the disk. The drive spins at a constant angular velocity, so sequential throughput varies with how much data is read per revolution. During sequential accesses, the drive reads one track of sectors every (1 + skew) revolutions, so the sequential throughput can be calculated exactly from track size, RPM, and track skew. Earlier work has even used this relation in reverse to compute approximate track sizes by measuring sequential throughput [1, 2]. The two figures above compare track sizes with a throughput benchmark for the same disk (Toshiba MK8034GSX), showing that they look very similar.
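The relation can be computed directly. The numbers below are illustrative only: a 2937-sector outer track of 512-byte sectors (as on the Samsung F3) at 7200 RPM, with an assumed 1/16-revolution track skew:

```python
def sequential_throughput(sectors_per_track, sector_bytes, rpm, skew_fraction):
    """Sequential throughput in bytes/second: one track of data is read
    every (1 + skew) revolutions at a fixed rotation rate."""
    rev_time = 60.0 / rpm  # seconds per revolution
    return sectors_per_track * sector_bytes / (rev_time * (1 + skew_fraction))

mb_per_s = sequential_throughput(2937, 512, 7200, 1 / 16) / 1e6  # ~170 MB/s
```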

Toshiba P300 3TB. The region near track 140,000 has many defective sectors, causing the track size to drop.
All recent hard drives use zone bit recording, so earlier tracks have more sectors than later tracks. This sequence is usually not strictly monotonically decreasing, because the number of sectors per track is often different on each recording surface, causing the track size to vary according to how tracks are arranged across the surfaces. For the Toshiba P300 and X300, the plot appears shaded because the track size switches very frequently between several values according to which surface a track is located on. The Samsung HD103SJ switches between surfaces much less frequently.
Although track size decreases monotonically for both the MK8034GSX and WD S25, they are actually quite different. The MK8034GSX uses a surface serpentine layout (seek-first, type AF, with a head switch every ~115 tracks), but the track size is the same on every surface, so the track size curve decreases monotonically. The WD S25, however, uses different track sizes on each surface, but orders tracks by decreasing size, not by physical location. The result is a monotonically decreasing track size curve, but highly irregular serpentine sizes causing an irregular seek profile. We will discuss this further in the section on measuring track layouts.
In the above plots, the x-axis is in units of track number (not sector number). If the track density were constant, track number would be linearly related to the physical radius of the track. Also, if the linear bit density were constant across the disk, the track size would be proportional to the length (and thus radius) of the track. If both assumptions held, I would expect the track size vs. track number plots to be straight lines. But almost all of the drives show a slight convex shape. Why? My guess is that this is related to the angle of the disk head relative to the track (confusingly called the "skew angle"). At the (usually) middle tracks of the disk, the disk head is designed to be aligned with the track, but at the inner (ID) and outer (OD) diameter positions the head is rotated relative to the track (because the head is mounted on a rotating arm), which lowers the achievable bit density at the ID and OD. Cordle et al. [7] report a decrease of 8-10% in areal density capability (ADC) at both ID and OD compared to the middle for a PMR (perpendicular magnetic recording) disk, which seems quite close to the shape of the track size curves seen here. (PMR has been used in most hard disk drives from around 2006 until today, with HAMR (heat-assisted) coming "soon".)
Implementation Details
--track-bounds 0,-1
--track-bounds-fast 0,-1
In the simplest case, a track consists of a group of consecutive sectors, separated from the next track by the track skew. Thus, we can detect track boundaries by finding the location of the skew (an unusually large change in angular position between two nearby sectors). We can also assume that track sizes are generally fairly similar (adjacent tracks differ in size by less than a few tens of percent). Unfortunately, none of these assumptions holds consistently across all drives and all tracks. I needed to manually fix some of the incorrectly-detected track boundaries.
Although most drives have non-zero track skew, some old drives (ST-157A) have no (or 360°) skew. While an algorithm could attempt to detect 360° of skew and treat it as a track boundary, it would be extremely difficult to distinguish this from two adjacent sectors on the same track. My current implementation doesn't work on the ST-157A, but fortunately it is so small that it is feasible to manually determine the track size (26 sectors per track, without zone bit recording). Even for drives with non-zero track skew, some tracks have different skews. Most drives have unusual (sometimes near-zero) skew at serpentine or zone boundaries, and the skew can be fairly random when a track boundary coincides with track slipping (defect management) or falls in the middle of a block of defective sectors when sector-slipping defect management is used.
I implemented two algorithms. The first is an O(lg tracksize) algorithm to find a track boundary. A track boundary contains a track skew that increases the angular distance between two (logical) sectors if the two sectors straddle a track boundary. The algorithm can therefore repeatedly split the region in two and test which half contains the track boundary (binary search) until the exact location is found. There are several speed optimizations on top of this basic algorithm. The algorithm also predicts the location of the next track boundary (track size tends to be constant within a serpentine and zone), and if the prediction is correct (the common case), the next track boundary can be confirmed without a binary search. Also, for large regions, it splits the region into more than two sub-regions, because we can test multiple regions in the same revolution.
The second, "fast", algorithm is an extension of the first, and is largely based on the "MIMD" (multiplicative increase, multiplicative decrease) algorithm proposed by Gim and Won [1], with modifications to improve robustness. Because track sizes tend to stay constant over relatively large regions, it is worthwhile to not just predict the location of the next track boundary, but to predict that several upcoming tracks are all the same size. If the prediction is correct, we double the prediction for the next iteration; otherwise we halve the number of skipped tracks and try again. The algorithm is essentially binary-searching for zone/serpentine boundaries rather than track boundaries, so it doesn't need to visit every track.
In my implementation, "fast" mode is restricted to predicting exact multiples of the previous track size (Gim and Won allow an error of ±5 sectors). If the prediction is not exactly correct even for one track ahead, it falls back to the O(lg tracksize) algorithm described above to handle the change in track size. To verify a prediction, I verify that track boundaries exist at both (n-1) and (n) tracks ahead (Gim and Won only check (n) tracks ahead). This was necessary because, when the track size changes, the change in track size multiplied by the number of tracks the algorithm chose to skip can be exactly a multiple of the previous track size, leading to an incorrect result. For example, if the current track size is 1000 sectors and the algorithm wants to check whether 64 tracks (64,000 sectors) ahead is still within the same zone, it would incorrectly conclude that the zone continues for 64 1000-sector tracks if in reality there were 80 800-sector tracks instead. The chance of this happening is low, but with millions of tracks per disk, I routinely saw a few incorrectly-predicted blocks for this reason. Even doing two checks per prediction doesn't guarantee correctness: for example, if there were two consecutive tracks that were exactly half the size of their neighbours (unlikely, but not impossible), both algorithms would incorrectly identify them as a single normal-sized track.
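A simplified sketch of this fast mode follows. is_boundary and find_next_boundary stand in for the real measurement and binary-search routines, and the fallback here is cruder than the tool's:

```python
def track_boundaries_fast(is_boundary, find_next_boundary, first, size0, end):
    """MIMD-style track boundary scan (after Gim and Won): predict that the
    next `skip` tracks all repeat the previous track size; double `skip` on
    success, halve it on failure, and at skip == 1 fall back to a full search
    (find_next_boundary) to handle a track-size change.
    is_boundary(s): is sector s exactly the first sector of a track?
    find_next_boundary(s): first sector of the track after the one at s."""
    bounds = [first]
    size, skip = size0, 1
    while bounds[-1] + size <= end:
        pred = bounds[-1] + size * skip
        # verify both (n-1) and (n) tracks ahead to catch aliased size changes
        if pred <= end and is_boundary(pred) and is_boundary(pred - size):
            base = bounds[-1]
            bounds.extend(base + size * k for k in range(1, skip + 1))
            skip *= 2
        elif skip > 1:
            skip //= 2
        else:
            nxt = find_next_boundary(bounds[-1])  # track size changed here
            size = nxt - bounds[-1]
            bounds.append(nxt)
    return bounds
```

The test below simulates two zones (twenty 1000-sector tracks followed by ten 800-sector tracks) and checks that every boundary is recovered without visiting every track.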
In the most extreme cases (specifically, the blob of defects in one of my P300 drives), I had to resort to plotting an angular position plot of every sector in the region and identifying track boundaries manually. Some tracks there had lost over 90% of their sectors, and the track boundary finding algorithms got hopelessly lost in those regions.
Here are two examples where the track-finding algorithm fails. The first illustrates the difficulty when there are holes of missing sectors due to defects. The animation below shows the result of running the track boundary finding algorithm ten times on a region of the disk with one large hole and a few small holes scattered around. The color scheme is chosen to highlight the first few sectors of each track in dark blue. Every run produces a different result, as the algorithm often mistakes a hole for track skew.
The second example doesn't involve defects. On the Samsung SV0432D, the last track of each zone may not be a complete track. In this example, track 8891 is in a zone with 392 sectors per track, but the final track of the zone (8892) is only 6 sectors long. The next zone has 386 sectors per track, exactly 6 fewer sectors than the previous zone. An algorithm that looks for track boundaries would find one at precisely the expected location (392 sectors past the end of track 8891), not realizing that it missed a track boundary in between. This case was found by manually inspecting all of the zone boundaries after I noticed very small tracks near some of them.
Track pitch and bit density
Now that we know the track boundaries (and thus, the number of tracks), it would be interesting to try to estimate the track pitch, usually measured in tracks per inch (TPI). Calculating the track pitch requires knowing the physical distance between the outer and inner tracks, which can't be measured by timing alone. Thus, I use photographs of the hard drives to estimate the physical size of the platters.
This table shows the approximate measurements. Using these, we can calculate the average track pitch (the track pitch can vary between the outer, middle, and inner tracks). For the drives I tested, track pitch ranges from 40 µm down to 80 nm, or 650 TPI to 310,000 TPI.
| Model | Capacity | OD (mm) | ID (mm) | Avg. tracks per surface | Sector size (bytes) | OD track size (sectors) | Avg. track pitch (nm/track) | Avg. track pitch (TPI) | OD linear density (nm/bit) | OD linear density (kbpi) |
|---|---|---|---|---|---|---|---|---|---|---|
| Seagate ST-157A 3.5″ | 44.7 MB | 87 | 43 | 560 | 512 | 26 | 40,000 | 650 | 2600 | 10 |
| Maxtor 7405AV | 405 MB | 91 | 35 | 2666 | 512 | 123 | 10,500 | 2400 | 570 | 48 |
| Seagate ST51270A | 1.28 GB | 91 | 35 | 5400 | 512 | 145 | 5200 | 4900 | 480 | 53 |
| Seagate ST39140A | 9.1 GB | 91 | 35 | 9006 | 512 | 297 | 3100 | 8.2k | 240 | 100 |
| Samsung SV0432 | 4.3 GB | 91 | 35 | 12230 | 512 | 403 | 2300 | 11k | 170 | 150 |
| Seagate ST1 ST650211CF 1″ | 5 GB | 23 | 12.5 | 18891 | 512 | 335 | 280 | 91k (1) | 53 | 480 (1) |
| Toshiba MK8034GSX 2.5″ | 80 GB | 62 | 28 | 73212 | 512 | 891 | 230 | 109k | 53 | 480 |
| Hitachi Deskstar 7K80 | 82 GB | 91 | 35 | 88138 | 512 | 1170 | 310 | 81k | 60 | 430 |
| Western Digital SE16 | 250 GB | 91 | 35 | 86742 | 512 | 1116 | 323 | 79k | 63 | 410 |
| Seagate 7200.9 | 160 GB | 91 | 35 | 140893 | 512 | 1452 | 200 | 128k | 48 | 530 |
| Seagate 7200.11 | 320 GB | 91 | 35 | 164927 | 512 | 2464 | 170 | 150k | 28 | 900 |
| Western Digital S25 WD3000BKHG 2.5″ | 300 GB | 62 | 36 | 128028 | 512 | 1926 | 100 | 250k | 25 | 1000 |
| Seagate Cheetah 15K.7 ST3450857SS 3.5″ | 450 GB | 66 | 37 | 99308 | 512 | 1800 | 150 | 174k (2) | 28 | 900 (2) |
| Samsung SpinPoint F3 HD103SJ 3.5″ | 1 TB | 91 | 35 | 213534 | 512 | 2937 | 130 | 194k (3) | 24 | 1100 |
| Hitachi 7K1000.C 3.5″ | 1 TB | 91 | 35 | 228718 | 512 | 2673 | 120 | 207k | 26 | 970 |
| Toshiba P300 HDWD130 3.5″ | 3 TB | 91 | 35 | 345078 | 4096 | 485 | 80 | 313k | 18 | 1400 |
| Toshiba X300 HDWE150 3.5″ | 5 TB | 91 | 35 | 327958 | 4096 | 500 | 85 | 298k | 17 | 1450 |
1 The Seagate ST1 series manual says 105,000 TPI (tracks per inch) max. It also says linear density is 651 kbpi max.
2 The Seagate Cheetah 15K.7 manual says 165,000 TPI. It also says linear density is 1361 kbpi max.
3 The HD103SJ manual says 245k TPI.
For the three drives where the manual lists a track pitch number, two of my estimates show a considerably lower track density than the published number, by up to 20%. I do not know the reason for the difference, but possible causes include the manual publishing the maximum density of the densest region of the disk while I compute the average, or my over-estimating the physical area of the platter actually used for data (for example, if the platter were capable of more storage than what was productized).
I can also use the same data to estimate the linear bit density for a particular track. Since we know the number of sectors per track, we can calculate the bit density of the outer track using the estimate of the outer diameter. For example, if the OD of the Toshiba X300 is 91 mm and it has 500 sectors per track, this leads to about 57,300 user data bits per millimeter, or 1 user data bit per 17 nm. The physical bit density is probably another 15-20% greater to account for the servo sectors, inter-sector gaps, sync, data address mark, error correction code, and line coding of the user data.
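These estimates can be reproduced from the table's numbers (user-data density only, ignoring the formatting overhead noted above):

```python
import math

def od_linear_density_nm_per_bit(od_mm, sectors_per_track, sector_bytes):
    """Length of outer-track circumference per user data bit, in nm.
    Physical density is another ~15-20% higher due to formatting overhead."""
    circumference_nm = math.pi * od_mm * 1e6
    bits = sectors_per_track * sector_bytes * 8
    return circumference_nm / bits

def avg_track_pitch_nm(od_mm, id_mm, tracks_per_surface):
    """Average radial distance between adjacent tracks, in nm."""
    band_nm = (od_mm - id_mm) / 2 * 1e6  # radial width of the recorded band
    return band_nm / tracks_per_surface

# Toshiba X300, numbers from the table above:
nm_per_bit = od_linear_density_nm_per_bit(91, 500, 4096)  # ~17 nm per bit
pitch_nm = avg_track_pitch_nm(91, 35, 327958)             # ~85 nm per track
```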
For the two cases where a linear density specification was listed, the specification is far higher (by up to 50%). I cannot explain most of this gap. Some possible explanations include the outer track not having the highest linear density, or something to do with average density vs. maximum, which is known to be somewhat different [8].
Track Layout and Number of Recording Surfaces
After all this work, we finally have enough information to determine the track layout and number of recording surfaces (and platters) of the disks. The simple question of how many platters a hard drive contains turned out to be surprisingly hard to measure.
The graphs below plot both the track size and seek profile charts with a shared x-axis. The graphs show only the first few thousand tracks. The track size plot shows changes in the track size that indicate when head or zone changes occur, because a head switch will also change the track size if the other recording surface has a different track size. Changes in track skew (not plotted here) can also be used to locate head switches, which is especially useful for drives that use the same track size on all surfaces. The seek profile gives an approximation of the radial location of the track. Tracks located nearest the periphery have lower seek times. This is the same method used by Gim and Won to find track layouts [1].

Samsung HD103SJ: 4 recording surfaces. This disk switches surfaces infrequently (note the x-axis scale), and head switches coincide with zone boundaries.

Western Digital S25: 3 recording surfaces. Each surface is shown in a different color in the seek profile.
The graph for the Samsung HD103SJ is fairly easy to read. It shows four recording surfaces, each of which has a different track size. Starting at the beginning of the disk, tracks are filled on one surface from outside towards the inside (seek time increases from track 0 to track 9235), then the layout moves to the outer track of the next surface. This pattern repeats four times (indicating four surfaces) before moving further inwards. The next group of four zones begins at track 36009. Looking at the track size (upper) subplot, we can see that the track layout cycles through the four surfaces in the same order: the third surface has the highest density. The assumption here is that the "quality" can differ between recording surfaces, but that the quality of the same surface changes slowly. Zones of the same size are more likely to be on the same surface and adjacent than zones that differ in track size. Also, all serpentines have tracks that are ordered from outside to inside. This drive thus uses a seek-first, forward seek order, forward surface order layout ("seek first, FF").
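The seek-first, FF layout can be sketched as a mapping from logical track number to a (surface, physical track) pair. This is an idealized sketch with a hypothetical fixed serpentine length; on the real drive, serpentine sizes vary per zone and per surface, so the exact boundaries (e.g., track 36009) differ.

```python
def seek_first_ff(logical_track, num_surfaces, serpentine_len):
    """Idealized seek-first, forward-seek, forward-surface ("FF") layout:
    each group of num_surfaces serpentines fills the same radial band,
    visiting surfaces 0..n-1 in order, always outside-to-inside."""
    group, rest = divmod(logical_track, num_surfaces * serpentine_len)
    surface, index = divmod(rest, serpentine_len)
    physical_track = group * serpentine_len + index  # outside -> inside
    return surface, physical_track
```

For example, with 4 surfaces and a 9235-track serpentine, logical track 9235 is the outermost track of the second surface, and the second group of serpentines returns to surface 0 a little further inward.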
The Toshiba X300 5TB drive has 10 recording surfaces, each having a different track density. The track layout is different from the HD103SJ's. Tracks are filled from outside towards inside on the first surface, but filled in the reverse direction on the next surface (alternating seek order). The seek profile looks like groups of 5 upside-down U shapes. The 10 surfaces are cycled through in the same order (the fourth surface has the highest density). This drive uses a "seek first, AF" layout.
The Seagate 15K.7 (with 6 recording surfaces) appears to use a similar track layout to the X300, showing upside-down U shapes in the seek profile. However, the track size plot shows that it cycles through the surfaces in a different order. Each group of 6 serpentines is mirrored, which indicates surfaces 1 to 6 are used for the first group of serpentines, followed by surfaces 6 to 1 in reverse order (alternating surface order). This drive uses a "seek first, AA" layout.
The Western Digital S25 has three recording surfaces. For clarity, I colored each recording surface differently in the seek time profile. The seek profile looks irregular but the track size plot monotonically decreases. It appears the strategy used here is to always have a monotonically-decreasing track size by varying the order in which physical tracks are used. The few tracks that have slightly fewer sectors than their neighbouring tracks are caused by skipping over defective sectors. The surface with the highest track size is used first, with the first 16208 tracks all located on this surface. The other two surfaces are used starting at logical track 16208, and the layout returns to the first surface near track 39600. This drive uses a seek-first, forward seek direction layout, but no regular pattern in the surface order.
As mentioned in the track boundaries section, the Toshiba MK8034GSX also has monotonically-decreasing track sizes. As shown here, the track layout is actually the same as the Toshiba X300's, except with only 3 surfaces (seek-first, AF). The reason the track sizes decrease monotonically is that all surfaces use the same track sizes and zone boundaries. In contrast, the WD S25 uses surfaces with different track sizes, but the tracks are reordered to make logical track sizes monotonically decreasing.
Track alignment across surfaces
The seek profile gives an indirect measurement of the distance between a reference point (sector 0) and other tracks on the disk. On drives with multiple recording surfaces, the plot can be discontinuous due to a serpentine-type layout, where a non-adjacent track (hundreds of tracks away) is actually physically nearby (minimal head movement) on a different surface.
One interesting way of interpreting this information is to ask the question: how well are tracks on one surface aligned to tracks on different surfaces? In classical track layouts, it was assumed that head switches (electrical) are faster than an adjacent-track seek (mechanical), so consecutive tracks are laid out in cylinders, switching heads multiple times before moving the head to the adjacent cylinder. But with increasing track density, tracks may have become too difficult to align on different surfaces.
For example, the seek profile for a 3 TB Toshiba P300 shows that tracks can be poorly aligned between surfaces. Below is the seek profile for the first 1100 tracks of the disk, which spans the first 8 serpentines (on 6 surfaces). In the seek profile plots above, I used sector 0 as the reference point. Here, I created an animation where I varied the reference point between track 0 and track 140 (spanning every track in the first serpentine on the first surface). The reference location is indicated by the arrow. I also highlighted serpentine boundaries to make them easier to see. Because there are 6 surfaces, there are 6 minimum peaks, one per recording surface. The location of the peaks indicates which logical tracks are physically closest to the reference point. This allows seeing whether serpentines on different surfaces are aligned to each other, by checking whether the peak in each serpentine is at the same relative location in every serpentine.
On this disk, the serpentines are not well-aligned. Logical track 0 of the disk is physically closest to track 38 of the second serpentine, track 26 of the third, track 63 of the fourth, track 20 of the sixth, and is 20 tracks before the start of the fifth serpentine. This suggests that if we tried to organize these serpentines into cylinders, a head switch would likely also come with a seek of several tens of tracks due to misalignment between surfaces. With an average track pitch on this drive of about 80 nm, this misalignment is somewhere around 2 µm. This data does not show what causes the misalignment. It could be variation in the radius of the servo track initially written to the disks, or it could be the heads in the head stack assembly being not perfectly aligned, or perhaps there simply was no attempt to closely align them because a seek-first track layout is less sensitive to head switch time.
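The misalignment estimate follows directly from those offsets. A back-of-envelope calculation, using the per-serpentine offsets quoted above (in tracks) and the ~80 nm average track pitch from the text:

```python
# Offsets of logical track 0's physically-closest track in serpentines 2-6,
# in tracks (absolute values, from the animation described above)
offsets_tracks = [38, 26, 63, 20, 20]
track_pitch_nm = 80  # average pitch quoted for this drive

misalignment_um = [n * track_pitch_nm / 1000 for n in offsets_tracks]
print(misalignment_um)  # ≈ [3.04, 2.08, 5.04, 1.6, 1.6], i.e. roughly 2 µm scale
```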
This plot also shows that a head switch alone (if minimal seeking is required) is still faster than an adjacent-track seek. The minimum peak located at the reference track involves no seek, while the two points immediately before and after it are single-track seeks. These single-track seeks take longer than the access time to the physically-nearest track on a different surface.
In conclusion, while it is still true that a head switch is faster than an adjacent-track seek, track density has increased so much that it is impossible to align tracks into cylinders, so a head switch is necessarily accompanied by a seek much greater than one track away. This explains why all newer drives use some kind of seek-first layout.
Track Skew
Track skew is traditionally defined as the angle between the starting sector of one track and the starting sector of the previous track. I chose to measure the starting sector of each track relative to sector 0. Knowing the absolute start position of each track can show whether the skew is exactly a fraction of a revolution or differs slightly. This is interesting because I think it gives some information about how the servo information was originally written to the blank surface, though I don't know enough about servo writing to know its significance.

Toshiba X300, zoomed in to the first 10% of the disk. Track skew is slightly less than 1/16 revolution, drifting by about 28 revolutions over 3.28 million tracks (or about -0.003° per track).
Each of the plots above has two sub-plots. The top sub-plot shows the angular position of the start of each track on the disk, while the bottom shows the track skew of each track (the difference in start position between adjacent tracks). The two sub-plots are two views of the same data.
The first plot above (Toshiba 3TB DT01ACA300) shows a drive that uses a skew of exactly 3/29 revolutions (~37.2°). The plot forms a set of near-horizontal lines, drifting by about 2° over the whole disk. But why are there 58 horizontal lines instead of 29? Adjacent surfaces have track start positions that are shifted by an odd multiple of 1/58 revolutions (29/58 = 180°). The bottom sub-plot shows that most of the track skews are 37.2°, but that 180° is also common. The 180° skews occur at serpentine boundaries when the next track moves to a different recording surface. There are also a small number of skews with other values. I think many of these are due to track or sector slipping, where the track start positions still follow the sequence of the real physical tracks, but some tracks are unused and skipped.
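The count of horizontal lines can be checked with a little modular arithmetic. A sketch: a per-track skew of 3/29 rev visits 29 distinct angular start positions (since gcd(3, 29) = 1), and shifting alternate surfaces by half a revolution (an odd multiple of 1/58 rev, as described above) doubles the set to 58.

```python
from fractions import Fraction

skew = Fraction(3, 29)  # 3/29 revolution per track, ~37.2 degrees

# Start positions visited on one surface (mod 1 revolution)
surface0 = {(i * skew) % 1 for i in range(29)}
# The adjacent surface is shifted by 29/58 = 1/2 revolution
surface1 = {(pos + Fraction(1, 2)) % 1 for pos in surface0}

print(len(surface0))             # 29 distinct lines from one surface
print(len(surface0 | surface1))  # 58 distinct lines in total
```

Using `Fraction` keeps the positions exact; with floats, positions that should coincide might not compare equal.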
The next plot (Toshiba X300 5TB) shows no horizontal bands in the track skew plot because its skew is not exactly a small fraction of a revolution. The bottom sub-plot shows that 22.5° (1/16 rev) and 202.5° (9/16 rev) are common skews. The third plot zooms in further, showing the first 10% of the disk. Now it is clear that the skew is slightly less than 1/16 revolution. We can also see that the track start positions drift by about 45/16 revolutions in this plot, or 28 revolutions over the whole disk. I don't have a plausible explanation for what causes this slight drift, nor what advantages or disadvantages it might have. I suspect it depends on the method used to write the servo information [9], but I know very little about servo writing.
The fourth plot shows the Seagate ST1 5 GB microdrive. It is a 1-platter, 2-head drive, and uses a different skew on each side! One side uses 17/56 revolutions (109.3°) while the other uses 18/56 (115.7°).
Implementation Details
--access-list 0,30 < track_boundaries.txt
To measure track skew, we need to know the starting sector of each track. This uses the output of the track boundaries algorithm (previous section).
Although track skew is usually defined as the start of a track relative to the end of the previous track, this algorithm measures the absolute angular position (i.e., relative to sector 0) of the starting sector of every track.
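A minimal sketch of the conversion from timing to angle. This assumes we have already measured the time from passing sector 0 to reading the target sector (with any seek time subtracted) and the rotation period; the function name and interface are hypothetical, not the actual benchmark code.

```python
def angular_position_deg(access_time_s, rotation_period_s):
    """Convert the time offset of a sector (relative to sector 0 passing
    under the head) into an angular position on the platter, in degrees."""
    frac = (access_time_s % rotation_period_s) / rotation_period_s
    return 360.0 * frac

period = 1 / 120  # 7200 rpm -> one revolution every ~8.33 ms
print(angular_position_deg(period / 3, period))  # ≈ 120.0 degrees
```

Taking the time modulo the rotation period means the measurement still works when the access happens to take an extra full revolution.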
Defective Sectors
With hard drives having millions to billions of sectors, each only tens of nanometers in size, it is inevitable that there will be defects on the disk surface at manufacture time. Different drives take different approaches to managing these defects.
In some early drives, defects were "managed" by notifying the user. For example, the ST-157A manual says "A label is fixed to the drive listing the location of any media defects by cylinder, head and bytes from index". The stickers that would be needed for a modern drive with hundreds of thousands of defects would be too expensive. With bigger disks and more defects, the disk controller needs to silently skip over defective sectors (at a tiny impact to performance). Two common methods to skip over defects are sector slipping and track slipping. Sector slipping allows mapping consecutive logical sector numbers to non-consecutive physical sectors when skipping over defective physical sectors. When reading sequential logical sectors, defective sectors cause a small delay of a few sectors, but do not require extra seeks. Track slipping is similar, except that whole tracks are skipped.
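Sector slipping can be sketched as a simple logical-to-physical mapping. The function and defect list here are hypothetical; a real drive keeps this information in an internal defect map written at the factory.

```python
def logical_to_physical(lsn, defect_list):
    """Map a logical sector number to a physical one by 'slipping' past
    any defective physical sectors at or below the mapped position.
    defect_list is a sorted list of defective physical sector numbers."""
    psn = lsn
    for bad in defect_list:  # ascending order
        if bad <= psn:
            psn += 1         # slip past the defective sector
        else:
            break
    return psn
```

With physical sector 3 defective, logical sectors 0-2 map to physical 0-2, but logical 3 maps to physical 4, and everything after shifts by one; sequential reads just see a one-sector delay rather than a seek.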
How can the defect management method used by each hard drive be detected? Sector slipping causes the track size to be reduced by the number of defective sectors skipped. This can be fairly easily noticed when looking at the track size plot. In a defect-free drive, all tracks in a zone are the same size (a straight horizontal line), but tracks with holes tend to show up as a group of smaller tracks within a zone. The region of interest can then be examined more closely by plotting the physical angular position of every logical sector in the region. Defective sectors that are skipped appear as holes, and are especially visible if the hole spans multiple tracks at the same angular position. Because media defects are a physical (not logical) phenomenon, they cause holes in groups of sectors located physically near each other, ignoring logical effects such as track skew.
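A minimal sketch of spotting such holes in track-size data, assuming the input covers a single zone so that the most common (modal) track size is the defect-free size:

```python
from collections import Counter

def find_slipped_tracks(track_sizes):
    """Flag tracks smaller than the modal ("nominal") size in their zone,
    returning (track index, number of missing sectors) pairs.  A simple
    sketch of reading holes off the track size plot, not the actual tool."""
    nominal = Counter(track_sizes).most_common(1)[0][0]
    return [(i, nominal - s) for i, s in enumerate(track_sizes) if s < nominal]

sizes = [500] * 8
sizes[3] = 497   # 3 slipped sectors
sizes[4] = 490   # 10 slipped sectors
print(find_slipped_tracks(sizes))  # [(3, 3), (4, 10)]
```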
Sector slipping
The figures below show example sector-slipping holes for several drives. I have observed sector slipping holes in a majority of the drives. On most disks, holes are infrequent (10s to 100s per disk?) and small (a few sectors), but the Toshiba 3TB (P300 and DT01ACA300) drives seem to have more holes and much larger ones, with some holes spanning more than 90% of a track. These manufacturing defects do not seem to have affected reliability, as none of my four drives have developed any new defective sectors during use (between 13,000 and 30,000 power-on hours so far). The only real effect is reduced performance in the regions with holes, and that amounts to no more than hundreds of thousands of sectors out of 733 million.
Track slipping
Track slipping is harder to find and even more difficult to verify. If a drive uses only track slipping, then every track in each zone contains the expected number of sectors with no exceptions. However, track slipping can cause the size of a serpentine to be smaller than expected. The easiest way to find track slipping is the existence of an unusual track skew within a serpentine. Track skew between adjacent tracks within a serpentine tends to be constant, with larger skews occurring only at zone/serpentine boundaries (to accommodate a head switch or a longer seek). An unusual skew within a serpentine usually indicates some skipped tracks. Verifying that the unusual skew is actually caused by skipped tracks is difficult. Skipped tracks cause a hole in the radial direction only (so measuring angular position does not work). Radial distances can be measured using seek time (farther seek, longer time), but because the seek distance to seek time relation is highly non-linear and varies for each drive model, it is difficult to accurately count the number of skipped tracks.

Toshiba X300 track slipping. This diagram shows three physically-adjacent serpentines on the same surface side by side (thus, the x-axis is discontinuous). There appear to be around 263 missing tracks (~22 µm) at track number 514148, causing an unusually small serpentine (52 tracks instead of 315) and an unusual track skew (177°) at the location of the skipped tracks. The seek time profile is asymmetric because of the missing tracks.

Seagate Cheetah 15K.7 track slipping. Around 28 slipped tracks (~4 µm) between tracks 42033 and 42034. Attempting to measure the number of skipped tracks using an adjacent defect-free surface suggests 27 slipped tracks, but looking at the pattern of track start positions (skew) suggests 28 missing tracks.
The figures above show two examples of track slipping and my attempt at counting the number of skipped tracks. In the first plot (Toshiba X300), there appear to be around 263 skipped tracks between logical tracks 514147 and 514148. The second plot (Seagate 15K.7) shows around 28 skipped tracks between (logical) tracks 42033 and 42034 (on "Surface B").
The X300 plot shows three physically-adjacent serpentines on the same surface, with the middle one shaded blue. The logical track numbers are discontinuous because serpentines on the other 9 surfaces are omitted from the plot. In the top pane, track size is a constant 440 sectors, which indicates that there is no zone change. The middle serpentine is much smaller than the usual 315-track serpentines in this region of the disk. The two spikes are left over from the unedited plot, where track size changes when moving to a logically-adjacent serpentine on a different surface, which has been cut out. The second pane shows a seek profile with a reference sector somewhere within the middle serpentine. If these three serpentines contained only physically-adjacent tracks, one would expect the seek profile to be symmetric, but there appears to be a big section of the seek profile curve cut out between tracks 514147-514148. The third and fourth panes show track skew (angular position of the starting sector of each track, and skew relative to the previous track), showing that there is a discontinuity in the skew pattern at the same location. All three observations point to a discontinuity after track 514147, containing (315 - 52 = 263) physical tracks that are hidden.
The Seagate 15K.7 plot shows a different method for detecting missing physical tracks. We have already seen above that the seek profile plot can be used to compare radial positions between recording surfaces, which was used to examine the alignment of tracks between different surfaces. Here, I use the same method for a different purpose: I can measure physical radial distances on one surface by counting tracks on a different surface, by assuming track densities at the same radius are the same on different surfaces. In this plot, I show two adjacent serpentines on different surfaces (labeled Surface A and B in the plot), along with some suspected skipped tracks between track numbers 42033 and 42034. The second and third panes of the plot show seek profiles with the reference sector set to track 42034 and track 42033, respectively. The first clue that there are missing tracks is that both seek profiles are asymmetric and appear to have been cut off at tracks 42033-42034.
We can use seek profiles to measure radial distances between different surfaces. The two seek profiles show that track 42034 on Surface B is physically closest to track 41964 on Surface A, while track 42033 is closest to track 41992. If there were no skipped tracks, we would expect that moving the head a distance of one track on Surface B would also move the head by roughly one track on Surface A. Instead, a "one track" distance on Surface B was actually a 28-track distance on Surface A, which suggests 27 missing tracks on Surface B. Surface A (assumed defect-free) is being used as a ruler to measure physical distances on Surface B. Following the discontinuity of the skew pattern (bottom pane) between tracks 42033 and 42034 actually suggests there may be 28 tracks missing. In this particular example, I trust the skew pattern more, so I think 28 missing tracks is more likely than 27, but I have no way of proving it.
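The "ruler" arithmetic is simple enough to spell out; the closest-track pairs are the ones read off the two seek profiles above:

```python
# Physically-closest track on Surface A for each track on Surface B,
# as read off the two seek profiles
closest_on_a = {42033: 41992, 42034: 41964}

# A one-track logical step on Surface B corresponds to this many tracks on A
physical_step = abs(closest_on_a[42034] - closest_on_a[42033])
slipped = physical_step - 1  # one track of the step is the legitimate move
print(physical_step, slipped)  # 28 27
```

The one-track uncertainty (27 vs. 28) comes from reading peak positions off a seek profile; the skew pattern gives an independent second estimate.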
Missing physical tracks are difficult to detect and verify, but I have used two examples to show that it is possible to identify missing physical tracks and the approximate number of tracks missing. Using four independent observations increases confidence in the result: serpentine size change compared to neighbours, asymmetric or cut-off seek profile, using another recording surface as a ruler, and discontinuities in the track skew pattern.
Microbenchmarking Challenges
This section is a short discussion of some of the challenges I encountered while designing this set of microbenchmarks.
Disk cache and lookahead must be disabled
My microbenchmarks assume that read commands will actually be serviced by reading the disk, and not the cache. Some earlier work [2] tried to get around this by using writes instead of reads and assuming writes must wait until data reaches the disk (for safety against power loss). There are standard commands to disable caches on both ATA and SCSI disks, and these commands were functional on all but one of the drives I tested. The Maxtor 7405AV has a cache that cannot be disabled, but it appears that the cache only caches repeated reads to the same sector, so it was easy to work around this behaviour. Thus, I could use reads and avoid many of the other problems with writes (higher settling times for writes, destroying the data on the disk, and never being completely sure of the exact time when data reaches the disk media).
SAS controllers come in two kinds: HBA and RAID. Only HBA will work.
A SAS RAID controller allows the OS to see the RAID array, but does not allow sending SCSI commands directly to the disk drives. That means it is impossible to turn off the disk cache. A simpler HBA (Host Bus Adapter) is needed. An HBA simply attaches the disk to the system, much like I'm used to with IDE and SATA drives.
All microbenchmarks make assumptions about the thing being measured. These assumptions don't always hold.
These assumptions are especially likely to be wrong if the thing you're measuring is newer than your algorithm. The algorithms from earlier work [1, 2] often didn't work unmodified on the large variety of drives I tested. Indeed, even my algorithms don't work flawlessly on all of the drives, despite a lot of tuning effort, and I expect my algorithms to work even less well on drives I haven't seen before.
The "Skippy" algorithm proposed by Talagala et al. (from 1999) [2] assumed that tracks are laid out in cylinders using one of the head-first track layouts. As we've seen, nearly all disks after around 2000 no longer use this kind of track layout. The newer algorithms from Gim and Won (2010) [1] often still work, but some of their optimizations don't work reliably. Their zone-finding algorithm assumes that tracks within a zone or serpentine are all the same size and change infrequently, but in modern disks with sector slipping, track sizes can change very frequently, which can produce an occasional incorrect result. Their zone-finding algorithm skips over large groups of tracks at a time, assuming the track size does not change, then verifies whether the target is still a track boundary, and concludes that the whole group of tracks still has the same size. This fails when the total number of missing sectors (e.g., due to sector slipping) in a group of tracks is exactly a multiple of the assumed track size. This becomes more common with bigger drives (I tested up to 5 TB, they tested up to 320 GB) and drives with more track size variation.
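The failure mode can be demonstrated with a toy example. The sizes here are hypothetical: a zone of 500-sector tracks where two tracks have each lost 250 sectors to slipping, so the missing sectors total exactly one assumed track.

```python
# Hypothetical zone: 500-sector tracks, with 2 x 250 sectors slipped away
track_sizes = [500] * 12
track_sizes[2] = 250
track_sizes[6] = 250

# The set of physical track-boundary sector offsets
boundaries = {sum(track_sizes[:k]) for k in range(len(track_sizes) + 1)}

# The algorithm skips 10 tracks assuming 500 sectors/track and checks
# whether the landing sector is a track boundary
landing = 10 * 500
print(landing in boundaries)  # True, but it is the start of track 11, not 10
```

The boundary check passes even though track 10 actually starts 500 sectors earlier, so the skip-ahead optimization silently misses the two small tracks.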
My algorithms also make assumptions, and these can fail too:
- I assume a hard drive contains a stack of spinning disks, that data is stored in concentric tracks (not spirals) and sectors, and that there is one head stack assembly where one head can be used at a time. I have yet to see a hard drive that is a counter-example, but there have been occasional suggestions (and I think real products) of drives with two head stacks [10, 11, 12].
- My track boundary algorithm assumes that tracks have track skew, and that the track skew is greater than a few percent of a revolution and significantly less than a full revolution. This assumption does not hold on the ST-157A I tested: it has no skew (or 360° skew). Since this drive only uses one track size over the whole disk, it was easier to manually compute the track boundaries than to invent a new algorithm to work with zero skew. However, even on the other disks, this assumption does not always hold. Most disks have some tracks with near-zero skew, usually because of zone changes or track or sector slipping causing a few instances of unusual skew. There are also assumptions that read times do not have much random variation, or that disks do not have too many missing sectors. As a result, I needed to manually clean up errors in the track boundaries found by the algorithm on all of the drives.
- Disks that significantly deviate from the idealized behaviour cause problems. For example, my Seagate 7200.11 has many regions of the disk that take one extra revolution to perform a seek (seek error?), while my Seagate ST1 has seek errors at random.
- Similarly, it is difficult to find track boundaries in disk regions with tracks that have lost many of their sectors to defects (two of the P300 drives), or where there are large changes in track size (Samsung SV0432D).
Conclusions
This article discussed microbenchmarks for discovering some physical characteristics of how data is stored on hard drives. The ability to measure the angular position of sectors and the seek time between tracks can be used to build more complex measurements that allow determining the layout of sectors and tracks onto the disk platters. In addition to the track size and seek profile used in earlier work, I also found track skews useful in determining the track layout.
I tested hard drives from 45 MB to 5 TB, spanning 25 years of progress. This showed a wider variety of track layouts and odd behaviours than seen in earlier work. The wide variety of designs and corner cases also made the design of the microbenchmark algorithms more challenging.
The original question I set out to answer was whether one could measure the number of platters (or surfaces) in a hard drive. It turns out it is usually possible, but there is no easy algorithm to do so. Knowing the number of surfaces requires knowing the track layout, and the many different track layouts require combining different measurements to infer the result.
Detailed Results
This article got too long for one page. Measurement results for each drive are on Page 2.
Source Code
References
Cite
- H. Wong, Discovering Hard Disk Physical Geometry through Microbenchmarking, Sept., 2019. [Online]. Available: http://blog.stuffedcow.net/2019/09/hard-disk-geometry-microbenchmarking/
[Bibtex]@misc{disk-microbenchmarking, author={Henry Wong}, title={Discovering Hard Disk Physical Geometry through Microbenchmarking}, month={sep}, year=2019, url={http://blog.stuffedcow.net/2019/09/hard-disk-geometry-microbenchmarking/} }