
Given new disk architectures, what are blocksize considerations?

SmartProduction has adopted IBM's SDB ("system-determined block size") standard for determining optimal data set block size. Today, SDB is clearly an industry standard: it is supported and promoted by several operating system components, by IBM products (such as DFSMS and DFSORT), and by non-IBM performance-oriented products (such as SyncSort and CA-Sort). The optimal block size for a given device, data set, or application is derived from several technical considerations. The following considerations apply to all kinds of disk devices and user applications.

– Improved I/O performance: a larger block size reduces the number of actual I/O operations. As a result, CPU cycles are saved, and the application enters a wait state due to I/O activity less often. The parameters used to calculate the largest possible block size are track capacity, the maximum block size supported by the access method used, LRECL, and RECFM.

– Optimal disk space usage: for a given disk type and LRECL, a block size close to half the track capacity generally gives the best space utilization, since fewer inter-block gaps are wasted per track; this is why SDB typically chooses half-track blocking.
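To make the block-size arithmetic concrete, here is a small illustrative sketch (not part of the original answer) of the half-track-blocking rule that SDB applies to fixed-length (RECFM=FB) records. The 27,998-byte half-track figure for a 3390 device and the 32,760-byte access-method limit are real values; the function name optimal_fb_blksize and the overall structure are hypothetical and for illustration only.

```python
# Illustrative sketch of half-track blocking for RECFM=FB data sets.
# Constants are the well-known 3390 half-track block size and the
# QSAM/BSAM maximum block size; the helper name is hypothetical.

HALF_TRACK_3390 = 27998    # largest block that fits twice per 3390 track
ACCESS_METHOD_MAX = 32760  # maximum block size supported by the access method

def optimal_fb_blksize(lrecl: int,
                       half_track: int = HALF_TRACK_3390,
                       max_blksize: int = ACCESS_METHOD_MAX) -> int:
    """Largest multiple of LRECL that fits within both the half-track
    capacity and the access-method limit (half-track blocking)."""
    limit = min(half_track, max_blksize)
    if lrecl <= 0 or lrecl > limit:
        raise ValueError("LRECL must be positive and no larger than the limit")
    return (limit // lrecl) * lrecl

if __name__ == "__main__":
    # 80-byte card-image records: 27998 // 80 * 80 = 27920
    print(optimal_fb_blksize(80))
```

For 80-byte records the sketch yields a block size of 27,920 bytes, i.e. 349 records per block and two blocks per 3390 track, which matches the usual system-determined result for that case.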
