Contents

1.0 Introduction to the z13 System
Overview of the System z13
1.1 The A Frame and the Z Frame
1.1.1 A Frame
1.1.2 Z Frame
1.2 CPC Processor Drawers
1.2.1 Components of a CPC Processor Drawer
1.2.2 z13 CPC Processor Nodes
1.2.3 z13 8-Core Processor Design
1.2.4 Processor Unit (PU) and Storage Control (SC) Processors
1.2.5 IBM Specialized Processor Units and Optimized Instruction Sets
1.2.6 Spare and Capacity-on-Demand Processors
1.2.7 z13 - More Memory Makes a Difference
1.2.8 IBM Flash Express
1.2.9 IBM zAware
1.2.10 Connecting CPC Drawers
1.2.11 z13 System and I/O Buses
1.2.12 Graphical Summary of Processor Drawers, PUs and DDR3 RAIM
2.0 z13 External I/O Connectivity and Storage
2.1 PCIe (Peripheral Component Interconnect Express) Drawer Connectivity
2.2 PCIe FICON Adaptors
2.3 Types of FICON Cables
2.4 PCIe Open Systems Adapter (OSA)
2.4.1 Graphical Example of an Open Systems Adapter Architecture
2.5 Enterprise Data Center and Cloud Connectivity
3.0 z System Legacy Storage I/O Interfaces
3.1 Introduction to Legacy I/O Technologies and Overlapped Processing
3.1.1 What is an I/O Channel?
3.1.2 Overlapped Processing
3.1.3 What is Record Blocking?
3.1.4 Overlapped Processing and Multiprogramming
3.2 Identifying z System Legacy Physical Storage Devices
3.2.1 CHPID (channel ID), switch port number, control unit address, and unit address
3.2.2 IOCDS (Input/Output Control Data Set)

1.0 Introduction to the z13 System

Overview of the System z13

Video – Introducing the IBM z13 System - http://www-03.ibm.com/systems/z/hardware/z13.html
IBM z13 Mainframe - www.slideshare.net/DarrenDonaldson/ibm-z13-mainframe
z13 update - http://www.slideshare.net/StigQuistgaard/z13-update
The IBM z13 Server - http://www.vmworkshop.org/docs/2015/r9fgSpBF.pdf

The System z13 mainframe server is significantly larger than a single blade server, desktop, laptop, or any other common computing device. Its physical footprint on a data center floor is approximately 114 square feet. However, one z13 mainframe server may support 1) up to 8,000 virtual Linux systems, 2) up to 24 processor (PU) chips (not including storage controller and other support processors), providing 192 physical cores of which up to 141 are usable, 3) 10 terabytes of system memory, 4) 16 GBps I/O connections, and 5) an EAL 5+ security rating.

For extremely large processing requirements, up to 32 System z13 mainframes may be clustered (combined) using IBM's Parallel Sysplex technologies. For lower processing requirements, virtual z/OS public cloud-based systems are available at a cost-efficient price point. Available data suggests that z System cloud-based solutions provide a lower total cost of ownership once a data center's processing requirements exceed roughly ten Linux or Windows-based servers.

IBM announced the z13, billed as the most powerful and sophisticated computer system in the world. The z13 is the result of five years of development at a cost of over $1 billion. Using the world's fastest processor to deliver high-speed data encryption and embedded analytics, the z13 is aimed at fulfilling the needs of the new mobile app economy, and is capable of processing 2.5 billion transactions a day, the equivalent of 100 Cyber Mondays.

• The z13 microprocessor is 2x faster than the most common server processor, and the z13 has 300 percent more memory and 100 percent more bandwidth and vector processing power than was previously available.

• Each mobile transaction can trigger from four to 100 additional system interactions in what is known as the "starburst effect," so companies must not only supply the processing power and bandwidth to handle these interactions, but also ensure the security of each interaction point. The IBM z13, combined with the IBM MobileFirst Platform, can provide 2x the encryption speed of previously available systems for safer mobile business transactions.

• The z13's embedded analytics capability can perform real-time analytics to assist not only in fraud protection, but also in recognizing buyer purchasing habits, facilitating customer loyalty programs, and identifying cross-sell and upsell opportunities as they arise.

• The IBM z13 supports Hadoop for analysis of unstructured data, as well as the DB2 BLU for Linux in-memory database, giving the z13 platform Big Data analytics capabilities.

• The z13 can provide the basis for a company's cloud architecture, running up to 8,000 virtual servers and over 50 virtual servers per core. The z13 is based on open standards, with support for Linux and OpenStack. According to IBM's internal tests, the z13 can lower the cost of running a cloud by almost 50 percent compared to an x86/distributed server environment, with a 30 percent increase in performance.

• IBM will also preview a new version of the z/OS software that will offer enhanced in-memory analytics and enhanced analysis of mobile transactions.

• Since modern blade servers, desktop, and laptop computers are historically based on the IBM mainframe design, the hardware components of a z System server are very similar. The following is a summary of the physical components of a z System server.

1.1 The A Frame and the Z Frame

A typical data center has racks that hold blade servers, storage servers, routers, switches, etc. For example, a blade server rack may contain two to eight servers. The height of each blade server is measured in rack units, called U; one U is 1.75 inches. Therefore, a 4U blade server is seven inches in height.

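As a quick, illustrative check of the rack-unit arithmetic above (a sketch only; the 1.75-inch figure comes from the text), in Python:

    # Convert rack units (U) to inches; 1U = 1.75 inches, per the text above.
    def rack_units_to_inches(units: int) -> float:
        return units * 1.75

    print(rack_units_to_inches(4))  # 7.0 inches for a 4U blade server
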
The z13 is housed in two storage frames, called the A Frame and the Z Frame, which are similar to blade server racks. However, while multiple blade servers may be housed in one rack, a single z13 mainframe occupies one A Frame and one Z Frame.

1.1.1 A Frame

The A Frame contains the processors, system memory, and cache memory, housed in processor (CPC) drawers. The z13 generates a significant amount of heat, which must be removed by water cooling units that work much like a car radiator. Water enters the mainframe at approximately 40 degrees, is recirculated within about 4 seconds, and exits at approximately 70 degrees. In spite of the heat it generates, the z13 is twice as energy efficient as other hardware architectures running similar workloads.

1.1.2 Z Frame

The Z Frame contains the PCIe I/O drawers, optional battery backup (UPS), bulk power regulators (surge protection), and a display and keyboard. The System z13 display and keyboard are normally used during initial installation, during significant upgrades, and to provide a more secure, direct operator connection to the z13 server. z13 operators and system programmers can also manage the system remotely.

1.2 CPC Processor Drawers

The previous System z generation, the z12 mainframe, stored processors, system memory, and cache memory in a container called a "processor book," which looked similar to a desktop computer. The A Frame could vertically mount four processor books.

The z13 has replaced the processor "book" with a processor "drawer," which is mounted horizontally. While a z13 is currently limited to four processor drawers, the horizontally mounted drawer design will allow future System z platforms to introduce a larger number of processor/memory drawers.

1.2.1 Components of a CPC Processor Drawer

Each CPC processor drawer:

• Supports up to six (6) usable processor chips, each with 8 processor cores.
• Supports up to 3,200 GB of RAIM (Redundant Array of Independent Memory), which works like RAID 5 redundancy, except applied to system memory.
• Supports two Storage (memory) Controller processors and other support processors.
• Provides 480 MB of L4 cache.

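To make the drawer arithmetic concrete, here is an illustrative Python sketch that multiplies the figures quoted in this document (four drawers, six PU chips per drawer, eight cores per chip); the 141 configurable-core figure is quoted from the overview in section 1.0, not derived here.

    # Illustrative core-count arithmetic using figures quoted in the text.
    MAX_DRAWERS = 4          # CPC processor drawers per z13
    PU_CHIPS_PER_DRAWER = 6  # usable PU chips per drawer
    CORES_PER_PU_CHIP = 8    # cores per PU chip

    physical_cores = MAX_DRAWERS * PU_CHIPS_PER_DRAWER * CORES_PER_PU_CHIP
    print(physical_cores)  # 192 physical cores
    # Per the overview, up to 141 of these are configurable for customer use;
    # the rest serve as support processors and spares.
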
1.2.2 z13 CPC Processor Nodes

Each processor drawer is divided into two (2) logical processor nodes. As the preceding CPC processor drawer graphic shows, each logical node contains one-half of the drawer's processor resources. The previous z12 processor book design did not include the concept of a node.

While a processor node is a logical concept, the physical advantages of the node design are 1) increased water-cooling flow, 2) better isolation and increased speed of system buses, 3) more I/O buses, and 4) greater ability to manage LPARs and Type 1 hypervisors (to be discussed later).

1.2.3 z13 8-Core Processor Design

Each z13 physical processor chip contains 8 processor cores, labeled 0 through 7.

1.2.4 Processor Unit (PU) and Storage Control (SC) Processors

A CPC drawer consists of six Processor Unit (PU) chips and two Storage Controller (SC) processor chips.

Each PU can be characterized as a Central Processor (CP) designed to support a variety of specialized workloads, such as z/OS, Linux, DB2, and Parallel Sysplex clustering. z13 PUs provide the fastest processor performance in the world, and the difference increases significantly when specialized tasks are considered. Intel, AMD, and other processors are not specialized by workload: the same processor used for consumer social media workloads is used for virtualization and analytic workloads.

Each PU may have six, seven, or eight active cores. Each PU has a private on-board multi-channel DDR3 memory controller to access system memory, supporting a RAID-like configuration to recover from memory faults. Each core includes private L1, L2, and L3 storage.

A Storage Controller (SC) processor controls 1) the L4 cache, which is shared by a set of three PUs in each drawer, and 2) communication with processors in other drawers.

1.2.5 IBM Specialized Processor Units and Optimized Instruction Sets

The z13 MCM can support up to six physical PU chips. The z13 CP processor type supports a processor design and instruction set optimized for z/OS operating system workloads. While other processor manufacturers design and produce a single general-purpose instruction set, IBM supports multiple processor core types:

PU Processor Type – Instruction Set Design

CP – Central Processor. A core designed for z/OS workloads. Each IBM System z mainframe must have at least one CP processor. The CP supports a general-purpose instruction set.

IFL – Integrated Facility for Linux. An IBM mainframe (and Power Systems RISC, on IBM blade servers) processor dedicated to running the Linux operating system. On System z it can be used with or without z/VM. IFLs are one of three types of IBM mainframe processors expressly designed to reduce software costs, but by themselves they do not necessarily increase Linux or Java performance. The CF processor type may improve Linux and Java performance through Parallel Sysplex cluster computing or improve the ability to support a larger number of Linux virtual machines, e.g., up to 800 per mainframe.

CF – Coupling Facility. A core that runs in its own LPAR, along with a dedicated physical CP, and supports Parallel Sysplex. With Parallel Sysplex, a z/OS LPAR, a z/VM z/OS image, or a remote z/OS mainframe can act as part of a single z/OS system image. Parallel Sysplex combines data sharing and parallel computing to allow a cluster of up to 32 systems to share a workload for high performance and high availability.

The CF (coupling facility processor) manages the communication channels that share messages between independent z/OS systems so that they act as one image. The CF can cache common information, serialize messages, and lock shared resources to ensure data integrity.

zIIP – System z Integrated Information Processor. A special-purpose processor initially introduced to relieve the general mainframe central processors (CPs) of specific DB2 processing loads, e.g., parallel queries and improved DB2 support for JDBC, ODBC, and XML processing (web services).

zAAP – System z Application Assist Processor. Provides accelerated performance for Java and XML workloads (web services) under z/OS. This processor core instruction set was integrated into the zIIP processor in the newer z13 architecture.

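The descriptions above amount to a mapping from workload type to processor type. The following Python sketch is purely illustrative; the workload labels and the lookup function are invented for this example and are not an IBM interface.

    # Hypothetical summary of the processor-type descriptions above.
    PROCESSOR_FOR_WORKLOAD = {
        "z/OS general-purpose work": "CP",
        "Linux and z/VM guests": "IFL",
        "Parallel Sysplex coupling": "CF",
        "DB2 parallel queries, JDBC/ODBC, XML offload": "zIIP",
        "Java/XML under z/OS (pre-z13)": "zAAP (integrated into zIIP on z13)",
    }

    def processor_for(workload: str) -> str:
        # Default to a general-purpose CP when no specialty engine applies.
        return PROCESSOR_FOR_WORKLOAD.get(workload, "CP")

    print(processor_for("Linux and z/VM guests"))  # IFL
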
1.2.6 Spare and Capacity-on-Demand Processors

Each processor drawer can optionally designate one or more spare processors, and a z13 mainframe must have at least one spare processor. When z/OS detects a potential failure of an active processor, it can automatically activate the spare processor and switch the workload from the failing or failed processor, increasing reliability. Hardware component reliability and failover are designed into all z13 hardware components.

One way to improve the scalability of workloads is to include a Capacity-on-Demand (CoD) designated processor. CoD processors can be dynamically activated or deactivated in response to unpredictable workload fluctuations, in contrast to more permanent customer-initiated processor upgrades.

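As a rough illustration of the sparing behavior described above (detect a failing processor, activate a spare, and move its workload), here is a hypothetical Python sketch; the class and method names are invented for illustration and do not correspond to any IBM interface.

    # Hypothetical sketch of transparent processor sparing.
    class ProcessorPool:
        def __init__(self, active, spares):
            self.active = set(active)   # processors currently running work
            self.spares = list(spares)  # designated spare processors

        def handle_failure(self, failing):
            """Retire a failing processor and activate a spare in its place."""
            if failing in self.active and self.spares:
                replacement = self.spares.pop(0)
                self.active.remove(failing)
                self.active.add(replacement)
                return replacement
            return None

    pool = ProcessorPool(active=["PU0", "PU1"], spares=["PU_SPARE"])
    print(pool.handle_failure("PU1"))  # PU_SPARE takes over the failed PU1's work
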
1.2.7 z13 - More Memory Makes a Difference

Up to 10 TB of memory is available:

1. Transparently supports shrinking batch windows and helps meet service goals – no application changes are needed to gain the benefits.
2. Significant performance benefits can be gained by keeping data in memory and eliminating database I/O operations, e.g., larger DB2 buffer pools.
3. Online transaction processing can experience up to a 70% reduction in response time with more memory.
4. Fewer CPU cycles (and charges) are used for virtual paging.
5. Improves system performance, minimizes constraints, and simplifies management of applications when database middleware exploits the additional memory.
6. Improves real-time or in-transaction data analytics decision making.
7. Improves the real-to-virtual memory ratio, which allows deployment and support of more Linux workloads.

1.2.8 IBM Flash Express

IBM zEC12 zAware and Flash Express - http://www.slideshare.net/mlsonslideshare/ibm-zec12-zaware-and-flash-express-14433375

IBM introduced solid-state Flash Express memory cards in the z12 architecture. Solid-state drives (SSDs) had long been available, but their performance is limited by the access speed of external I/O bus technologies. Flash Express cards are installed inside the z12 and z13, which access Flash Express memory over internal buses.

Disk storage is accessed in milliseconds. Internal DDR3 RAIM (Redundant Array of Independent Memory) is accessed in nanoseconds. Flash Express memory is accessed in microseconds. IBM Flash Express is somewhat comparable to connecting a USB thumb drive directly to the system memory bus rather than through an external USB port.

In addition, IBM Flash Express memory is less expensive than DDR3 RAIM memory, improves performance for virtual paging and for DB2 and Java buffers compared to external solid-state drive arrays, and improves reliability: higher performance and lower risk at a lower cost.

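To put the latency comparison above side by side, the following Python sketch uses rough order-of-magnitude values implied by the text (nanoseconds for RAIM memory, microseconds for Flash Express, milliseconds for external disk); the specific numbers are illustrative, not measured.

    # Rough storage-hierarchy latencies (orders of magnitude only, per the text).
    LATENCY_SECONDS = {
        "DDR3 RAIM system memory": 100e-9,              # nanoseconds
        "Flash Express (internal PCIe flash)": 100e-6,  # microseconds
        "External disk storage": 10e-3,                 # milliseconds
    }

    fastest = LATENCY_SECONDS["DDR3 RAIM system memory"]
    for tier, latency in LATENCY_SECONDS.items():
        print(f"{tier}: roughly {latency / fastest:,.0f}x the latency of RAIM memory")
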
1.2.9 IBM zAware

IBM System z Advanced Workload Analysis Reporter (zAware) was designed to help troubleshoot system problems, analyze vast amounts of data, and reduce system recovery time. zAware is a self-learning, integrated analysis and reporting system that provides near real-time insight into the behavior of a z System, its middleware, and third-party products. zAware can analyze trends and diagnose intermittent problems across Sysplexes.

The z12 and z13 architectures are designed to provide improved diagnostic data. A zAware host partition can analyze client LPARs, Sysplexes, and other client zAware systems.

1.2.10 Connecting CPC Drawers

An IBM z13 mainframe server can contain up to four (4) CPC processor drawers, which may be combined into one Sysplex. The key hardware requirement for combining processor drawers into a Sysplex architecture is the ability to interconnect the drawers; PCI Express technologies and fiber optic cabling are the supporting technologies.

1.2.11 z13 System and I/O Buses

Buses are the interconnections (wires) between processors, system memory, and I/O devices. Review the following graphic.

Each node contains up to three Processor Units, one Storage Controller, and 480 MB of L4 cache. There are two nodes in a drawer. Processor speed is limited to 5 GHz.

Total system memory is limited to 10 TB. 20% of each DIMM (Dual Inline Memory Module) is used for ECC (error-correcting code) recovery.

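A quick sketch of the DIMM-overhead arithmetic stated above (20% of each DIMM reserved for error recovery): to present a given amount of usable memory, the machine needs that amount divided by 0.8 in raw DIMM capacity.

    # Arithmetic from the text: 20% of each DIMM is reserved for error recovery.
    RAIM_ECC_OVERHEAD = 0.20

    def raw_dimm_capacity_tb(usable_tb: float) -> float:
        return usable_tb / (1.0 - RAIM_ECC_OVERHEAD)

    print(raw_dimm_capacity_tb(10))  # 12.5 TB of installed DIMM capacity for 10 TB usable
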
1.2.12 Graphical Summary of Processor Drawers, PUs and DDR3 RAIM

2.0 z13 External I/O Connectivity and Storage

2.1 PCIe (Peripheral Component Interconnect Express) Drawer Connectivity

When you take a closer look at the "under the covers" picture, you can see that IBM provides two kinds of I/O drawers. An I/O drawer provides slots, much like a personal computer, into which I/O connectivity cards can be added. The "traditional" or "legacy" I/O drawer supports older connections, e.g., bus-and-tag and ESCON (Enterprise Systems Connection). The newer PCIe I/O drawer supports I/O subsystem internal bus connectivity, as well as external InfiniBand-to-PCIe interconnectivity. PCIe interface cards provide connectivity to IBM's proprietary channel-to-control-unit FICON (Fibre Connection) fiber optic cabling infrastructure.

2.2 PCIe FICON Adaptors

Fibre Connection (FICON) is a Fibre Channel technology that increases capacity and lowers cost compared to Enterprise Systems Connection (ESCON). FICON is often used with the IBM 64-bit mainframe z/Architecture and Geographically Dispersed Parallel Sysplex (GDPS), and a number of mainframes also support Fibre Channel Protocol (FCP), i.e., a Small Computer System Interface (SCSI) command set carried over Fibre Channel.

IBM FICON Express16S supports 2, 4, 8, or 16 gigabit-per-second data rates at distances of up to 100 km, and is capable of multiple concurrent data exchanges (a maximum of 32) in full duplex mode. FICON has replaced ESCON, and Cisco and other companies support FICON connections and cabling infrastructures.

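As a rough worked example of the link rate quoted above, 16 gigabits per second corresponds to about 2 gigabytes per second in each direction of a full-duplex link, ignoring encoding and protocol overhead; the Python sketch below just performs that unit conversion.

    # Unit conversion for the FICON Express16S link rate quoted above.
    link_gigabits_per_second = 16
    gigabytes_per_second = link_gigabits_per_second / 8  # 8 bits per byte
    print(f"~{gigabytes_per_second:.1f} GB/s per direction, ignoring encoding overhead")  # ~2.0 GB/s
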
PCIe FICON Adaptor

PCIe adaptors have three technology components:

1. Internal physical interfaces used to connect and communicate with a system drawer or z13 PCIe drawer,
2. External physical interfaces used to connect and communicate over external wired and optical cable, and
3. PCIe interface technologies and protocols used to communicate with external devices at the supported transmission speeds.

IBM PCIe Express technologies are used to communicate with external IBM hardware, whereas IBM OSA (Open Systems Adapter) PCIe technologies are used to communicate with non-IBM hardware.

Fanout PCIe adaptors provide embedded internal switch technologies, which reduce wiring and cabling in high-density data center I/O networks.

2.3 Types of FICON Cables

There are many different types of fiber optic connectors available on the market. The most common ones used in computer room environments are shown below. Today IBM uses only ESCON-Duplex and SC-Duplex (FCS) connectors on channels and adapters.

2.4 PCIe Open Systems Adapter (OSA)

The PCIe OSA supports Ethernet, Token Ring, and FDDI (Fiber Distributed Data Interface) connectivity for enterprise data center communications with third-party storage arrays, external switches (e.g., Cisco), and other servers.

2.4.1 Graphical Example of an Open Systems Adapter Architecture

2.5 Enterprise Data Center and Cloud Connectivity

Fibre Channel - https://en.wikipedia.org/wiki/Fibre_Channel
Fibre Channel Storage Area Networks (SAN) - http://www-03.ibm.com/systems/storage/san/
Fibre Channel switch - https://en.wikipedia.org/wiki/Fibre_Channel_switch

Video - Fibre Channel vs Ethernet - https://www.youtube.com/watch?v=Ac8824tFS8c
Video - Fibre Channel vs iSCSI - https://www.youtube.com/watch?v=QPcx-aW3Vf0

Graphical Example of Enterprise Data Center and Cloud Connectivity

3.0 z System Legacy Storage I/O Interfaces

3.1 Introduction to Legacy I/O Technologies and Overlapped Processing

There are three architectures that can be used to connect disk and other storage devices, e.g., Direct Access Storage Devices (DASD), tapes, and printers, to a computer: 1) Direct Attached Storage (DAS), 2) Network Attached Storage (NAS), and 3) Storage Area Networks (SAN). The following diagram illustrates three legacy storage architectures used for Direct Attached Storage (DAS).

3.1.1 What is an I/O Channel?

Mainframes introduced the I/O channel architecture to support overlapped processing and multiprogramming.

3.1.2 Overlapped Processing

Assume that a computer is to read payroll data stored on one tape and then write a payroll transaction record to a second tape device. Review the timing sequence when the CPU is required to perform the input, payroll calculation, and output tasks itself.

Non-Overlapped Processing (no channels or I/O processors)
  CPU – Read record from Tape 1:              Record 1 in time frame 1; Record 2 in time frame 4
  CPU – Calculate payroll:                    Record 1 in time frame 2; Record 2 in time frame 5
  CPU – Write payroll transaction to Tape 2:  Record 1 in time frame 3; Record 2 in time frame 6

The CPU is required to perform all input, calculation, and output processing steps. In the first time frame the CPU reads Record 1 from Tape 1. In the second time frame the CPU performs the payroll calculations for Record 1, and in the third time frame the CPU writes the modified Record 1 to Tape 2. Given the relative difference in performance between CPUs and tape devices in the 1960s, the CPU would be idle 98% of the processing time, waiting for the slower tape device. Non-overlapped systems were I/O bound, i.e., the CPU was underutilized.

While the previous picture may seem to imply that an I/O channel is just a connector, an I/O channel is actually a specialized I/O processor designed to work alongside the CPU. Modern I/O architectures rarely use the term "channel" any more, but every network card, SCSI card, USB root hub, etc. has a specialized I/O processor that performs the same tasks IBM introduced in the 1960s.

Now review the timing sequence when the CPU and two channels process the input, payroll calculation, and output tasks.

Overlapped Processing (a CPU and two I/O channels)
  Channel 1 – Read record from Tape 1:              Records 1 through 6 in time frames 1 through 6
  CPU       – Calculate payroll:                    Records 1 through 5 in time frames 2 through 6
  Channel 2 – Write payroll transaction to Tape 2:  Records 1 through 4 in time frames 3 through 6

In the first time frame, Channel 1 reads Record 1 from Tape 1. In the second time frame, Channel 1 reads Record 2 while the CPU performs the payroll calculations for Record 1. In the third time frame, Channel 1 reads Record 3, the CPU performs the payroll calculations for Record 2, and Channel 2 writes the modified Record 1 to Tape 2. Sounds great? But one should ask what happens if the tape device (or any storage device) is so slow that the CPU still has to wait on Channel 1 and Tape 1.

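The two timing tables above reduce to simple arithmetic: with no overlap, the CPU spends three time frames per record; with two channels, the read, calculate, and write steps for successive records proceed in parallel, so N records finish in roughly N + 2 time frames once the pipeline is full. The Python sketch below (illustrative only) computes both totals.

    # Time frames needed to process n records, per the tables above.
    def non_overlapped_frames(n_records: int) -> int:
        # The CPU performs read, calculate, and write serially: 3 frames per record.
        return 3 * n_records

    def overlapped_frames(n_records: int) -> int:
        # Channel 1 reads, the CPU calculates, and Channel 2 writes in parallel;
        # after a two-frame pipeline fill, one record completes every frame.
        return n_records + 2

    for n in (2, 6, 1000):
        print(n, non_overlapped_frames(n), overlapped_frames(n))
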
3.1.3 What is Record Blocking?

When one allocates a data set, one must specify the logical record length and the block size (or blocking factor). The logical record length is easy to understand: if the payroll department needs 200 characters to process a payroll transaction, then the record length should be 200 characters.

But consider the effect if one can tell Channel 1 to read three records at a time instead of one record at a time. This provides more efficient I/O processing: instead of making three separate trips to the pizza shop, one for each pizza, simply purchase three pizzas per visit. Reading or writing multiple records at a time is described as blocking records. Consider overlapped processing with a blocking factor of three 200-byte records, i.e., a block size of 600 bytes.

Overlapped Processing, Reading and Writing Three Records at a Time (a CPU and two I/O channels)
  Channel 1 – Read records from Tape 1:             Records 1, 2, and 3 in one block, then Records 4, 5, and 6 in the next block
  CPU       – Calculate payroll:                    Records 1 through 5, one per time frame, starting in time frame 2
  Channel 2 – Write payroll transactions to Tape 2: Records 1, 2, and 3 written as one block once all three are calculated

What determines the most efficient block size?

The most efficient block size is determined by the logical record length and the specific hardware characteristics of the storage device.

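To see why blocking matters, here is a small worked sketch using the figures above (200-byte logical records and a blocking factor of three); the 3,000-record file size is assumed purely for illustration.

    # Effect of record blocking on the number of physical I/O operations.
    import math

    LOGICAL_RECORD_LENGTH = 200  # bytes, from the text
    BLOCKING_FACTOR = 3          # logical records per block, from the text
    BLOCK_SIZE = LOGICAL_RECORD_LENGTH * BLOCKING_FACTOR  # 600 bytes

    def io_operations(total_records: int, blocking_factor: int) -> int:
        return math.ceil(total_records / blocking_factor)

    print(io_operations(3000, 1))                # 3000 I/O operations, unblocked
    print(io_operations(3000, BLOCKING_FACTOR))  # 1000 I/O operations with a 600-byte block
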
3.1.4 Overlapped Processing and Multiprogramming

The use of I/O channels to provide overlapped processing, together with an appropriate block size, does not remove every limitation of I/O-bound systems; the potential for an idle processor, or underutilized processing capacity, always exists. Multiprogramming architectures, however, are designed to automatically switch to another job or task when the CPU detects that it is idle waiting for an I/O device. Batch processing systems require no user intervention and can be scheduled at non-peak processing times. The reality is that most large-scale systems (not your laptop) are more often process bound, i.e., they have too many tasks and too many input and output requirements. Other strategies to address the limitations of process-bound systems include increased processor speed, increased RAM to reduce virtual memory thrashing, multicore processors, multiprocessing, and cluster architectures.

3.2 Identifying z System Legacy Physical Storage Devices

3.2.1 CHPID (channel ID), switch port number, control unit address, and unit address

Every operating system provides a way to identify an I/O device physically and a way to identify it logically. For example, Windows may physically identify an I/O device by an interrupt, an I/O address, and a DMA channel. However, users and applications rarely identify an I/O device by those physical attributes; to be more user-friendly, a Windows user may identify a disk as drive C: or an attached printer as LPT1:.

z/OS may physically identify a device by a CHPID (channel ID), switch port number, control unit address, and unit address. Remember that a control unit may have many different devices connected to it. In PC terms, a SCSI controller may have 15 disk drives attached to it, each identified by a unique SCSI ID. Likewise, an IBM 3990 control unit may have many disk drives attached to it, each identified by a unit address.

Because a z/OS physical I/O address is complicated, JCL statements, commands, user interfaces, and error messages use device numbers instead. A device number is a unique combination of four hexadecimal digits, e.g., 183F. A device number is easier to understand than a physical I/O address, but there is another advantage. Assume that an existing JCL statement refers to a given device, e.g., a DASD (Direct Access Storage Device), by its physical unit address instead of by a device number, and suppose that this address changes; then all of the JCL statements referring to it also need to be changed or rewritten. On the other hand, if the JCL contains a logical device number that maps to a physical I/O address, the device number in the JCL does not have to be changed when the physical I/O address changes. This is exactly how Internet domain names are associated with IP addresses: if Google decides to change its IP address, users do not care, since google.com continues to work.

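As an illustration of the indirection described above (analogous to DNS), the Python sketch below maps a logical device number to a hypothetical physical path made up of a CHPID, switch port, control unit address, and unit address; every value shown is invented for the example.

    # Hypothetical mapping of a logical device number to its physical I/O path.
    DEVICE_MAP = {
        "183F": {"chpid": "10", "switch_port": "2A", "control_unit": "1800", "unit_address": "3F"},
    }

    def physical_path(device_number: str) -> dict:
        return DEVICE_MAP[device_number]

    # JCL and operators refer only to device number 183F; if the physical path changes,
    # only this mapping is updated, just as a DNS name can move to a new IP address.
    print(physical_path("183F"))
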
3.2.2 IOCDS (Input/Output Control Data Set)

Just as DNS maps a logical domain name to an IP address, z/OS maps a logical device number to a physical I/O address. An I/O control file, e.g., the IOCDS (Input/Output Control Data Set) or IODF (Input/Output Definition File), records the mapping from each device number to its physical I/O address. During the boot process, more properly called IPL (Initial Program Load), this definition is read and stored in system memory as a collection of Unit Control Blocks (UCBs).

System programmers use device numbers more than application programmers do. When an application needs access to a data set, it would otherwise have to provide both the name of the data set and the device number; given the large number of device numbers and the possibility that a device number may change, another facility is used instead. Application programmers may rely on cataloged data sets: when a data set is cataloged, its device number and disk volume are recorded. As long as the data set has a unique name, z/OS looks up the data set name in the catalog to determine the device number of the DASD and the volume name.

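The catalog lookup described above can be sketched the same way: an application supplies only a data set name, and the catalog (shown here as a plain dictionary with invented entries) supplies the volume and device number.

    # Hypothetical catalog: data set name -> volume serial and device number.
    CATALOG = {
        "PAYROLL.MASTER.DATA": {"volume": "VOL001", "device_number": "183F"},
    }

    def locate(dataset_name: str) -> dict:
        # Resolve a cataloged data set to its volume and device number at open time.
        return CATALOG[dataset_name]

    print(locate("PAYROLL.MASTER.DATA"))
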