IGNOU MCS-012: Computer Organisation and Assembly Language Programming


Sunday, August 18, 2019






For example, if a cache read cycle takes 20 ns and a main memory read cycle takes several times longer, then for four consecutive references to the same block, the first reference brings the main memory contents into the cache and the next three are served from the cache. The basic characteristic of cache memory is its fast access time.

MCS012 Computer Organization And ALP (IGNOU Help book for MCS-012 in English Medium)

Therefore, very little or no time must be wasted when searching for words in the cache. The transfer of data from main memory to cache memory is referred to as a mapping process. The mapping procedure for the cache organization is of three types: associative mapping, direct mapping, and set-associative mapping. Let us consider an example of a memory organization, as shown in Figure 15, in which the main memory can store 32K words of 12 bits each and the cache is capable of storing a fixed number of blocks at any time; each block in the present example is 24 bits, i.e., two main memory words.

Cache Memory. For every word stored in the cache, there is a duplicate copy in the main memory. The CPU communicates with both memories. If there is a hit, the CPU uses the relevant 12-bit word from the 24-bit cache data.

If there is a miss, the CPU reads the block containing the relevant word from the main memory. So the key here is that a cache must store both the address and the data portions of the main memory word, to ascertain whether the given information is available in the cache or not. However, let us assume a block size of one memory word for the following discussions. Associative Mapping. The most flexible and fastest cache organization uses an associative memory, as shown in the figure. The associative memory stores both the address and the data of the memory word.

This permits any location in the cache to store any word from the main memory. The 15-bit address value is shown as a five-digit octal number, and its corresponding 12-bit word is shown as a four-digit octal number. A CPU address of 15 bits is placed in the argument register and the associative memory is searched for a matching address. If the address is found, the corresponding 12-bit data is read and sent to the CPU.

If no match is found, the main memory is accessed for the word, and the address-data pair is then transferred to the associative cache memory. This address checking is done simultaneously for the complete cache in an associative way. In the general case, there are 2^k words in the cache memory and 2^n words in the main memory. The n-bit memory address is divided into two fields: the k least significant bits form the index field and the remaining n - k bits form the tag field. The direct mapping cache organization uses the n-bit address to access the main memory and the k-bit index to access the cache.
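The associative lookup just described can be sketched in a few lines of Python. This is only an illustration, not code from the study material: the dictionary stands in for the CAM's parallel address search, and the class name `AssociativeCache` and the FIFO replacement policy are assumptions made for the example.

```python
class AssociativeCache:
    """Fully associative cache sketch: any memory word may occupy any
    cache slot.  The dict models the CAM's parallel address search; a
    real CAM compares every stored address simultaneously in hardware."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = {}   # address -> data, like the CAM's address-data pairs
        self.order = []   # FIFO order, used when the cache is full

    def read(self, address, main_memory):
        if address in self.slots:             # hit: matching address found
            return self.slots[address], True
        data = main_memory[address]           # miss: go to main memory
        if len(self.slots) >= self.capacity:  # cache full: evict oldest pair
            victim = self.order.pop(0)
            del self.slots[victim]
        self.slots[address] = data            # transfer address-data pair to cache
        self.order.append(address)
        return data, False

# Octal addresses and data, echoing the five-digit / four-digit octal notation.
memory = {0o01000: 0o3450, 0o02777: 0o6710}
cache = AssociativeCache(capacity=2)
print(cache.read(0o01000, memory))   # miss: fetched from main memory
print(cache.read(0o01000, memory))   # hit: served from the cache
```

The first read of any address is a miss that loads the cache; the second read of the same address is a hit.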

The internal organization of the words in the cache memory is as shown in the figure. Each word in the cache consists of the data word and its associated tag. When a new word is first brought into the cache, the tag bits are stored alongside the data bits. When the CPU generates a memory request, the index field of the address is used to access the cache. The tag field of the CPU address is compared with the tag in the word read from the cache. If the two tags match, there is a hit and the desired data word is in the cache.

If there is no match, there is a miss and the required word is read from the main memory. Let us consider the numerical example shown in the figure. Suppose that the CPU wants to access a word at a given address. The index part of that address is used to access the cache, and the two tags are then compared.

Say the cache tag is 00 but the address tag is 02; this does not produce a match. Therefore, the main memory is accessed and the data word is transferred to the CPU. The cache word at that index address is then replaced with a tag of 02 and the new data. A third type of cache organization, called set-associative mapping, is an improvement on the direct mapping organization in that each word of the cache can store two or more words of memory under the same index address.
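The direct-mapped tag/index mechanics can be sketched as follows. The names (`split_address`, `DirectMappedCache`) are invented for this illustration; the 15-bit address with a 9-bit index matches the worked example, where addresses 00000 and 02000 (octal) share index 000 but carry tags 00 and 02.

```python
def split_address(addr, k_bits):
    """Split an address into (tag, index) for a direct-mapped cache."""
    index = addr & ((1 << k_bits) - 1)   # k low-order bits select the cache word
    tag = addr >> k_bits                 # remaining high-order bits form the tag
    return tag, index

class DirectMappedCache:
    """Direct-mapped cache sketch: each index holds exactly one (tag, data)."""

    def __init__(self, k_bits):
        self.k_bits = k_bits
        self.lines = [None] * (1 << k_bits)   # each line: (tag, data)

    def read(self, addr, main_memory):
        tag, index = split_address(addr, self.k_bits)
        line = self.lines[index]
        if line is not None and line[0] == tag:   # tags match: hit
            return line[1], True
        data = main_memory[addr]                  # miss: fetch from main memory
        self.lines[index] = (tag, data)           # replace the word at this index
        return data, False

memory = {0o00000: 0o1220, 0o02000: 0o5670}
cache = DirectMappedCache(k_bits=9)
cache.read(0o00000, memory)              # miss: tag 00 stored at index 000
data, hit = cache.read(0o02000, memory)  # same index 000, tag 02: miss, line replaced
print(oct(data), hit)                    # → 0o5670 False
```

Note that two addresses with the same index evict each other, which is exactly the weakness set-associative mapping addresses.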

Each data word is stored together with its tag, and the number of tag-data items in one word of the cache is said to form a set. Let us consider an example of a set-associative cache organization for a set size of two, as shown in the figure. Each index address refers to two data words and their associated tags.

An index address of nine bits can accommodate 512 words. In general, a set-associative cache of set size k will accommodate k words of main memory in each word of cache. The main problems associated with writing into cache memories arise because the contents of cache and main memory can be altered by more than one device. This can result in inconsistencies between the values in the cache and in main memory. In the case of multiple CPUs with separate caches, a word altered in one cache must automatically invalidate the corresponding word in the other caches.
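A set-associative lookup can be sketched by letting each index hold a small list of tag-data pairs. The class name and the FIFO eviction within a set are assumptions for the illustration; only the tag-comparison-within-a-set idea comes from the text.

```python
class SetAssociativeCache:
    """k-way set-associative cache sketch: each index (set) holds up to
    `ways` (tag, data) pairs, so words sharing an index can coexist."""

    def __init__(self, index_bits, ways):
        self.index_bits = index_bits
        self.ways = ways
        self.sets = [[] for _ in range(1 << index_bits)]

    def read(self, addr, main_memory):
        index = addr & ((1 << self.index_bits) - 1)
        tag = addr >> self.index_bits
        for entry in self.sets[index]:     # compare against every tag in the set
            if entry[0] == tag:
                return entry[1], True      # hit
        data = main_memory[addr]           # miss: fetch from main memory
        s = self.sets[index]
        if len(s) >= self.ways:            # set full: evict the oldest entry
            s.pop(0)
        s.append([tag, data])
        return data, False

# Two addresses with the same 9-bit index now coexist in a 2-way set.
memory = {0: 0o1220, 512: 0o5670}
cache = SetAssociativeCache(index_bits=9, ways=2)
cache.read(0, memory)
cache.read(512, memory)
print(cache.read(0, memory))     # → (656, True): still a hit, unlike direct mapping
```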

Write-through: write the data into the cache as well as the main memory. The other CPU-cache combinations have to watch the traffic to the main memory and make suitable amendments to the contents of their caches. The disadvantage of this technique is that a bottleneck is created due to the large number of accesses to the main memory by the various CPUs.

Write-back: in this method, updates are made only in the cache, setting a bit called the update bit. Only those blocks whose update bit is set are written back to the main memory when they are replaced. An instruction cache is one which is employed for accessing only instructions and nothing else.
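The write-back policy with an update bit can be sketched as follows. This is a minimal illustration with invented names (`WriteBackCache`, the `dirty` flag standing in for the update bit); it shows only the write path.

```python
class WriteBackCache:
    """Write-back sketch: writes touch only the cache and set the update
    (dirty) bit; main memory is updated only when a dirty block is replaced."""

    def __init__(self, index_bits):
        self.index_bits = index_bits
        self.lines = [None] * (1 << index_bits)   # each line: (tag, data, dirty)

    def write(self, addr, data, main_memory):
        index = addr & ((1 << self.index_bits) - 1)
        tag = addr >> self.index_bits
        old = self.lines[index]
        if old is not None and old[0] != tag and old[2]:
            # Replacing a dirty block: write it back to main memory first.
            old_addr = (old[0] << self.index_bits) | index
            main_memory[old_addr] = old[1]
        self.lines[index] = (tag, data, True)     # update only the cache, mark dirty

memory = {}
cache = WriteBackCache(index_bits=2)
cache.write(1, "a", memory)
print(memory)                 # → {}: main memory untouched so far
cache.write(5, "b", memory)   # address 5 maps to the same index as address 1
print(memory)                 # → {1: 'a'}: dirty block written back on replacement
```

Contrast this with write-through, where every `write` would also store into `main_memory` immediately.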

The advantage of such a cache is that, as instructions do not change, we need not write the instruction cache back to memory, unlike a data cache. When a program is loaded into the main memory, its successive instructions are stored in successive memory modules. During the execution of the program, when the processor issues a memory fetch command, the memory access system creates n consecutive memory addresses and places them in the Memory Address Register in the right order.

Since instructions are normally executed in the sequence in which they are written, the availability of n successive instructions in the CPU avoids a memory access after each instruction execution, and the total execution time speeds up. Obviously, fetching successive instructions is not useful when a branch instruction is encountered during the course of execution.
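The interleaved fetch of n consecutive addresses can be sketched numerically. Low-order interleaving (module = address mod n) is assumed here, since it is what makes consecutive instructions fall into consecutive modules; the function names are invented for the example.

```python
def module_and_offset(addr, n_modules):
    """Low-order interleaving: consecutive addresses fall in consecutive
    modules, so n successive fetches can proceed in parallel."""
    return addr % n_modules, addr // n_modules

def fetch_burst(start, n_modules):
    """Addresses generated for one memory-fetch command: n consecutive
    addresses, one landing in each module."""
    return [module_and_offset(start + i, n_modules) for i in range(n_modules)]

# Four modules: addresses 8, 9, 10, 11 land in modules 0, 1, 2, 3.
print(fetch_burst(8, 4))   # → [(0, 2), (1, 2), (2, 2), (3, 2)]
```

Because each of the four addresses hits a different module, all four words can be read in roughly the time of one module access.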

The method is quite effective in minimising the memory-processor speed mismatch because branch instructions do not occur frequently in a program. Figure 20 illustrates the memory interleaving architecture. A memory unit accessed by the content of the data is called an associative memory or content-addressable memory (CAM).

This type of memory is accessed simultaneously and in parallel on the basis of data content rather than by specific address or location. When a word is written in an associative memory, no address is given. The memory is capable of finding an empty unused location to store the word. When a word is to be read from an associative memory, the content of the word, or part of the word, is specified. The memory locates all words, which match the specified content, and marks them for reading.

Because of its organization, the associative memory is uniquely suited to do parallel searches by data association. Moreover, searches can be done on an entire word or on a specific field within a word.

An associative memory is more expensive than a random access memory because each cell must have storage capability as well as logic circuits for matching its content with an external argument. For this reason, associative memories are used in applications where the search time is very critical and must be very short. Hardware Organization. The block diagram of an associative memory is shown in the figure. It consists of a memory array and logic for m words with n bits per word.

The argument register A and key register K each have n bits, one for each bit of a word. The match register M has m bits, one for each memory word. Each word in memory is compared in parallel with the content of the argument register; the words that match the bits of the argument register set a corresponding bit in the match register.

After the matching process, those bits in the match register that have been set indicate the fact that their corresponding words have been matched. Reading is accomplished by a sequential access to memory for those words whose corresponding bits in the match register have been set.

The key register provides a mask for choosing a particular field or key in the argument word. The entire argument is compared with each memory word if the key register contains all 1s.

Otherwise, only those bits in the argument that have 1s in their corresponding positions of the key register are compared. Thus the key provides a mask, or identifying information, which specifies how the reference to memory is made. To illustrate with a numerical example, suppose that the argument register A and the key register K have the bit configurations shown in the figure.
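The masked comparison can be sketched as follows. The bit patterns are illustrative (the original figure's values are not reproduced here), chosen so that only word 2 matches when the key masks all but the three leftmost bits; the function name is invented for the example.

```python
def cam_search(words, argument, key):
    """Associative search sketch: compare the argument with every stored
    word, but only in bit positions where the key register holds a 1.
    Returns the match register as a list of 0/1 bits, one per word."""
    mask = key  # 1-bits in the key select the field to compare
    return [1 if (w & mask) == (argument & mask) else 0 for w in words]

A = 0b101_111100       # argument register (illustrative value)
K = 0b111_000000       # key register: compare only the three leftmost bits
words = [0b100_111100, # word 1: leftmost field 100 - no match
         0b101_000001, # word 2: leftmost field 101 - match
         0b110_110000] # word 3: leftmost field 110 - no match
print(cam_search(words, A, K))   # → [0, 1, 0]: only word 2 sets its match bit
```

Reading would then proceed sequentially through the words whose match bits are set.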

Word 2 matches the unmasked argument field because the three leftmost bits of the argument and the word are equal. Check Your Progress 2. What is a RAID? What are the techniques used by RAID for enhancing reliability? How can the cache memory and interleaved memory mechanisms be used to improve the overall processing speed of a computer system? Where can we find main memory location 25 in the cache if (a) associative mapping, (b) direct mapping, or (c) 2-way set-associative (2 blocks per set) mapping is used?

How is a given memory word (memory location 25, as above) located in the cache for the example above under (a) associative, (b) direct and (c) 2-way set-associative mapping? A computer system has a 4K-word cache organised in a block-set-associative manner with 4 blocks per set and 64 words per block. What is the number of bits in the index and block-offset fields of the main memory address format? In a memory hierarchy system, programs and data are first stored in auxiliary or secondary memory.

The program and its related data are brought into the main memory for execution. But what if the size of the memory required for the program is more than the size of the main memory? Virtual memory is a concept used in some large computer systems that permits the user to construct programs as though a large memory space were available, equal to the totality of secondary memory.

Each address generated by the CPU goes through an address mapping from the so-called virtual address to a physical address in the main memory.

Virtual memory is used to give programmers the illusion that they have a very large memory at their disposal, even though the computer actually has a relatively small main memory. A Virtual memory system provides a mechanism for translating program-generated addresses into correct main memory locations. This is done dynamically, while programs are being executed in the CPU. The translation or mapping is handled automatically by the hardware by means of a mapping table.

Address Space and Memory Space. An address used by a programmer will be called a virtual address, and the set of such addresses the address space. An address in the main memory is called a physical address. The set of such locations is called the memory space.

Thus, the address space is the set of addresses generated by programs as they reference instructions and data; the memory space consists of the actual main memory locations directly addressable for processing. Suppose that the computer has auxiliary memory for storing information equivalent to the capacity of 16 main memories.

In a multiprogramming computer system, programs and data are transferred to and from auxiliary memory and main memory based on demands imposed by the CPU. Suppose that program 1 is currently being executed in the CPU.

Program 1 and a portion of its associated data are moved from secondary memory into the main memory, as shown in the figure. Portions of programs and data need not be in contiguous locations in memory, since information is moved in and out and empty spaces may be available in scattered locations in memory.

In our example, the address field of an instruction code will consist of 20 bits, but physical memory addresses must be specified with only 16 bits. Thus the CPU will reference instructions and data with a 20-bit address, but the information at this address must be taken from physical memory, because access to auxiliary storage for individual words would be prohibitively slow.

A mapping table is then needed, as shown in Figure 23, to map a virtual address of 20 bits to a physical address of 16 bits. The mapping is a dynamic operation, which means that every address is translated immediately as a word is referenced by the CPU.
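The dynamic mapping can be sketched with a page table. The 20-bit virtual and 16-bit physical widths come from the example above; the 4K-word page size is an assumption made for this sketch (the text does not specify one), and the dictionary page table and function name are likewise illustrative.

```python
PAGE_BITS = 12   # assumed page size of 4K words (not specified in the text)

def translate(virtual_addr, page_table):
    """Map a 20-bit virtual address to a 16-bit physical address: split
    the virtual address into page number and offset, then look the page
    number up in the mapping table to find its frame in main memory."""
    page = virtual_addr >> PAGE_BITS               # high-order bits: page number
    offset = virtual_addr & ((1 << PAGE_BITS) - 1) # low-order bits: unchanged
    if page not in page_table:
        raise KeyError(f"page fault: page {page} not in main memory")
    frame = page_table[page]                       # frame number in main memory
    return (frame << PAGE_BITS) | offset

# Pages 0 and 7 of the program are currently resident, in frames 3 and 1.
page_table = {0: 3, 7: 1}
print(oct(translate(0o0070123, page_table)))   # → 0o10123: page 7 -> frame 1
```

A reference to a non-resident page raises the sketch's stand-in for a page fault, the event that triggers a transfer from auxiliary memory.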

Till now we have discussed various memory components. But how is the memory organised in a physical computer? Let us discuss the various kinds of memory technologies used in personal computers. From the early days of semiconductor memory until the early 1990s, memory was manufactured, bought and installed as single chips. Chip density went from 1K bits to 1M bits and beyond, but each chip was a separate unit. Early PCs often had empty sockets into which additional memory chips could be plugged, if and when the purchaser needed them.

A group of chips, typically 8 to 16, is mounted on a tiny printed circuit board and sold as a unit; the entire module then holds 32MB. The first SIMMs (Single Inline Memory Modules) had 30 connectors and delivered 8 bits at a time; the other connectors were used for addressing and control. A later SIMM had 72 connectors and delivered 32 bits at a time.

For a machine like the Pentium, which expected 64 bits at once, 72-connector SIMMs were paired, each one delivering half the bits needed. A DIMM (Dual Inline Memory Module) is capable of delivering 64 data bits at once. Each DIMM has 84 gold-plated connectors on each side, for a total of 168 connectors. How they are mounted on a motherboard is shown in Figure 24(c). The basic building block of the main memory remains the DRAM chip, as it has for decades.

Until recently, there had been no significant changes in DRAM architecture since the early 1970s. Newer architectures such as SDRAM and RDRAM are now common; a third one, Cache DRAM, is also very popular.

In a typical DRAM, the processor presents addresses and control levels to the memory, indicating that a set of data at a particular location in memory should be either read from or written into the DRAM. After a delay, the access time, the DRAM either writes or reads the data. The DRAM performs various internal functions, such as activating the high capacitance of the row and column lines, sensing the data and routing the data out through the output buffers.

The processor must simply wait through this delay, slowing system performance. With synchronous access, the DRAM moves data in and out under control of the system clock. The processor or other master issues the instruction and address information, which is latched on to by the DRAM. The DRAM then responds after a set number of clock cycles. In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed.

This mode is useful when all the bits to be accessed are in sequence and in the same row of the array as the initial access. In addition, the SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism.

The mode register specifies the burst length, which is the number of separate units of data synchronously fed onto the bus. The register also allows the programmer to adjust the latency between receipt of a read request and the beginning of data transfer.

The SDRAM performs best when it is transferring large blocks of data serially, as in applications like word processing, spreadsheets, and multimedia. RDRAM chips are vertical packages with all pins on one side. The chip exchanges data with the processor over 28 wires no more than 12 centimeters long. The special RDRAM bus delivers address and control information using an asynchronous block-oriented protocol. After an initial 480 ns access time, this produces a 1.6 GBps data rate.

This request contains the desired address, the type of operation and the number of bytes in the operation. Cache DRAM integrates a small SRAM cache onto a DRAM chip. The SRAM can be used as a true cache consisting of a number of cache lines; subsequent accesses to the chip then result in accesses solely to the SRAM. Check Your Progress 3. How many bits are needed to specify an instruction address for this machine? In this unit, we have discussed the details of the memory system of the computer.

First we discussed the concept and the need of the memory hierarchy. Memory hierarchy is essential in computers as it provides an optimised low-cost memory system. The importance of high-speed memories such as cache memory, interleaved memory and associative memories are also described in detail. The high-speed memory, although small, provides a very good overall speed of the system due to locality of reference. The unit also contains details on Virtual Memory. For more details on the memory system you can go through further units.

It is a commonly used memory in embedded systems. Hence, there are as many memory addresses as there are words. Each word can be thought of as being stored in an 8-bit register, and the registers are connected to a common data bus internal to the chip.

Using the structure shown in Figure 3(b), it requires only a 6-bit address input. A disk array, known as a RAID system, is a mass storage device that uses a set of hard disks and hard disk drives with a controller mounted in a single box.

All the disks of a disk array form a single large storage unit. RAID systems were developed to provide large secondary storage capacity with enhanced performance and enhanced reliability. The performance gain comes from the data transfer rate, which is much higher than that of an individual disk.

The reliability can be achieved by two techniques: mirroring (the system makes exact copies of files on two hard disks) and striping (a file is partitioned into smaller parts and different parts of the file are stored on different disks). The cache memory is a very fast, small memory placed between the CPU and main memory whose access time is closer to the processing speed of the CPU. It acts as a high-speed buffer between the CPU and main memory and is used to temporarily store data and instructions needed during current processing.
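The mirroring and striping techniques mentioned in the RAID answer above can be sketched in a couple of lines. The function names and the round-robin striping pattern are assumptions for the illustration; real RAID levels stripe at block granularity with parity schemes not shown here.

```python
def stripe(data, n_disks):
    """Striping sketch: partition the data round-robin across disks so the
    parts can be transferred in parallel (higher rate than one disk)."""
    return [data[i::n_disks] for i in range(n_disks)]

def mirror(data, n_copies=2):
    """Mirroring sketch: keep exact copies on separate disks for reliability."""
    return [list(data) for _ in range(n_copies)]

blocks = [0, 1, 2, 3, 4, 5]
print(stripe(blocks, 3))   # → [[0, 3], [1, 4], [2, 5]]: one sublist per disk
print(mirror(blocks))      # two identical copies, one per disk
```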

In memory interleaving, the main memory is divided into n equal-size modules. When a program is loaded into the main memory, its successive instructions lie in successive modules and are thus also available to the CPU ahead of time; this avoids a memory access after each instruction execution and speeds up the total execution time. In associative mapping, the block can be anywhere in the cache. In direct mapping, the index part of the address selects a cache location, and the tag is used to check whether the given address is present there. In the third scheme, the tag is used to check whether a given address is in a specified set; this cache has 2 blocks per set, hence the name two-way set-associative cache.

For associative mapping, the block address is checked directly against all locations of the cache memory. Since 1 set has 4 blocks, there are 16 sets. In a set there are 4 blocks, so the block field needs 2 bits. Each block has 64 words, so the block-offset field has 6 bits. The index field is of 4 bits.
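The field widths worked out above can be checked with a few lines (the function name is invented; the figures 4K words, 4 blocks per set and 64 words per block come from the exercise itself):

```python
from math import log2

def cache_fields(cache_words, blocks_per_set, words_per_block):
    """Field widths for a block-set-associative cache address."""
    blocks = cache_words // words_per_block   # total blocks in the cache
    sets = blocks // blocks_per_set           # sets = blocks / associativity
    return {
        "index_bits": int(log2(sets)),                    # selects one set
        "block_offset_bits": int(log2(words_per_block)),  # selects a word in a block
    }

print(cache_fields(4096, 4, 64))   # → {'index_bits': 4, 'block_offset_bits': 6}
```

4096 / 64 gives 64 blocks, 64 / 4 gives 16 sets, hence 4 index bits and 6 block-offset bits, agreeing with the answer.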

The block offset is of 6 bits. Check Your Progress 3. 1. An RDRAM module sends data to the controller synchronously with the clock to the master, and the controller sends data to an RDRAM synchronously with the clock signal in the opposite direction.

A SIMM carries chips on one side of the printed circuit board, while in a DIMM chips can be mounted on both sides of the board. The various components that form the system are linked through buses that transfer instructions, data, addresses and control information among the components. The block diagram of a microcomputer system is shown in the figure.

External devices that are under the direct control of the computer are said to be connected on-line. These devices are designed to read information into or out of the memory unit upon command from the CPU and are considered to be part of the computer system. We can broadly classify peripherals or external devices into 3 categories: human-readable, machine-readable, and communication devices. Peripherals connected to a computer need special communication links for interfacing them with the CPU; the purpose of these links is to resolve the differences that exist between the central computer and each peripheral. The major differences are: Peripherals are electromagnetic and electromechanical devices, and their operation is different from the operation of the CPU and the memory, which are electronic devices.

The data transfer rate of peripherals is usually slower than the transfer rate of the CPU, and consequently a synchronization mechanism may be needed. Data codes and formats in peripherals differ from the word format in the CPU and memory.

The operating modes of peripherals differ from one another, and each must be controlled so as not to disturb the operation of the other peripherals connected to the CPU. To resolve these differences, computer systems include special hardware components between the CPU and the peripherals to supervise and synchronize all input and output transfers.

These components are called interface units because they interface between the processor bus and the peripheral device. An interface unit controls the data exchange between the external devices and the main memory, or between external devices and processor registers. For example, the control of the transfer of data from an external device to the processor might first involve checking the status of the device; the status can be busy, ready or out of order.

This communication involves commands, status or data. An error-detection mechanism should be built in; it may involve checking for mechanical as well as data communication errors.

These errors should be reported to the processor. Examples of the kind of mechanical errors that can occur in devices are a paper jam in a printer, mechanical failure, electrical failure, etc. The data communication errors may be checked by using a parity bit.
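The parity check just mentioned can be sketched in a few lines. Even parity is assumed here (the text does not say which convention is used), and the function names are invented for the example.

```python
def parity_bit(word, even=True):
    """Compute the parity bit for a data word: with even parity, the bit
    is chosen so the total number of 1s (data plus parity) is even."""
    ones = bin(word).count("1")
    return ones % 2 if even else 1 - ones % 2

def check(word, bit, even=True):
    """Receiver side: recompute the parity and compare; a mismatch
    signals a single-bit data communication error."""
    return parity_bit(word, even) == bit

data = 0b1011001            # four 1s, so the even-parity bit is 0
p = parity_bit(data)
corrupted = data ^ 0b0000100   # one bit flipped in transit
print(check(data, p))          # → True: transmission intact
print(check(corrupted, p))     # → False: single-bit error detected
```

Note that a parity bit detects any odd number of flipped bits but cannot locate the error or detect an even number of flips.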


Instead, they are connected to an intermediate electronic interface called a device controller, which in turn is connected to the system bus. It comes in the form of an electronic circuit board that plugs directly into the system bus, and there is a cable from the controller to each device it controls.

The cables coming out of the controller are usually terminated at the back panel of the main computer box in the form of connectors known as ports. Note the data lines in the diagram, which serve the purpose of data transfer. It is the task of the device driver to convert logical requests from the user into specific commands directed to the device itself. For example, a user request to write a record to a floppy disk would be realised within the device driver as a series of actions, such as checking for the presence of a disk in the drive, locating the file via the disk directory, positioning the heads, etc.

In UNIX, device drivers are usually linked into the object code of the kernel (the core of the operating system). This means that when a new device is to be used, one that was not included in the original construction of the operating system, the UNIX kernel has to be re-linked with the new device driver object code. This technique has the advantages of run-time efficiency and simplicity, but the disadvantage is that the addition of a new device requires regeneration of the kernel. In MS-DOS, by contrast, device drivers are loadable; this technique has the advantage that it makes the addition of a new driver much simpler, so that it can be done by relatively unskilled users.

The additional merit is that only those drivers which are actually required need be loaded into the main memory. Such drivers are specified in the file CONFIG.SYS, which must reside in the root directory. In the Windows system, device drivers are implemented as dynamic link libraries (DLLs). This technique has the advantage that DLLs contain shareable code, which means that only one copy of the code needs to be loaded into memory.

Secondly, a driver for a new device can be implemented by a software or hardware vendor without the need to modify or affect the Windows code, and lastly a range of optional drivers can be made available and configured for particular devices.

In the Windows system, the idea of Plug and Play device installation is used when adding a new device such as a CD drive.

The objective is to make this process largely automatic: the device is attached and the driver software loaded; thereafter, the installation proceeds automatically and the settings are chosen to suit the host computer configuration. Check Your Progress 1.

COM1 is an MS-DOS port name, not a UNIX one. The buffering is done by the data register. A device controller is shareable among devices. The devices are normally not connected directly to the system bus; they connect through a device controller. What is a device driver? Differentiate between a device controller and a device driver. Binary information received from an external device is usually stored in memory for later processing.

Information transferred from the central computer to an external device originates in the memory unit. Transferring data directly between memory and an external device, without routing each word through the processor, is known as direct memory access (DMA). The input or output operation in such cases may involve several kinds of commands. Control commands are device specific and are used to provide specific instructions to the device. A status command checks the status of a device, such as whether it is ready or in an error condition. A read command is used for the input of data from an input device. There are two methods for doing so.

The result is that the performance of the processor goes down tremendously. What is the solution? Interrupt-driven I/O: the device interrupts the CPU when it is ready, and after taking the required action with the data, the CPU can go back to the program it was executing before the interrupt. Figure 8 shows such a sequence. The processor tests for interrupts and sends an acknowledgement signal to the device that issued the interrupt.

The minimum information required to be stored for the task currently being executed, before the CPU starts executing the interrupt routine, is the processor status word and the program counter. The processor now loads the PC with the entry location of the interrupt-handling program that will respond to this interrupting condition. Once the PC has been loaded, the processor proceeds to execute the next instruction cycle, which begins with an instruction fetch. Because the instruction fetch is determined by the contents of the PC, the result is that control is transferred to the interrupt-handler program.

The execution results in the following operations. In addition, the contents of the processor registers used by the called interrupt-servicing routine also need to be saved on the stack, because these registers may be modified by the interrupt handler. Figure 9(a) shows a simple example: here a user program is interrupted after the instruction at location N.

The interrupt handler next processes the interrupt. When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers, as shown in Figure 9(b). As a result, the next instruction to be executed will be from the previously interrupted program. Thus, interrupt handling involves interruption of the currently executing program, execution of the interrupt-servicing program, and restart of the interrupted program from the point of interruption.
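The save-run-restore cycle described above can be sketched as follows. This is a deliberately simplified model: the class name, the single register `R0`, and the Python-list stack are all invented for the illustration.

```python
class CPU:
    """Minimal sketch of interrupt handling: save the PC and registers on
    the stack, run the handler, then restore them and resume."""

    def __init__(self):
        self.pc = 0
        self.registers = {"R0": 0}
        self.stack = []

    def interrupt(self, handler_entry, handler):
        # Save the minimum state: program counter and processor registers.
        self.stack.append((self.pc, dict(self.registers)))
        self.pc = handler_entry      # load the PC with the handler's entry location
        handler(self)                # the handler may freely modify the registers
        # Restore the saved state and resume the interrupted program.
        self.pc, self.registers = self.stack.pop()

cpu = CPU()
cpu.pc = 100                 # user program interrupted after the instruction here
cpu.registers["R0"] = 7

def handler(c):
    c.registers["R0"] = 999  # the handler clobbers R0 while it runs

cpu.interrupt(handler_entry=5000, handler=handler)
print(cpu.pc, cpu.registers["R0"])   # → 100 7: state restored, program resumes
```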

Design issues: if multiple interrupts have occurred, how does the processor decide which one to process first? Multiple interrupt lines: the simplest solution to the problem above is to provide multiple interrupt lines, which results in immediate recognition of the interrupting device.

Priorities can be assigned to various interrupts and the interrupt with the highest priority should be selected for service in case a multiple interrupt occurs.

But providing multiple interrupt lines is an impractical approach, because only a few lines of the system bus can be devoted to interrupts. Software poll: the processor polls each interface in turn to find which one has raised the interrupt; once the correct interface is identified, the processor branches to a device-service routine specific to that device. The disadvantage of the software poll is that it is time consuming. Daisy chain: this scheme provides a hardware poll. With this technique, an interrupt-acknowledge line is chained through the various interrupting devices. When the processor senses an interrupt, it sends out an interrupt acknowledgement.

The first device that has made an interrupt request senses the signal and responds by placing a word, which is normally the address of its interrupt-servicing program or a unique identifier, on the data lines.

This word is also referred to as interrupt vector. This address or identifier in turn is used for selecting an appropriate interruptservicing program. The daisy chaining has an in-built priority scheme, which is determined by the sequence of devices on interrupt acknowledge line.
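The daisy chain's built-in priority can be sketched as a walk down an ordered list. The device names and vector values are illustrative; the key point carried over from the text is that the first requesting device in chain order absorbs the acknowledge and supplies its vector.

```python
def daisy_chain_ack(devices):
    """Hardware-poll sketch: the interrupt-acknowledge signal ripples down
    the chain; the first requesting device absorbs it and places its
    interrupt vector on the data lines.  Priority = position in the chain."""
    for device in devices:                 # order on the line fixes priority
        if device["requesting"]:
            device["requesting"] = False   # device absorbs the acknowledge
            return device["vector"]        # vector: handler address or identifier
    return None                            # spurious interrupt: nobody requested

devices = [
    {"name": "disk",     "vector": 0x20, "requesting": False},
    {"name": "printer",  "vector": 0x24, "requesting": True},
    {"name": "keyboard", "vector": 0x28, "requesting": True},
]
print(hex(daisy_chain_ack(devices)))   # → 0x24: printer outranks keyboard
```

A second acknowledge would now return the keyboard's vector, since the printer's request has been absorbed.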

Bus arbitration: in this scheme, since only one of the interfaces can control the bus, only one request can be made at a time. An interrupt vector normally contains the address of the interrupt-servicing program. An example of interrupt vectors can be found in a personal computer, where there are several IRQs (interrupt requests), each for a specific type of interrupt. The drawbacks of programmed and interrupt-driven I/O can be overcome with a more efficient technique known as DMA, in which an interface takes over control of the bus from the processor.

DMA involves an additional interface on the system bus. A technique called cycle stealing allows the DMA interface to transfer one data word at a time, after which it must return control of the bus to the processor. The starting location in memory where the information will be read or written is communicated on the data lines and is stored by the DMA interface in its address register. The number of words to be read or written is also communicated on the data lines and is stored in the data-count register.
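The register set-up and word-at-a-time transfer just described can be sketched as follows. The function name and the dictionary model of memory are invented for the illustration; only the address register, data count, per-word transfer and completion interrupt come from the text.

```python
def dma_transfer(memory, device_data, start_address):
    """DMA block-transfer sketch: the interface is given a start address
    and a word count, then moves the block one word at a time (cycle
    stealing), interrupting the processor only when the block is done."""
    address_register = start_address       # where in memory to write next
    data_count = len(device_data)          # words remaining to transfer
    for word in device_data:
        memory[address_register] = word    # one stolen bus cycle per word
        address_register += 1
        data_count -= 1
    return "interrupt: transfer complete"  # processor is involved only now

memory = {}
status = dma_transfer(memory, [10, 20, 30], start_address=0x100)
print(status, memory)
```

Between the initial set-up and the final interrupt, the processor executes its own program, yielding the bus only for the stolen cycles.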

The DMA interface transfers the entire block of data, one word at a time, directly to or from memory, without going through the processor. When the transfer is complete, the DMA interface sends an interrupt signal to the processor. Thus, in DMA the processor's involvement is restricted to the beginning and the end of the transfer, as shown in the figure above. But the question is: when should the DMA take control of the bus?

For this we will recall the phenomenon of execution of an instruction by the processor. Figure 11 below shows the five cycles for an instruction execution. The Figure also shows the five points where a DMA request can be responded to and a point where the interrupt request can be responded to.

Please note that an interrupt request is acknowledged only at one point of an instruction cycle, and that is at the interrupt cycle. The DMA mechanism can be configured in a variety of ways. Some possibilities are shown in Figure 12(a), in which all interfaces share the same system bus. The configuration of Figure 12(b) offers advantages over the one shown above.

The configuration shown in Figure 12(c) is quite flexible and can be extended very easily. Is this technique useful in multiprogramming operating systems? Give a reason. What are the techniques for identifying the device that has caused the interrupt? What is DMA? The channel types can be summarised as follows: a selector channel controls multiple high-speed devices and, at any one time, is dedicated to the transfer of data with one of those devices. If the devices are slow, a byte multiplexer is used.

Let us explain this with an example. Suppose we have three slow devices that each need to send individual bytes; the byte multiplexer takes one byte from each device in turn. Devices that transfer blocks of data at high speed are instead served by a block multiplexer. An interface can be characterised into two main categories: parallel and serial. In a parallel interface, multiple bits can be transferred simultaneously.
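The byte-multiplexing idea can be illustrated with a short sketch that interleaves one byte from each slow device per round. The function name and the device data are made up for illustration.

```python
# Illustrative sketch (not from the text): a byte multiplexer channel
# services several slow devices by taking one byte from each in turn.
from itertools import zip_longest

def byte_multiplex(*device_streams):
    """Interleave one byte at a time from each device's byte stream."""
    out = []
    for round_bytes in zip_longest(*device_streams):
        # Devices that have run out of bytes yield None and are skipped.
        out.extend(b for b in round_bytes if b is not None)
    return out

# Three slow devices, each with its own bytes to send:
a, b, c = list(b"AAA"), list(b"BB"), list(b"CCCC")
print(bytes(byte_multiplex(a, b, c)))  # b'ABCABCACC'
```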

The parallel interface is normally used for high-speed peripherals such as tapes and disks. The dialogue that takes place across the interface includes the exchange of control information and data. In a serial interface only one line is used to transmit data, so only one bit is transferred at a time. Serial interfaces are normally used for serial printers and terminals. With a new generation of high-speed serial interfaces, parallel interfaces are becoming less common. For example, keyboards, printers and external modems are typically point-to-point links.

A multipoint external interface is used to support external mass storage devices such as disk and tape drives, and multimedia devices such as CD-ROM, video and audio devices. Two important examples of external interfaces are FireWire and InfiniBand.

What is the need for external communication interfaces? These techniques are useful for increasing the efficiency of the input/output transfer process. The concepts of device drivers for all types of operating systems, and of device controllers, are also discussed in this unit.


You can always refer to further reading for detailed design. The unit also covered the difference between a device driver and a controller, and the techniques of identifying the device that caused an interrupt: having a separate line for each device, and thus direct recognition; a software-driven roll call (polling) that asks each device whether it has made an interrupt request; and a hardware-driven, pass-the-buck signal (daisy chaining) that moves through the devices connected serially, where a device that receives the signal on its turn and has interrupted supplies its address.

The external interfaces are the standard interfaces that are used to connect third party or other external devices. The standardization in this area is a must. In this unit we will discuss the secondary storage devices such as magnetic tapes, magnetic disks and optical disks, also known as backing storage devices. The main purpose of such a device is that it provides a means of retaining information on a permanent basis.

The main discussion covers the characteristics of hard drives: formatting, drive cache, interfaces, etc. A detailed discussion of storage devices is presented in this Unit. Storage technology has progressed from very small storage devices to huge, gigabyte-scale memories. Let us also discuss some of the technological achievements that made such progress possible. Storage is the collection of places where long-term information is kept.

At the end of the unit you will be able to: As discussed in Block 2 Unit 1, there are several limitations of primary memory, such as limited capacity (it is not sufficient to store a very large volume of data) and volatility (when the power is turned off, the stored data is lost).

Thus, the secondary storage system must offer large storage capacities, low cost per bit and medium access times. Magnetic media have been used for such purposes for a long time.

Current magnetic data storage devices take the form of floppy disks and hard disks and are used as secondary storage devices. But audio and video media, either in compressed form or uncompressed form, require higher storage capacity than the other media forms and the storage cost for such media is significantly higher.

Optical storage devices offer a higher storage density at a lower cost. The CD-ROM can be used as an optical storage device. This technology has been the main catalyst for the development of multimedia in computing because it is used in multimedia external devices such as video recorders and digital recorders (Digital Audio Tape), which can be used with multimedia systems. Removable disks and tape cartridges are other forms of secondary storage devices used for back-up purposes, having higher storage density and higher transfer rates.

Disks are normally mounted on a disk drive that consists of an arm and a shaft, along with the electronic circuitry for reading and writing data. The disk rotates along with the shaft.

A non-removable disk is permanently mounted on the disk drive. One of the most important examples of a non-removable disk is the hard disk of the PC. The disk is a platter coated with magnetic particles. Early drives were large. Later on, smaller rigid disk drives were developed with fixed and removable packs. Each pack held about 30MB of data, and the drive became known as the Winchester drive.

Most Winchester drives have the following common features. The disk is divided into concentric rings called tracks; each track is subdivided into a number of sectors, each sector holding a specific number of data elements called bytes or characters. The smallest unit that can be written to or read from the disk is a sector.

Typical sector capacities are , , , and bytes. Bad Blocks: The drive maintains an internal table listing the sectors or tracks that cannot be read or written because of surface imperfections. This table, called the bad block table, is created when the disk surface is initially scanned during a low-level format.

Sector Interleave: This refers to the numbering of the sectors located on a track. A one-to-one interleave has sectors numbered sequentially: 0, 1, 2, 3, 4, etc. The disk drive rotates at a fixed speed (in RPM), which means that there is a fixed time interval between sectors. A slow computer can issue a command to read sector 0, storing it in an internal buffer.

While it is doing this, the drive makes available sector 1 but the computer is still busy storing sector 0. Thus the computer will now have to wait one full revolution till sector 1 becomes available again.


Renumbering the sectors as 0, 8, 1, 9, 2, 10, 3, 11, etc. alternates the sectors, giving the computer slightly more time to store sectors internally than before. Drive Speed: The amount of information that can be transferred to or from the disk in a second is termed the disk drive speed, or data transfer rate.
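The sector renumbering described under Sector Interleave can be computed with a short sketch. The function below is illustrative (not from the text); with an interleave factor of 2 on a 16-sector track it reproduces the 0, 8, 1, 9, 2, 10, 3, 11 ordering.

```python
# Hedged sketch: computing an interleaved sector numbering for a track.
# Logical sector numbers are placed every `factor` physical slots,
# skipping slots that are already occupied.

def interleave_order(num_sectors, factor):
    layout = [None] * num_sectors
    pos = 0
    for logical in range(num_sectors):
        while layout[pos % num_sectors] is not None:
            pos += 1                      # skip slots already assigned
        layout[pos % num_sectors] = logical
        pos += factor
    return layout

print(interleave_order(16, 1)[:8])  # [0, 1, 2, 3, 4, 5, 6, 7] (1:1 interleave)
print(interleave_order(16, 2)[:8])  # [0, 8, 1, 9, 2, 10, 3, 11]
```

The second line matches the renumbering quoted in the text: a slow computer gets one extra sector time between consecutive logical sectors.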

The speed of the disk drive depends on two aspects: bandwidth and latency. Bandwidth can be measured in bytes per second. The sustained bandwidth is the average data rate during a large transfer. The effective bandwidth is the overall data rate provided by the drive. The disk drive bandwidth ranges from less than 0. Access latency: A disk access simply moves the arm to the selected cylinder and waits for the rotational latency, which may take less than 36ms. An average latency of a disk system is equal to half the time taken by the disk to rotate once.

Hence, the average latency of a disk system whose rotation speed is RPM will be 0. Rotation Speed: This refers to the speed of rotation of the disk. Access Time: The access time is the time between a request for a read or write operation and the moment the data are made available or written at the requested location. Normally it is measured for the read operation. The access time depends on the physical characteristics and the access mode used for the device. Disk access time has two major components: Seek Time: The seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
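The average-latency relation stated above (half the time of one rotation) can be checked with a small sketch; the 7200 RPM figure below is an illustrative assumption, not taken from the text.

```python
# Average rotational latency is half of one full rotation:
#   latency = (60 / RPM) / 2 seconds

def average_latency_ms(rpm):
    seconds_per_rotation = 60.0 / rpm
    return (seconds_per_rotation / 2) * 1000   # half a rotation, in ms

print(average_latency_ms(7200))  # ~4.17 ms for an assumed 7200 RPM drive
```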

Latency Time: The latency time is the additional time spent waiting for the disk to rotate the desired sector under the disk head. Because of the huge capacity of modern hard disks, instead of having only one operating system on a PC, partitions can be used to provide several separate areas within one disk, each treated as a separate storage device.

That is, a disk partition is a sub-division of the disk into one or more areas. Each partition can be used to hold a different operating system. The computer system boots from the active partition and software provided allows the user to select which partition is the active one. For example, we can run both Windows and Linux operating systems from the same storage of the PC.

A new magnetic disk is just a set of platters of magnetic recording material. Before a disk can store data, it must be divided into sectors that the disk controller can read and write; this is called low-level formatting. Low-level formatting fills the disk with a special data structure for each sector, consisting of a header, a data area, and a trailer. Low-level formatting places track and sector information, plus bad block tables and other timing information, on the disks.

Sector interleave can also be specified at this time. The operating system allocates disk space on demand by user programs. Generally, space is allocated in fixed-size units called allocation units or clusters, where a cluster is a simple multiple of the physical disk sector size, usually bytes.

The DOS operating system forms a cluster by combining two or more sectors, so that the smallest unit of data accessed from a disk becomes a cluster, not a sector. Normally, the size of a cluster can range from 2 to 64 sectors. Often formatting also means transferring the boot file for the operating system onto the hard disk.
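The effect of cluster-based allocation can be sketched as follows. The sector and cluster sizes are assumptions for illustration (512-byte sectors, 4 sectors per cluster); the text leaves the exact figures unspecified.

```python
# Illustrative sketch: how many clusters a file occupies when space is
# allocated in whole clusters rather than sectors.
import math

SECTOR_SIZE = 512                  # assumed sector size in bytes
SECTORS_PER_CLUSTER = 4            # assumed cluster factor
CLUSTER_SIZE = SECTOR_SIZE * SECTORS_PER_CLUSTER   # 2048 bytes

def clusters_needed(file_size_bytes):
    return math.ceil(file_size_bytes / CLUSTER_SIZE)

print(clusters_needed(1))      # 1 -> even a 1-byte file uses a whole cluster
print(clusters_needed(5000))   # 3 -> 5000 bytes span three 2048-byte clusters
```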

The File Allocation Table (FAT) contains information about the space used by each individual file, the unused disk space, and the space that is unusable due to defects in the disk. Since the FAT contains vital information, two copies of the FAT are stored on the disk, so that if one gets destroyed the other can be used. A FAT entry can contain any of the following: There is one entry in the FAT for each cluster in the file area. The cluster number 2 corresponds to the first cluster in the data space of the disk.

The FAT chain for a file ends with a special hexadecimal end-of-chain value. The FAT structure is shown in Figure 2 below.
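Following a FAT chain can be sketched as below. The table contents and the end-of-chain marker value are invented for illustration.

```python
# Hedged sketch of following a FAT chain: each entry holds the number of
# the next cluster of the file, and a special end-of-chain marker stops
# the walk.

END_OF_CHAIN = 0xFFF   # assumed end-of-chain marker

def fat_chain(fat, start_cluster):
    """Return the list of clusters occupied by a file."""
    chain = [start_cluster]
    while fat[chain[-1]] != END_OF_CHAIN:
        chain.append(fat[chain[-1]])
    return chain

# A toy FAT: the file starts at cluster 2, continues at 5, then 6, then ends.
fat = {2: 5, 5: 6, 6: END_OF_CHAIN}
print(fat_chain(fat, 2))   # [2, 5, 6]
```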

Limitation of FAT: That size suffices for any hard disk with less than a MB total capacity. For such a large volume, the cluster size is 32KB. A cluster entry in FAT32 uses a 32-bit number. Microsoft has reserved the top four bits of every cluster number in the FAT32 file system, which means there are only 28 bits for the cluster number, so the maximum cluster number possible is about 268 million (2^28). In the UNIX system, the corresponding information for each file is stored in an inode table on the disk.
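Assuming 32-bit FAT32 entries (as the name suggests), the arithmetic behind the "top four bits reserved" limitation can be sketched as:

```python
# FAT32 entries are 32 bits wide, but the top four bits are reserved,
# leaving 28 usable bits for the cluster number.

RESERVED_TOP_BITS = 4
USABLE_BITS = 32 - RESERVED_TOP_BITS        # 28 bits

max_cluster_number = (1 << USABLE_BITS) - 1
print(USABLE_BITS)          # 28
print(max_cluster_number)   # 268435455
```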

For each file, there is an inode entry in the table. Each entry is made up of 64 bytes and contains the relevant details for that file. In the latest disk technologies, a cache memory can be part of the disk drive itself. Such memory is sometimes called a hard disk cache or buffer.

These hard disk caches are most effective in multiprogramming machines or in disk file servers, but they are expensive and therefore small.

Almost all modern disk drives include a small amount of internal cache. The cycle time of the cache is about a tenth of the main memory cycle time, and its cost per byte about 10 times that of main memory.

The disk caching technique can be used to speed up the performance of the disk drive system. A set of cache buffers is allocated to hold a number of disk blocks that have been recently accessed. In effect, the cached blocks are in-memory copies of the disk blocks.

If the data in a cache buffer is modified, only the local copy is updated at that time. Processing of the data thus uses the cached copy, avoiding the need to access the disk itself frequently.

The main disadvantage of a system using disk caching is the risk of losing updated information in the event of machine failure, such as loss of power. For this reason, the system may periodically flush the cache buffers in order to minimize the amount of loss.
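The delayed write-back behaviour described above, including the periodic flush that limits loss on failure, can be sketched as follows; all class and method names are invented for illustration.

```python
# Illustrative sketch of disk caching with delayed write-back: modified
# blocks are updated only in the cache, and a flush writes them back to
# the "disk".

class DiskCache:
    def __init__(self, disk):
        self.disk = disk        # dict: block number -> data
        self.cache = {}         # cached copies of disk blocks
        self.dirty = set()      # blocks modified only in the cache

    def read(self, block):
        if block not in self.cache:            # cache miss: fetch from disk
            self.cache[block] = self.disk[block]
        return self.cache[block]

    def write(self, block, data):
        self.cache[block] = data               # update the local copy only
        self.dirty.add(block)

    def flush(self):
        """Write dirty blocks back, limiting loss on power failure."""
        for block in self.dirty:
            self.disk[block] = self.cache[block]
        self.dirty.clear()

disk = {0: "old"}
cache = DiskCache(disk)
cache.write(0, "new")
print(disk[0])      # 'old' -> the disk is not yet updated
cache.flush()
print(disk[0])      # 'new' -> updated at flush time
```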

The disk drive cache is essentially two-dimensional: all the bits are out in the open. In order that devices manufactured by independent vendors can be used with computers from different manufacturers, it is important that controllers follow some drive-interfacing standard.

Following are the commonly used drive interface standards. The common drive used today for workstations has capacities upwards of 40MB. The controller is embedded on the disk drive itself (the IDE, Integrated Drive Electronics, approach).

It is an interface between the disk controller and an adapter located on the motherboard. It has a good access time of 20ms and data transfer rates of about 1Mbps under ideal conditions. Drives are reasonably cheap. SCSI drives are the common choice for servers or high-end workstations, with drive capacities ranging from MB to 20GB and high rotation speeds (RPM).

The SCSI interface of a device contains all circuitry that the device needs to operate with the computer system. The SCSI bus is a bus designed for connecting devices to a computer in a uniform way.

These drives have fast access times and high data rates, but are expensive. Each device must be assigned a unique SCSI identification between 0 and 7 (or 0 and 15). SCSI-2, and likewise SCSI-3, enables the use of multiple cables to support 16- or even 32-bit data transfers in parallel. The rotation speed is RPM. Its features include 9. Modern EIDE interfaces enable much faster communication.

The speed increases are due to improvements in the protocol that describes how clock cycles are used to address devices and transfer data. The ATA33 enables transfers of up to 33 MBps, and the ATA66 up to 66 MBps. The seek time of a disk is 30ms. It rotates at the rate of 30 rotations per second. Each track has sectors. What is the access time of the disk? A disk drive with removable disks is called a removable drive.
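A hedged worked sketch for the exercise above: taking access time as seek time plus average rotational latency (half a rotation), the per-track sector count does not enter this part of the calculation.

```python
# Access time = seek time + average rotational latency,
# where average latency is half of one full rotation.

def access_time_ms(seek_ms, rotations_per_sec):
    latency_ms = (1.0 / rotations_per_sec) / 2 * 1000   # half a rotation
    return seek_ms + latency_ms

# Figures from the exercise: 30 ms seek, 30 rotations per second.
print(access_time_ms(seek_ms=30, rotations_per_sec=30))  # 30 + 16.67 ≈ 46.67 ms
```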

A removable disk can be replaced by another similar disk on the same or different computer, thus providing enormous data storage that is not limited by the size of the disk. The disk rotates at RPM.

Floppies can be accessed from both sides of the disk. Floppy diskette drives are attached to the motherboard via a ribbon cable. You can attach zero, one or two floppy drives, and how you connect each one determines whether a drive becomes A: or B:. A typical floppy drive and floppy are shown in Figure 4.

A floppy is about 0. The data are organized in the form of tracks and sectors. The tracks are numbered sequentially inwards, with the outermost being 0. The write-protect notch is used to protect the floppy against accidental deletion of recorded data. The data in a sector are stored as a series of bits. Once the required sector is found, the average data transfer rate in bytes per second can be computed by the formula: The advent of the compact disk digital audio system, a non-erasable optical disk, paved the way for the development of a new low-cost storage technology.
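The transfer-rate formula itself is missing above; a commonly used relation (an assumption here, not taken from the text) multiplies bytes per sector, sectors per track, and rotations per second. The floppy figures below are illustrative.

```python
# Assumed relation (not quoted from the text):
#   transfer rate = bytes per sector x sectors per track x rotations per second

def transfer_rate_bps(bytes_per_sector, sectors_per_track, rotations_per_sec):
    return bytes_per_sector * sectors_per_track * rotations_per_sec

# Illustrative figures for a 1.44MB floppy: 512-byte sectors,
# 18 sectors per track, 300 RPM (5 rotations per second).
print(transfer_rate_bps(512, 18, 5))  # 46080 bytes per second
```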

In optical storage devices the information is written using a laser beam. This technology has evolved out of the entertainment electronics market where cassette tapes and long playing records are being replaced by CDs.

The term CD used for audio records stands for Compact Disk. It can store around MB.

As the disk rotates, the laser beam traces out a continuous spiral. The focused beam creates a circular pit of around 0.
