1) RISC processor employs
c) delay branch strategy
d) none of these
2) ……… addressing mode facilitates access to an operand whose location is defined relative to the beginning of the data structure in which it appears.
3) The number of instructions which may possibly be executed by a ______ depends upon the size of ______ .
4)Stack pointer is affected by
A)Subroutine call B)Return C)Conditional branch
a)A and B only
b)A and C only
c)B and C only
d) A, B and C only
5)DRAM stores information using
d) None of these
6) DMA differs from the interrupt mode in
a) The involvement of the processor in the operation
b) The method of accessing the I/O devices
c) The amount of data transfer possible
d) Both a and c
7) Buffer caches are used
a) To handle interrupts
b) To speed up main memory read operations
c) To increase the capacity of main memory
d) To improve disk performance
8) A memory mapping table is used to
a) Translate virtual addresses to physical addresses
b) Translate physical addresses to virtual addresses
c) Both (a) and (b)
d) None of these
9) The assembler stores the object code in ______ .
a) Main memory
d) Magnetic disk
10) RAID configurations of disks are used to provide
c) High data density
d) Both (a) and (b)
RISC (Reduced Instruction Set Computer) processors are used in portable devices, such as the Apple iPod and Nintendo DS, because of their power efficiency. RISC is a type of microprocessor architecture that uses a small, highly optimized set of instructions. Where CISC tries to minimize the number of instructions per program, RISC does the opposite: it reduces the cycles per instruction at the cost of more instructions per program. Pipelining is one of the distinctive features of RISC: the execution of several instructions is overlapped in pipeline fashion, which gives RISC a performance advantage over CISC.
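The cycle-count benefit of overlapping instructions can be sketched with the standard pipeline formula: an ideal k-stage pipeline finishes n instructions in k + n − 1 cycles, versus n × k without pipelining. This is a minimal illustration, assuming no stalls or hazards, and is not tied to any real CPU.

```python
# Toy cycle-count comparison for pipelined vs. non-pipelined execution.
def unpipelined_cycles(n_instructions, n_stages):
    # Each instruction passes through every stage before the next begins.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages):
    # The first instruction takes n_stages cycles; each later one
    # completes one cycle after its predecessor (assuming no stalls).
    return n_stages + n_instructions - 1

print(unpipelined_cycles(100, 5))  # 500 cycles
print(pipelined_cycles(100, 5))    # 104 cycles
```

With 100 instructions on a 5-stage pipeline, the overlap cuts the cycle count by almost a factor of five, which is where the RISC performance advantage comes from.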
Addressing mode is the way of addressing a memory location in an instruction. A microcontroller needs data, or operands, on which an operation is to be performed. The method of specifying the source of an operand and the destination of the result in an instruction is known as the addressing mode.
The different addressing modes are as follows:
– Immediate mode: the data immediately follows the instruction. This means that the data to be used is given in the instruction itself.
– Register mode: the operand is the contents of a register, specified by naming the register in the instruction. Processor registers are often used for intermediate storage during arithmetic operations, and this mode is used to access them.
– Direct address mode: the address part of the instruction is itself the effective address.
– Indexed addressing mode: an offset is added to the base index register to form the effective address of the memory location. This mode is used for reading lookup tables in program memory: the address of the exact location in the table is formed by adding the accumulator data to the base pointer.
– Relative address mode: the contents of the program counter are added to the address part of the instruction to form the effective address.
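The effective-address calculations for the direct, indexed and relative modes above can be sketched on a hypothetical mini-machine. The memory contents, register names and addresses below are all invented for illustration.

```python
# Hypothetical mini-machine illustrating effective-address calculation.
memory = {0x2000: 42, 0x2005: 7, 0x3010: 99}          # address -> value
registers = {"INDEX": 0x2000, "PC": 0x3000}           # made-up registers

def direct(address):
    # The address field of the instruction IS the effective address.
    return memory[address]

def indexed(base_reg, offset):
    # Effective address = base/index register + offset from instruction.
    return memory[registers[base_reg] + offset]

def relative(pc_offset):
    # Effective address = program counter + offset from instruction.
    return memory[registers["PC"] + pc_offset]

print(direct(0x2000))       # 42
print(indexed("INDEX", 5))  # 7   (0x2000 + 5 = 0x2005)
print(relative(0x10))       # 99  (0x3000 + 0x10 = 0x3010)
```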
The instruction register holds the single instruction currently being executed by the CPU: each instruction is first loaded into the instruction register and then executed.
Operations which typically affect the stack are:
subroutine calls and returns.
interrupt calls and returns.
code explicitly pushing and popping entries.
direct manipulation of the SP register.
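How a subroutine call and return move the stack pointer can be modeled in a few lines. This is a minimal sketch assuming a downward-growing stack with 4-byte words; the addresses are invented.

```python
# Toy model of the stack pointer during a subroutine call and return.
class Stack:
    def __init__(self, top=0x1000):
        self.sp = top      # stack pointer starts at the top of the stack
        self.mem = {}

    def push(self, value):
        self.sp -= 4       # SP moves down one word
        self.mem[self.sp] = value

    def pop(self):
        value = self.mem[self.sp]
        self.sp += 4       # SP moves back up
        return value

s = Stack()
s.push(0x0040)             # a subroutine CALL pushes the return address
assert s.sp == 0x1000 - 4
ret = s.pop()              # RETURN pops it back; SP is restored
assert ret == 0x0040 and s.sp == 0x1000
# A conditional branch only changes the PC and leaves SP untouched,
# which is why option (c) in question 4 does not affect the stack.
```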
DRAM is a type of random-access memory (RAM), which means that information stored in it can be accessed in random order. This makes it a fast type of storage and therefore ideal for use as the main memory of a computer system.
Information in DRAM is stored using an integrated circuit which contains transistors and capacitors. A single pair of one transistor and one capacitor stores one bit of information. The capacitor can hold a low charge (also called discharged) or a high charge (also called charged), which correspond to 0 and 1, respectively. The transistor acts as a switch to read the state of the capacitor or to change its state. Capacitors slowly leak their charge, and therefore, the capacitor charge needs to be refreshed regularly by restoring the charge. That is the reason this type of RAM is called dynamic.
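The charge-leak-refresh cycle described above can be modeled with a toy cell. The decay rate and read threshold below are invented purely to illustrate why the refresh is needed; real DRAM timing is set by the memory controller.

```python
# Toy DRAM cell: the stored charge decays and must be refreshed
# before it drops below the read threshold (numbers are invented).
class DramCell:
    THRESHOLD = 0.5

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0   # charged = 1, discharged = 0

    def leak(self):
        self.charge *= 0.8                  # charge slowly drains away

    def read(self):
        return 1 if self.charge > self.THRESHOLD else 0

    def refresh(self):
        self.write(self.read())             # read the bit, rewrite at full charge

cell = DramCell()
cell.write(1)
for _ in range(2):
    cell.leak()            # 1.0 -> 0.8 -> 0.64, still reads as 1
assert cell.read() == 1
cell.refresh()             # restore full charge before it decays further
assert cell.charge == 1.0
```

If the refresh were skipped long enough for the charge to fall below the threshold, the stored 1 would silently become a 0; that is what makes this RAM "dynamic".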
Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read from or write to memory without CPU involvement. The DMA module controls the exchange of data between main memory and the I/O device. Because a DMA device can transfer data directly to and from memory, rather than using the CPU as an intermediary, it relieves congestion on the bus. The CPU is involved only at the beginning and end of the transfer, and is interrupted only after the entire block has been transferred. A typical DMA channel transfers approximately 2 MB of data per second.
Direct Memory Access needs special hardware called a DMA controller (DMAC) that manages the data transfers and arbitrates access to the system bus. Cycle stealing may also be necessary to allow the CPU and DMA controller to share use of the memory bus.
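The difference in processor involvement (option a of question 6) can be sketched by counting interrupts. This is a deliberately simplified model, assuming word-by-word interrupt-driven I/O versus one end-of-block DMA interrupt.

```python
# Rough comparison of CPU involvement for a block transfer.
def interrupt_driven_transfer(n_words):
    # The CPU is interrupted once per word and moves each word itself.
    return n_words

def dma_transfer(n_words):
    # The CPU only sets up the transfer; the DMAC moves the block and
    # raises a single interrupt when the whole block is done.
    return 1

block = 4096
print(interrupt_driven_transfer(block))  # 4096 CPU interrupts
print(dma_transfer(block))               # 1 CPU interrupt
```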
The buffer cache is main memory (i.e. RAM) used as a cache to reduce the number of physical reads/writes from mass-storage devices (such as hard disks), since these operations are several orders of magnitude slower than the same operations on main memory. The buffer cache is just one of several types of cache used to keep the number of slow operations to a minimum.
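A read-through buffer cache can be sketched as below: blocks already held in RAM are served without touching the disk, so repeated reads cost only one physical access. The disk is simulated with a dictionary; the block numbers and contents are invented.

```python
# Sketch of a read-through buffer cache over a simulated disk.
class BufferCache:
    def __init__(self, disk):
        self.disk = disk           # dict: block number -> data (the "disk")
        self.cache = {}            # in-RAM copies of blocks
        self.disk_reads = 0        # count of slow physical accesses

    def read_block(self, blockno):
        if blockno not in self.cache:          # miss: go to disk once
            self.cache[blockno] = self.disk[blockno]
            self.disk_reads += 1
        return self.cache[blockno]             # hit: served from RAM

bc = BufferCache({0: b"boot", 7: b"data"})
for _ in range(100):
    bc.read_block(7)
assert bc.disk_reads == 1   # 100 logical reads, one physical disk access
```

This is why the buffer cache improves disk performance (question 7, option d) rather than increasing the capacity of main memory.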
Memory mapping is the (complex) process that associates an address value (usually a 32- or 64-bit number) with some physical location in the hardware. This location can be in RAM, in a cache at some level, or even on the hard disk. During program execution, data can move from one location to another, and may be duplicated. The table used for mapping virtual addresses to physical addresses in a paging environment is called the memory mapping table.
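A single-level page-table lookup is the simplest form of this translation. The sketch below assumes a 4 KiB page size and invented frame numbers; real hardware uses multi-level tables and a TLB, which are omitted here.

```python
# Minimal virtual-to-physical address translation via a page table.
PAGE_SIZE = 4096

# Memory mapping table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 9, 2: 3}

def translate(virtual_address):
    vpn = virtual_address // PAGE_SIZE     # which virtual page
    offset = virtual_address % PAGE_SIZE   # position inside the page
    frame = page_table[vpn]                # table lookup (the "mapping")
    return frame * PAGE_SIZE + offset      # physical address

assert translate(0x0000) == 5 * PAGE_SIZE              # page 0 -> frame 5
assert translate(PAGE_SIZE + 10) == 9 * PAGE_SIZE + 10 # page 1 -> frame 9
```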
After assembling the source program, the assembler stores the object code on magnetic disk, where it waits until it is needed for execution.
RAID originally stood for Redundant Array of Inexpensive Disks, but now usually refers to a Redundant Array of Independent Disks. RAID storage uses multiple disks to provide fault tolerance, improve overall performance, and increase storage capacity in a system. This is in contrast with older storage devices that used only a single disk drive to store data. RAID allows the same data to be stored redundantly (in multiple places) in a balanced way to improve overall performance. RAID disk drives are used frequently on servers but aren't generally necessary for personal computers.
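The fault-tolerance side of RAID can be sketched with a toy RAID 1 (mirroring) array: every block is written to two disks, so a single disk failure loses no data. The disks here are plain dictionaries; a real array works at the device level.

```python
# Toy RAID 1 (mirroring) array built from two simulated disks.
class Raid1:
    def __init__(self):
        self.disks = [{}, {}]

    def write(self, block, data):
        for disk in self.disks:        # redundant copy on each disk
            disk[block] = data

    def read(self, block):
        for disk in self.disks:        # any surviving disk can serve reads
            if block in disk:
                return disk[block]
        raise IOError("block lost on all disks")

r = Raid1()
r.write(3, b"payload")
r.disks[0].clear()                     # simulate one disk failing
assert r.read(3) == b"payload"         # data survives via the mirror
```

Other RAID levels (striping, parity) trade redundancy against performance and capacity in different ways, but the multiple-disk principle is the same.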