
INTRODUCTION:
A computer is a calculating device that can perform arithmetic operations at enormous speed. It is a data processor that can store, process, and retrieve data as and when desired. Computers are broadly classified into analog computers and digital computers. Analog computers work with analog signals and can be used for direct simulation of a physical system. Digital computers work with digital inputs and process the data to achieve the desired objective with greater precision; thus, to simulate a physical process, the input quantities are first converted into binary data. Present-day digital computers not only calculate but also analyze data, take decisions, and control external processes to optimize the performance of a system. Digital computers thus find applications in scientific calculations, space guidance, traffic control, and commercial and business data processing.

BASIC COMPONENTS OF A DIGITAL COMPUTER:







[Figure: block diagram of a digital computer. Output devices (CRT, printer, floppy disk, hard disk, magnetic tape) and input devices (keyboard, floppy disk, CD, hard disk) are connected to the CPU and memory over an information bus.]
A computer consists of four major operational divisions. They are,

1) The input peripherals
2) The output peripherals
3) The central processing unit
4) Memory unit

INPUT AND OUTPUT:

Input from the outside world originates in a number of media. Through the input unit, a complete set of instructions and data is fed into the computer system and into the memory unit, to be stored until needed. The flow of data into the computer and of processed data out of the computer is shown in the block schematic below.

[Figure: data-conversion schematic. The input unit converts data input into coded data in machine language; the memory holds the data in machine language; the output unit transforms processed machine-language data into readable form as data output.]
The processor, in which the data is actually processed, is not shown here. A variety of input and output units is used in computers. The most commonly used are floppy disk readers, keyboards of video terminals, magnetic ink character readers, optical scanners, etc.
Functions of input unit:
1) It accepts the list of instructions and data from the outside world.
2) It converts these instructions and data into computer acceptable form.
3) It supplies the converted instructions and data to the computer system for further processing.
Functions of output unit:
1) It accepts the results produced by the computer, which are in coded form and hence cannot be easily understood by us.
2) It converts these coded results into human readable form.
3) It supplies the converted result to the outside world.
E.g., a monitor.
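The input-unit and output-unit functions listed above can be sketched as a round trip in Python. The ASCII encoding here merely stands in for whatever internal machine code a real system uses; the example is illustrative only.

```python
# Round-trip sketch of the input-unit and output-unit functions.
data_in = "HELLO"                        # data from the outside world
machine_form = data_in.encode("ascii")   # input unit: convert to machine form
print(list(machine_form))                # -> [72, 69, 76, 76, 79]

data_out = machine_form.decode("ascii")  # output unit: back to readable form
print(data_out)                          # -> HELLO
```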


MEMORY UNITS:

The memory stores data and instructions received from the input unit. It consists of a large number of locations where data are stored. The memory subsystem of a computer also deals with the storage of sets of instructions, and the results of computation are stored in memory as well. The storage mechanism and the hardware and software needed to control and manage the information constitute the real memory system. The CPU reads instructions and operands from the memory and stores results back in the memory after executing each instruction. It is desirable that the memory subsystem provide the information sought by the CPU as and when required, without any wait time. Traditionally, however, there exists a speed mismatch between the CPU and memory; to reduce it, the CPU and memory cycle times should be made as close to equal as possible. In practice, a computer's memory system is a collection of different types of memory units, connected at different places and working with different speeds, capacities, and costs.

A memory unit stores binary information in groups called words, each word being stored in a memory register. A word in memory is an entity of "n" bits that moves in and out of storage as a unit. A memory word may represent an operand, an instruction, a group of alphanumeric characters, or any binary-coded information. The communication between a memory unit and its environment is achieved through two control signals and two external registers. The control signals specify the direction of transfer required, i.e., whether a word is to be stored into the memory or a previously stored word is to be read out. One of the external registers specifies the particular memory register selected out of the thousands available, and the other holds the particular bit configuration of the word in question. The control signals, the registers, and the communication paths are shown in the figure below. The two control signals applied to the memory unit are called read and write: a write signal specifies a transfer-in function, and a read signal specifies a transfer-out function. The information transferred to and from the registers in the memory unit is communicated through a common register called the memory buffer register.

[Figure: a memory unit of n words, m bits per word. A memory address register selects the word; a memory buffer register carries data in and out under the read and write control signals.]
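The organization described above can be sketched in Python. The class below is a toy model of a memory unit with a memory address register (MAR) and a memory buffer register (MBR); the sizes and names are illustrative, not taken from any real machine.

```python
# Toy model of the memory unit: MAR selects one of n word registers,
# MBR carries the word in (write signal) or out (read signal).
class MemoryUnit:
    def __init__(self, n_words=8, bits_per_word=16):
        self.words = [0] * n_words          # n words of m bits each
        self.mask = (1 << bits_per_word) - 1
        self.mar = 0                        # memory address register
        self.mbr = 0                        # memory buffer register

    def write(self):
        # Write control signal: transfer-in, MBR -> selected word.
        self.words[self.mar] = self.mbr & self.mask

    def read(self):
        # Read control signal: transfer-out, selected word -> MBR.
        self.mbr = self.words[self.mar]

mem = MemoryUnit()
mem.mar, mem.mbr = 3, 0b1010    # select word 3, place data in the buffer
mem.write()                     # store it
mem.mbr = 0                     # clear the buffer
mem.read()                      # read word 3 back
print(mem.mbr)                  # -> 10
```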




CENTRAL PROCESSING UNIT:

The CPU (central processing unit) is the brain of the computer. It interprets the program instructions and controls the whole system. It has two parts:

1) Arithmetic and logic unit (ALU)
2) Control unit

The ALU performs all the arithmetic operations, such as addition, subtraction, multiplication, and division, together with the logic operations specified by a program. The data and the instructions stored in the memory before processing are transferred to the ALU as and when required. Data flows between the memory unit and the ALU in both directions, several times, until the processing is complete. The control unit directs the other units in what to do and when to do it, and supervises the flow of information among the various units. It retrieves the instructions one by one from the program already stored in the memory. For each instruction, the control unit tells the ALU to perform the operation specified and ensures that the necessary data are supplied from the memory. To establish coordination among all the sections of the computer, the control unit delivers the necessary control signals.
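This fetch-and-execute cycle can be sketched in a few lines of Python. The three-address instruction format below is invented for the sketch; real instruction sets differ.

```python
# Minimal sketch of control unit + ALU cooperation.
def alu(op, a, b):
    # The ALU performs arithmetic and logic operations.
    ops = {"ADD": a + b, "SUB": a - b, "MUL": a * b, "AND": a & b}
    return ops[op]

def run(program, memory):
    # The control unit steps through the stored program, one
    # instruction at a time, supplying operands from memory.
    for op, addr1, addr2, dest in program:
        a, b = memory[addr1], memory[addr2]   # fetch operands
        memory[dest] = alu(op, a, b)          # execute, store result
    return memory

memory = {0: 6, 1: 7, 2: 0}
run([("MUL", 0, 1, 2)], memory)   # memory[2] = 6 * 7
print(memory[2])                  # -> 42
```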


INPUT AND OUTPUT DEVICES
Input and output devices allow the computer system to interact with the outside world by moving data into and out of the system. An input device is used to bring data into the system. Some input devices are:
· Keyboard
· Mouse
· Microphone
· Bar code reader
An output device is used to send data out of the system. Some output devices are:
· Monitor
· Printer
· Speaker
INPUT / OUTPUT AND STORAGE DEVICES

INPUT                  | OUTPUT                          | STORAGE
-----------------------+---------------------------------+----------------------
Keyboard               | Monitor                         | Floppy Disk
Mouse                  | Printers (all types)            | Diskette
Trackballs             | Audio Card                      | Hard Disk
Touchpads              | Plotters                        | Disk Cartridge
Pointing Sticks        | LCD Projection Panels           | CD-ROM
Joysticks              | Computer Output Microfilm (COM) | Optical Disk
Pen Input              | Facsimile (FAX)                 | Magnetic Tape
Touch Screen           | Speaker(s)                      | Cartridge Tape
Light Pen              |                                 | Reel Tape
Digitizer              |                                 | PC Card
Graphics Tablet        |                                 | *RAID
Scanner                |                                 | *Memory Button
Microphone             |                                 | *Smart Card
Electronic Whiteboard  |                                 | *Optical Memory Card
Video Cards            |                                 |
Audio Cards            |                                 |











Input/output devices are usually called I/O devices. They are directly connected to an electronic module inside the system unit called a device controller. For example, the speakers of a multimedia computer system are directly connected to a device controller called an audio card (such as a SoundBlaster), which in turn is connected to the rest of the system. Sometimes secondary memory devices like the hard disk are called I/O devices, because they move data in and out of main memory. What counts as an I/O device depends on context. To a user, an I/O device is something outside of the system box. To a programmer, everything outside of the processor and main memory looks like an I/O device. To an engineer working on the design of a processor, everything outside of the processor is an I/O device.
KEYBOARD:

The keyboard is an input device. It has letter and number keys, as well as function keys (computer-specific task keys) that allow you, the user, to issue instructions in an English-like language. It is the primary input device. A cursor keeps your place on the screen and lets you know where to begin typing. With the keyboard you can input commands, type data into documents, compose documents, draw pictures using certain keys, pull down menus, and respond to prompts issued by the computer. Almost all computers require you to use a keyboard unless, of course, the system is adapted for individuals with disabilities or for specified alternative input devices.
The keyboard contains special keys to manipulate the user interface. When a key is touched, an electrical impulse is sent through the device which is picked up by the operating system software, and sent through the computer to be processed.
The keyboard operates like a typical typewriter and uses the standard "QWERTY" layout. QWERTY describes the way the keyboard is set up for typing: if you look at the keyboard under the top number row, you will see that the top alphabet row begins with Q-W-E-R-T-Y.
Special Features: Special features of the keyboard include:
Numeric keypad: This portion of the keyboard allows you to use the keyboard like a calculator and to input numbers into application programs. It has a Num Lock key that, when pressed, activates that portion of the keyboard so that numbers can be entered. When Num Lock is off, the arrow markings on those keys take over, moving the cursor in different directions. The "NUM LOCK" key is a toggle key that switches back and forth between these two modes.
Caps Lock: The "CAPS LOCK" key works in the same manner as the "NUM LOCK" key. If the Caps Lock light is lit, the keyboard will type only in capital letters; if it is not lit, the keyboard will type only in small letters.
Function Keys: The function keys were used to initiate commands in help menus or database programs, especially before the development of computer pointing devices. They are still used extensively today to pull down menus or to be programmed for specific functions in application programs. The Ctrl and Shift keys also work with function keys to add more commands, called shortcuts: ways to perform operations like saving and deleting without going through elaborate menus and steps. Shortcuts speed up typing and input into the computer.
Escape Key: One of the most important keys is the escape key. It usually cancels the last command or takes you back to the previous step in a program.
Types: Keyboards come in many shapes and sizes. They can be large or small, almost like custom cars. They come in various colors and can be designed specifically for the user, especially in the case of the disabled.
QWERTY: The most popular is the standard QWERTY keyboard. The newer keyboards can have a trackball built into the keyboard. This allows the user the convenience of a built in pointing device. The trackball acts as the mouse and saves time and space in the work area.
ERGONOMIC: This keyboard is built so that the keyboard is divided into two parts. One half fits the right hand and the other half fits the left hand. This split keyboard arrangement is built to fit the natural positioning of the hand and to help with repetitive motion hand injury which occurs when a job is carried out over and over again, such as in keyboarding.
POINTING DEVICES:

A pointing device is an input interface (specifically, a human interface device) that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a computer. CAD systems and graphical user interfaces (GUIs) allow the user to control and provide data to the computer using physical gestures (point, click, and drag), for example by moving a hand-held mouse across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer (or cursor) and other visual changes. While the most common pointing device by far is the mouse, many more devices have been developed; "mouse", however, is commonly used as a metaphor for any device that moves the cursor.

· MOUSE: A mouse (plural mice or mouses) is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of a small case, held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as "wheels", which allow the user to perform various system-dependent operations, or extra buttons or features can add more control or dimensional input. The mouse's motion typically translates into the motion of a pointer on a display which allows for fine control of a Graphical user interface.


· MINI-MOUSE: A small egg-sized mouse for use with laptop computers; usually small enough for use on a free area of the laptop body itself, it is typically optical, includes a retractable cord and uses a USB port to save battery life.

· TOUCHPAD: A flat surface that can detect finger contact, this is the norm for modern laptop computers; at least one physical button normally comes with the touchpad, but the user can also generate a mouse click by tapping on the pad; advanced features include pressure sensitivity and special gestures such as scrolling by moving one's finger along an edge.

· TRACKBALL: A rollable ball mounted in a fixed base; essentially an upside-down mouse.


· GRAPHICS TABLET: A special tablet similar to a touchpad, but controlled with a pen or stylus that is held and used like a normal pen or pencil; the thumb usually controls the clicking via a two-way button on the top of the pen, or by tapping on the tablet's surface.

· JOYSTICK: A pivoting stick mounted on a base; the user can freely change the position of the stick, with more or less constant force.
Joystick elements:
1. Stick
2. Base
3. Trigger
4. Extra buttons
5. Autofire switch
6. Throttle
7. Hat Switch (POV Hat)
8. Suction Cup
· TOUCHSCREEN: Framed around the monitor and resembling a monitor shield, this device uses software calibration to match screen and cursor positions; many firms will integrate touchscreen equipment into existing displays and all-in-one devices (such as portable PCs) for a fee.

IMAGE SCANNER:

In computing, a scanner is a device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. Common examples found in offices are variations of the desktop (or flatbed) scanner where the document is placed on a glass window for scanning. Hand-held scanners, where the device is moved by hand, were briefly popular but are now less common due to the difficulty of obtaining a high-quality image. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical.
Modern scanners typically use a charge-coupled device (CCD) or a Contact Image Sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. A rotary scanner, used for high-speed document scanning, is another type of drum scanner, using a CCD array instead of a photomultiplier. Other types of scanners are planetary scanners, which take photographs of books and documents, and 3D scanners, for producing three-dimensional models of objects.
Another category of scanner is digital camera scanners, which are based on the concept of reprographic cameras. Due to increasing resolution and new features such as anti-shake, digital cameras have become an attractive alternative to regular scanners. While still having disadvantages compared to traditional scanners, digital cameras offer advantages in speed and portability.


MICROPHONE:

A microphone, sometimes referred to as a mike or mic, is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal.
Microphones are used in many applications such as telephones, tape recorders, hearing aids, motion picture production, live and recorded audio engineering, in radio and television broadcasting and in computers for recording voice, VoIP, and for non-acoustic purposes such as ultrasonic checking.


OPTICAL CHARACTER RECOGNITION:
Optical character recognition, usually abbreviated to OCR, is the mechanical or electronic translation of images of handwritten, typewritten or printed text (usually captured by a scanner) into machine-editable text.
OCR is a field of research in pattern recognition, artificial intelligence and machine vision. Though academic research in the field continues, the focus on OCR has shifted to implementation of proven techniques. Optical character recognition (using optical techniques such as mirrors and lenses) and digital character recognition (using scanners and computer algorithms) were originally considered separate fields. Because very few applications survive that use true optical techniques, the OCR term has now been broadened to include digital image processing as well.
Early systems required training (the provision of known samples of each character) to read a specific font. "Intelligent" systems with a high degree of recognition accuracy for most fonts are now common. Some systems are even capable of reproducing formatted output that closely approximates the original scanned page including images, columns and other non-textual components.
OPTICAL MARK READER:
The Optical Mark Reader is a device that "reads" pencil marks on NCS-compatible scan forms such as surveys or test answer forms. If that all seems overly technical, just think of it as the machine that checks multiple-choice computer forms. In this document the Optical Mark Reader will be referred to as the scanner or OMR. The computer test forms designed for the OMR are known as NCS-compatible scan forms. Tests and surveys completed on these forms are read in by the scanner, checked, and the results are saved to a file. This data file can be converted into an output file of several different formats, depending on which type of output you desire.
The OMR is a powerful tool that has many features. If you are using casstat (grading tests), the OMR will print the number of correct answers and the percentage of correct answers at the bottom of each test. It will also record statistical data about each question. This data is recorded in the output file created when the forms are scanned. You’ll find out more about the data file and output formats available later on in this document.

BARCODE READER:
A barcode reader (or barcode scanner) is an electronic device for reading printed barcodes. Like a flatbed scanner, it consists of a light source, a lens and a photo conductor translating optical impulses into electrical ones. Additionally, nearly all barcode readers contain decoder circuitry analyzing the barcode's image data provided by the photo conductor and sending the barcode's content to the scanner's output port.
VISUAL DISPLAY UNIT:


A visual display unit, often called simply a monitor, is a piece of electrical equipment which displays viewable images generated by a computer without producing a permanent record. A computer display device is usually either a cathode ray tube or some form of flat panel such as a TFT LCD. The monitor comprises the display device, circuitry to generate a picture from electronic signals sent by the computer, and an enclosure or case. Within the computer, either as an integral part or as a plugged-in interface, there is circuitry to convert internal data to a format compatible with a monitor.

The inch size is the diagonal size of the picture tube or LCD panel. With 4:3 CRTs the picture is squarer than 16:10 TFT and so has a larger area for the same diagonal, hence a 17" CRT generally gives about the same area of picture as a 19" TFT.
This method of size measurement dates from the early days of CRT television when round picture tubes were in common use, which only had one dimension that described display size. When rectangular tubes were used, the diagonal measurement of these was equivalent to the round tube's diameter, hence this was used (and of course it was the largest of the available numbers).
A better way to compare CRT and LCD displays is by viewable image size.
DIGITAL MONITORS:
Early digital monitors are sometimes known as TTLs because the voltages on the red, green, and blue inputs are compatible with TTL logic chips. Later digital monitors support LVDS or TMDS protocols.

Modern technology:
Analog RGB monitors: Most modern computer displays can show thousands or millions of different colors in the RGB colour space by varying red, green, and blue signals in continuously variable intensities.
Digital and analog combination: Many monitors have analog signal relay, but some more recent models (mostly LCD screens) support digital input signals. It is a common misconception that all computer monitors are digital. For several years, televisions, composite monitors, and computer displays have been significantly different. However, as TVs have become more versatile, the distinction has blurred.
COMPUTER PRINTER:
A computer printer, or more commonly a printer, produces a hardcopy (permanent human-readable text and/or graphics) of documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local computer peripherals, attached directly to a computer which serves as the document source. Some printers, commonly known as network printers, have built-in network interfaces (typically wireless or Ethernet) and can serve as a hardcopy device for any user on the network. Individual printers are often designed to support both local and network-connected users at the same time.
In addition, many modern printers can directly interface to electronic media such as memory sticks or memory cards, or to image capture devices such as digital cameras and scanners; some printers are combined with a scanner and/or fax machine in a single unit. Printers that include non-printing features are sometimes called multifunction printers. A printer which is combined with a scanner can function as a kind of photocopier if so designed. Printers are designed for low-volume, short-turnaround print jobs, requiring virtually no setup time to achieve a hard copy of a given document. However, printers are generally slow devices (30 pages per minute is considered fast, and many consumer printers are far slower than that), and the cost per page is relatively high. In contrast, the printing press, which serves much the same function, is designed and optimized for high-volume print jobs such as newspaper print runs: printing presses are capable of hundreds of pages per minute or more, and have an incremental cost per page which is a fraction of that of printers.
The printing press remains the machine of choice for high-volume, professional publishing. However, as printers have improved in quality and performance, many jobs which used to be done by professional print shops are now done by users on local printers; see desktop publishing. The world's first computer printer was a 19th-century mechanically driven apparatus invented by Charles Babbage for his Difference Engine.
Printing technology:
Printers are routinely classified by the underlying print technology they employ; numerous such technologies have been developed over the years. The choice of print engine has a substantial effect on what jobs a printer is suitable for, as different technologies offer different levels of image and text quality, print speed, cost, and noise; in addition, some technologies are inappropriate for certain types of physical media.
Another aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, is absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface. Checks should either be printed with liquid ink or on special "check paper with toner anchorage". For similar reasons, carbon film ribbons for IBM Selectric typewriters bore labels warning against using them to type negotiable instruments such as checks. The machine-readable lower portion of a check, however, must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.
PLOTTER:
A plotter is a vector graphics printing device that connects to a computer. Pen plotters print their output by moving a pen across the surface of a piece of paper. This means that plotters are restricted to line art, rather than the raster graphics of other printers. They can draw complex line art, including text, but do so very slowly because of the mechanical movement of the pens. (Pen plotters are incapable of creating a solid region of color, but can hatch an area by drawing a number of close, regular lines.) When computer memory was very expensive and processor power was very slow, this was often the fastest way to produce color high-resolution vector-based artwork, or very large drawings, efficiently.
Traditionally, printers were primarily for printing text. This made them fairly easy to control: simply sending the text to the printer was usually enough to generate a page of output. This is not the case for the line art on a plotter, where a number of printer control languages were created to send more detailed information like "draw a line from here to here".

LOUDSPEAKER:

A loudspeaker, speaker, or speaker system is an electromechanical transducer that converts an electrical signal to sound. The term loudspeaker can refer to individual devices (otherwise known as drivers), or to complete systems consisting of an enclosure incorporating one or more drivers and additional electronic components. Loudspeakers, as with other electro-acoustic transducers, are the most variable elements in an audio system and are responsible for the greatest degree of audible differences between sound systems.
To reproduce a wide range of frequencies, most loudspeaker systems require more than one driver, particularly for high sound pressure level or high fidelity applications. Individual drivers are used to cover different frequency ranges. The drivers are named subwoofers, for very low frequencies; woofers, for low frequencies; mid-range speakers, for middle frequencies; tweeters, for high frequencies; and, also, the so-called super-tweeters, which are basically tweeters optimized for higher frequencies than a normal tweeter.












COMPUTER MEMORY

INTRODUCTION:
Computer memory is a mechanism that stores data for use by a computer. In a computer, all data consist of numbers. A computer stores a number into a specific location in memory and later fetches the value. Most memories represent data with the binary number system. In the binary number system, numbers are represented by sequences of the two binary digits 0 and 1, which are called bits (see Number Systems). In a computer, the two possible values of a bit correspond to the on and off states of the computer's electronic circuitry.
In memory, bits are grouped together so they can represent larger values. A group of eight bits is called a byte and can represent decimal numbers ranging from 0 to 255. The particular sequence of bits in the byte encodes a unit of information, such as a keyboard character. One byte typically represents a single character such as a number, letter, or symbol. Most computers operate by manipulating groups of 2, 4, or 8 bytes called words.
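Python's built-in ord, chr, and format functions make it easy to see the bit pattern inside a byte, using the letter 'A' (ASCII code 65) as an example:

```python
# One byte's eight bits encode one character.
code = ord("A")             # character -> number (65 in ASCII)
bits = format(code, "08b")  # the same number as eight binary digits
print(code, bits)           # -> 65 01000001
print(chr(code))            # number -> character round-trips: -> A
```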
Memory capacity is usually quantified in terms of kilobytes, megabytes, and gigabytes. Although the prefixes kilo-, mega-, and giga-, are taken from the metric system, they have a slightly different meaning when applied to computer memories. In the metric system, kilo- means 1 thousand; mega-, 1 million; and giga-, 1 billion. When applied to computer memory, however, the prefixes are measured as powers of two, with kilo- meaning 2 raised to the 10th power, or 1,024; mega- meaning 2 raised to the 20th power, or 1,048,576; and giga- meaning 2 raised to the 30th power, or 1,073,741,824. Thus, a kilobyte is 1,024 bytes and a megabyte is 1,048,576 bytes. It is easier to remember that a kilobyte is approximately 1,000 bytes, a megabyte is approximately 1 million bytes, and a gigabyte is approximately 1 billion bytes.
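These powers of two are easy to verify, along with the small gap between the binary and metric meanings of the prefixes:

```python
# The binary prefixes, computed as powers of two.
kilobyte = 2 ** 10   # 1,024 bytes
megabyte = 2 ** 20   # 1,048,576 bytes
gigabyte = 2 ** 30   # 1,073,741,824 bytes
print(kilobyte, megabyte, gigabyte)

# A binary megabyte exceeds a metric "million bytes" by about 4.9 %.
print(round((megabyte / 10**6 - 1) * 100, 1))   # -> 4.9
```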
HOW MEMORY WORKS:
Computer memory may be divided into two broad categories known as internal memory and external memory. Internal memory operates at the highest speed and can be accessed directly by the central processing unit (CPU)—the main electronic circuitry within a computer that processes information. Internal memory is contained on computer chips and uses electronic circuits to store information (see Microprocessor). External memory consists of storage on peripheral devices that are slower than internal memories but offer lower cost and the ability to hold data after the computer’s power has been turned off. External memory uses inexpensive mass-storage devices such as magnetic hard drives.
Internal memory is also known as random access memory (RAM) or read-only memory (ROM). Information stored in RAM can be accessed in any order, and may be erased or written over. Information stored in ROM may also be random-access, in that it may be accessed in any order, but the information recorded on ROM is usually permanent and cannot be erased or written over.


PRIMARY MEMORIES:

Internal RAM:
Random access memory is also called main memory because it is the primary memory that the CPU uses when processing information. The electronic circuits used to construct this main internal RAM can be classified as dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), or static RAM (SRAM). DRAM, SDRAM, and SRAM all involve different ways of using transistors and capacitors to store data. In DRAM or SDRAM, the circuit for each bit consists of a transistor, which acts as a switch, and a capacitor, a device that can store a charge. To store the binary value 1 in a bit, DRAM places an electric charge on the capacitor. To store the binary value 0, DRAM removes all electric charge from the capacitor. The transistor is used to switch the charge onto the capacitor. When it is turned on, the transistor acts like a closed switch that allows electric current to flow into the capacitor and build up a charge. The transistor is then turned off, meaning that it acts like an open switch, leaving the charge on the capacitor. To store a 0, the charge is drained from the capacitor while the transistor is on, and then the transistor is turned off, leaving the capacitor uncharged. To read a value in a DRAM bit location, a detector circuit determines whether a charge is present or absent on the relevant capacitor.
DRAM is called dynamic because it is continually refreshed. The memory chips themselves cannot hold values over long periods of time. Because capacitors are imperfect, the charge slowly leaks out of them, which results in loss of the stored data. Thus, a DRAM memory system contains additional circuitry that periodically reads and rewrites each data value. This replaces the charge on the capacitors, a process known as refreshing memory. The major difference between SDRAM and DRAM arises from the way in which refresh circuitry is created. DRAM contains separate, independent circuitry to refresh memory. The refresh circuitry in SDRAM is synchronized to use the same hardware clock as the CPU. The hardware clock sends a constant stream of pulses through the CPU’s circuitry. Synchronizing the refresh circuitry with the hardware clock results in less duplication of electronics and better access coordination between the CPU and the refresh circuits.
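The refresh idea can be illustrated with a toy simulation. The leak rate, refresh interval, and threshold below are arbitrary numbers chosen only to show the effect; they do not model real DRAM timing.

```python
# Toy model of DRAM refresh: a "capacitor" leaks charge every tick;
# the refresh circuit periodically reads the bit and rewrites it at
# full charge, restoring the stored value before it is lost.
def simulate(ticks, refresh_every=None, leak=0.2, threshold=0.5):
    charge = 1.0                       # store a binary 1 (full charge)
    for t in range(1, ticks + 1):
        charge -= leak * charge        # charge leaks away each tick
        if refresh_every and t % refresh_every == 0:
            bit = 1 if charge > threshold else 0   # read the bit...
            charge = float(bit)                    # ...and rewrite it
    return 1 if charge > threshold else 0

print(simulate(20))                   # no refresh: the stored 1 decays -> 0
print(simulate(20, refresh_every=2))  # with refresh: the 1 survives -> 1
```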
In SRAM, the circuit for a bit consists of multiple transistors that hold the stored value without the need for refresh. The chief advantage of SRAM lies in its speed. A computer can access data in SRAM more quickly than it can access data in DRAM or SDRAM. However, the SRAM circuitry draws more power and generates more heat than DRAM or SDRAM. The circuitry for a SRAM bit is also larger, which means that a SRAM memory chip holds fewer bits than a DRAM chip of the same size. Therefore, SRAM is used when access speed is more important than large memory capacity or low power consumption.
The time it takes the CPU to transfer data to or from memory is particularly important because it determines the overall performance of the computer. The time required to read or write one bit is known as the memory access time. Current DRAM and SDRAM access times are between 30 and 80 nanoseconds (billionths of a second). SRAM access times are typically four times faster than DRAM.
The internal RAM on a computer is divided into locations, each of which has a unique numerical address associated with it. In some computers a memory address refers directly to a single byte in memory, while in others, an address specifies a group of four bytes called a word. Computers also exist in which a word consists of two or eight bytes, or in which a byte consists of six or ten bits.
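As a quick illustration of byte versus word addressing, the arithmetic below assumes a machine with four-byte words; the function name is ours, not standard terminology.

```python
# Assuming 4-byte words: a byte address maps to the word that contains
# it plus an offset within that word.
BYTES_PER_WORD = 4

def byte_to_word(byte_address):
    return byte_address // BYTES_PER_WORD, byte_address % BYTES_PER_WORD

print(byte_to_word(10))  # (2, 2): byte 10 lives in word 2 at offset 2
```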
When a computer performs an arithmetic operation, such as addition or multiplication, the numbers used in the operation can be found in memory. The instruction code that tells the computer which operation to perform also specifies which memory address or addresses to access. An address is sent from the CPU to the main memory (RAM) over a set of wires called an address bus. Control circuits in the memory use the address to select the bits at the specified location in RAM and send a copy of the data back to the CPU over another set of wires called a data bus. Inside the CPU, the data passes through circuits called the data path to the circuits that perform the arithmetic operation. The exact details depend on the model of the CPU. For example, some CPUs use an intermediate step in which the data is first loaded into a high-speed memory device within the CPU called a register.
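The fetch path described above can be caricatured in Python. Everything here is an assumption for illustration: memory is a dictionary keyed by address, and `fetch` stands in for the address-bus request and data-bus reply.

```python
# Toy sketch of the fetch path: the CPU puts an address on the address
# bus, memory returns a copy of the value on the data bus, and the value
# is loaded into a register before the arithmetic circuits add.
ram = {100: 7, 104: 35}           # address -> stored value

def fetch(address):
    return ram[address]           # the "data bus" returns a copy to the CPU

reg_a = fetch(100)                # load operands into registers first...
reg_b = fetch(104)
print(reg_a + reg_b)              # ...then the ALU adds: 42
```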

Internal ROM
Read-only memory is the other type of internal memory. ROM memory is used to store items that the computer needs to execute when it is first turned on. For example, the ROM memory on a PC contains a basic set of instructions, called the basic input-output system (BIOS). The PC uses BIOS to start up the operating system. BIOS is stored on computer chips in a way that causes the information to remain even when power is turned off.
Information in ROM is usually permanent and cannot be erased or written over easily. A ROM is permanent if the information cannot be changed—once the ROM has been created, information can be retrieved but not changed. Newer technologies allow ROMs to be semi-permanent—that is, the information can be changed, but it takes several seconds to make the change. For example, a FLASH memory acts like a ROM because values remain stored in memory, but the values can be changed.
In its strictest sense, ROM refers only to mask ROM (the oldest type of solid state ROM), which is fabricated with the desired data permanently stored in it, and thus can never be modified. However, more modern types such as EPROM and flash EEPROM can be erased and re-programmed multiple times; they are still described as "read-only memory" because the reprogramming process is generally infrequent, comparatively slow, and often does not permit random access writes to individual memory locations, which are possible when reading a ROM. Despite the simplicity of mask ROM, economies of scale and field-programmability often make reprogrammable technologies more flexible and inexpensive, so that mask ROM is rarely used in new products as of 2007.
· PROGRAMMABLE READ-ONLY MEMORY: A programmable read-only memory (PROM) or field programmable read-only memory (FPROM) is a form of digital memory where the setting of each bit is locked by a fuse or antifuse. Such PROMs are used to store programs permanently. The key difference from a strict ROM is that the programming is applied after the device is constructed. They are frequently seen in video game consoles, or such products as electronic dictionaries, where PROMs for different languages can be substituted.
· EPROM: An EPROM, or Erasable Programmable Read-Only Memory, is a type of computer memory chip that retains its data when its power supply is switched off. In other words, it is non-volatile. It is an array of floating-gate transistors individually programmed by an electronic device that supplies higher voltages than those normally used in electronic circuits. Once programmed, an EPROM can be erased only by exposing it to strong ultraviolet light. That UV light usually has a wavelength of 253.7 nm (for optimum erasure time) and belongs to the UVC range of UV light. EPROMs are easily recognizable by the transparent fused quartz window in the top of the package, through which the silicon chip can be seen, and which permits exposure to UV light during erasing.
· EEPROM: EEPROM (also written E2PROM and pronounced e-e-prom or simply e-squared), which stands for Electrically Erasable Programmable Read-Only Memory, is a type of non-volatile memory used in computers and other electronic devices to store small amounts of data that must be saved when power is removed, e.g., calibration tables or device configuration. When larger amounts of static data are to be stored (such as in USB flash drives) a specific type of EEPROM such as flash memory is more economical than traditional EEPROM devices.

SECONDARY STORAGE DEVICES

(EXTERNAL MEMORY)

External memory can generally be classified as either magnetic or optical, or a combination called magneto-optical. A magnetic storage device, such as a computer's hard drive, uses a surface coated with material that can be magnetized in two possible ways. The surface rotates under a small electromagnet that magnetizes each spot on the surface to record a 0 or 1. To retrieve data, the surface passes under a sensor that determines whether the magnetism was set for a 0 or 1. Optical storage devices such as a compact disc (CD) player use lasers to store and retrieve information from a plastic disk. Magneto-optical memory devices use a combination of optical storage and retrieval technology coupled with a magnetic medium.

Magnetic Media:
Memory stored on external magnetic media includes magnetic tape, hard disks, and floppy disks. Magnetic tape is a form of external computer memory used primarily for backup storage. Like the surface of a magnetic disk, the surface of tape is coated with a material that can be magnetized. As the tape passes over an electromagnet, individual bits are magnetically encoded. Computer systems using magnetic tape storage devices employ machinery similar to that used with analog tape: open-reel tapes, cassette tapes, and helical-scan tapes (similar to video tape).
Another form of magnetic memory uses a spinning disk coated with magnetic material. As the disk spins, a sensitive electromagnetic sensor, called a read-write head, scans across the surface of the disk, reading and writing magnetic spots in concentric circles called tracks.
Magnetic disks are classified as either hard or floppy, depending on the flexibility of the material from which they are made. A floppy disk is made of flexible plastic with small pieces of a magnetic material imbedded in its surface. The read-write head touches the surface of the disk as it scans the floppy. A hard disk is made of a rigid metal, with the read-write head flying just above its surface on a cushion of air to prevent wear.

Optical Media:
Optical external memory uses a laser to scan a spinning reflective disk in which the presence or absence of nonreflective pits in the disk indicates 1s or 0s. This is the same technology employed in the audio CD. Because its contents are permanently stored on it when it is manufactured, it is known as compact disc-read only memory (CD-ROM). A variation on the CD, called compact disc-recordable (CD-R), uses a dye that turns dark when a stronger laser beam strikes it, and can thus have information written permanently on it by a computer.

Magneto-Optical Media:
Magneto-optical (MO) devices write data to a disk with the help of a laser beam and a magnetic write-head. To write data to the disk, the laser focuses on a spot on the surface of the disk heating it up slightly. This allows the magnetic write-head to change the physical orientation of small grains of magnetic material (actually tiny crystals) on the surface of the disk. These tiny crystals reflect light differently depending on their orientation. By aligning the crystals in one direction a 0 can be stored, while aligning the crystals in the opposite direction stores a 1. Another, separate, low-power laser is used to read data from the disk in a way similar to a standard CD-ROM. The advantage of MO disks over CD-ROMs is that they can be read and written to. They are, however, more expensive than CD-ROMs and are used mostly in industrial applications. MO devices are not popular consumer products.

Cache Memory:
CPU speeds continue to increase much more rapidly than memory access times decrease. The result is a growing gap in performance between the CPU and its main RAM memory. To compensate for the growing difference in speeds, engineers add layers of cache memory between the CPU and the main memory. A cache consists of a small, high-speed memory system that holds recently used values. When the CPU makes a request to fetch or store a memory value, the CPU sends the request to the cache. If the item is already present in the cache, the cache can honor the request quickly because the cache operates at higher speed than main memory. For example, if the CPU needs to add two numbers, retrieving the values from the cache can take less than one-tenth as long as retrieving the values from main memory. However, because the cache is smaller than main memory, not all values can fit in the cache at one time. Therefore, if the requested item is not in the cache, the cache must fetch the item from main memory.
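A minimal sketch of the cache behaviour described above, assuming a tiny 16-entry cache and an arbitrary eviction rule (real caches use more careful replacement policies and hardware lookup):

```python
# Small fast cache in front of (slow) main memory: hits are served from
# the cache; misses fall through and the fetched value is cached.
main_memory = {addr: addr * 2 for addr in range(1024)}
cache = {}
CACHE_CAPACITY = 16
hits = misses = 0

def load(addr):
    global hits, misses
    if addr in cache:
        hits += 1                      # fast path: value already cached
        return cache[addr]
    misses += 1                        # slow path: go to main memory
    if len(cache) >= CACHE_CAPACITY:   # evict an arbitrary entry when full
        cache.pop(next(iter(cache)))
    cache[addr] = main_memory[addr]
    return cache[addr]

load(5); load(5); load(6)
print(hits, misses)  # 1 2
```

The second access to address 5 is the fast case: the value is already present, so main memory is never consulted.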
Cache cannot replace conventional RAM because cache is much more expensive and consumes more power. However, research has shown that even a small cache that can store only 1 percent of the data stored in main memory still provides a significant speedup for memory access. Therefore, most computers include a small, external memory cache attached to their RAM. More important, multiple caches can be arranged in a hierarchy to lower memory access times even further. In addition, most CPUs now have a cache on the CPU chip itself. The on-chip internal cache is smaller than the external cache, which is smaller than RAM. The advantage of the on-chip cache is that once a data item has been fetched from the external cache, the CPU can use the item without having to wait for an external cache access.


Examples of some SECONDARY STORAGE DEVICES
( Detailed explanation):
COMPACT DISC:




Media type: Optical disc
Encoding: Various
Capacity: Typically up to 700 MB
Read mechanism: 780 nm wavelength semiconductor laser
Developed by: Philips & Sony
Usage: Audio and data storage

A Compact Disc (or CD) is an optical disc used to store digital data, originally developed for storing digital audio. The CD, available on the market since late 1982, remains the standard playback medium for commercial audio recordings to the present day.
Standard CDs have a diameter of 120 mm and can hold up to 80 minutes of audio. There is also the Mini CD, with diameters ranging from 60 to 80 mm; they are sometimes used for CD singles, storing up to 24 minutes of audio.
The technology was later adapted and expanded to include data storage (CD-ROM), write-once audio and data storage (CD-R), rewritable media (CD-RW), SACD, VCD, SVCD, PhotoCD, Picture CD, CD-i, and Enhanced CD. CD-ROMs and CD-Rs remain widely used technologies in the computer industry. The CD and its extensions have been extremely successful: in 2004, worldwide sales of CD audio, CD-ROM, and CD-R reached about 30 billion discs.[1] By 2007, 200 billion CDs had been sold worldwide.

CD-ROM:
CD-ROM, in computer science, acronym for compact disc read-only memory, a rigid plastic disk that stores a large amount of data through the use of laser optics technology. Because they store data optically, CD-ROMs have a much higher memory capacity than computer disks that store data magnetically. However, CD-ROM drives, the devices used to access information on CD-ROMs, can only read information from the disc, not write to it.
The underside of the plastic CD-ROM disk is coated with a very thin layer of aluminum that reflects light. Data is written to the CD-ROM by burning microscopic pits into the reflective surface of the disk with a powerful laser. The data is in digital form, with pits representing a value of 1 and flat spots, called land, representing a value of 0. Once data is written to a CD-ROM, it cannot be erased or changed, and this is the reason it is termed read-only memory. Data is read from a CD-ROM with a low power laser contained in the drive that bounces light—usually infrared—off of the reflective surface of the disk and back to a photo detector. The pits in the reflective layer of the disk scatter light, while the land portions of the disk reflect the laser light efficiently to the photo detector. The photo detector then converts these light and dark spots to electrical impulses corresponding to 1s and 0s. Electronics and software interpret this data and accurately access the information contained on the CD-ROM.
CD-ROMs can store large amounts of data and so are popular for storing databases and multimedia material. The most common format of CD-ROM holds approximately 630 megabytes (see Byte). By comparison, a regular floppy disk holds approximately 1.44 megabytes.
CD-ROMs and Audio CDs are almost exactly alike in structure and data format. The difference between the two lies in the device used to read the data—either a CD-ROM player or a compact disc (CD) player. CD-ROM players are used almost exclusively as computer components or peripherals. They may be either internal (indicating they fit into a computer’s housing) or external (indicating they have their own housing and are connected to the computer via an external port).
Both types of players spin the discs to access data as they read the data with a laser device. CD-ROM players only spin the disc to access a sector of data and copy it into main memory for use by the computer, while audio CD players spin the disc throughout the time the recording is played, feeding the signal directly to an audio amplifier.
The most important distinguishing feature among CD-ROM players is their speed, which indicates how fast they can read data from the disc. A single-speed CD-ROM player reads 150,000 bytes of data per second. Double-speed (2X), triple-speed (3X), quadruple-speed (4X), six-times speed (6X), and eight-times speed (8x) CD-ROM players are also widely available.
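The speed ratings above are simple multiples of the single-speed rate, so the transfer rate of any NX drive follows directly:

```python
# A single-speed (1X) CD-ROM player reads 150,000 bytes per second;
# an NX drive reads N times that.
BASE_RATE = 150_000  # bytes per second at 1X

def transfer_rate(speed_multiplier):
    return BASE_RATE * speed_multiplier

print(transfer_rate(8))  # 1,200,000 bytes/s for an 8X drive
```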
Other important characteristics of CD-ROM players are seek time and data transfer rate. The seek time (also called the access time) measures how long it takes for the laser to access a particular segment of data. A typical CD-ROM takes about a third of a second to access data, as compared to a typical hard drive, which takes about 10 milliseconds (thousandths of a second) to access data. The data transfer rate measures how quickly data is transferred from the disk media to the computer’s main memory.
The computer industry also manufactures blank, recordable compact discs, called CD-Rs (compact disc-recordable), that users can record data onto for one-time, permanent storage using CD-R drives. Compact disc-rewriteable (CD-RWs) are similar to CD-Rs, but can be erased and rewritten multiple times. Another technology that allows the user to write to a compact disc is the magneto-optical (MO) disk, which combines magnetic and optical data storage. Users can record, erase, and save data to these disks any number of times using special MO drives.


CD-R:


A CD-R (Compact Disc-Recordable) is a variation of the Compact Disc invented by Philips and Sony. CD-R is a Write Once Read Many (WORM) optical medium (though the whole disk does not have to be entirely written in the same session) and retains a high level of compatibility with standard CD readers (unlike CD-RW which can be rewritten but has much lower compatibility and the discs are considerably more expensive).

CD-RW:

Compact Disc ReWritable (CD-RW) is a rewritable optical disc format. Known as CD-Erasable (CD-E) during its development, CD-RW was introduced in 1997, and was preceded by the never officially released CD-MO in 1988. While a prerecorded compact disc has its information permanently written onto its polycarbonate surface, a CD-RW disc contains a phase-change alloy recording layer composed of a phase change material, most often AgInSbTe, an alloy of silver, indium, antimony and tellurium[1]. An infra-red laser beam is used to selectively heat and melt, at 400 degrees (Celsius), the crystallized recording layer into an amorphous state or to anneal it at a lower temperature back to its crystalline state. The different reflectance of the resulting areas make them appear like the pits and lands of a prerecorded CD.
CD-RW discs are usually produced in the most common CD-R disc capacities such as 650 and 700 MB, while smaller and larger capacities are rarer. CD-RW recorders typically handle the most common capacities best. In theory a CD-RW disc can be written and erased roughly 1000 times[2], although in practice this number is much lower. CD-RW recorders can also read CD-R discs. When used with traditional recording software, CD-RWs act very much like CD-Rs and are subject to the same restrictions; i.e., they can be extended, but not selectively overwritten, and writing sessions must be closed before they can be read in CD-ROM drives or players.

VIDEO CD:
Media type: Optical disc
Encoding: MPEG-1 video + audio
Capacity: Up to 800 MB
Read mechanism: 780 nm wavelength semiconductor laser
Standard: White Book
Developed by: Sony & Philips
Usage: Audio and video storage
Extended to: SVCD


Video CD (abbreviated as VCD, and also known as View CD, Compact Disc digital video) is a standard digital format for storing video on a Compact Disc. VCDs are playable in dedicated VCD players, most modern DVD-Video players, personal computers, and some video game consoles.
The VCD standard was created in 1993[1] [2] by Sony, Philips, Matsushita, and JVC and is referred to as the White Book standard.

DVD:
Media type: Optical disc
Capacity: ~4.7 GB (single-sided single-layer), ~8.54 GB (single-sided double-layer)
Read mechanism: 650 nm laser, 1350 kB/s (1×)
Write mechanism: 1350 kB/s (1×)
Usage: Data storage, video, audio, games

DVD (also known as "Digital Versatile Disc" or "Digital Video Disc" - see Etymology) is a popular optical disc storage media format. Its main uses are video and data storage. Most DVDs are of the same dimensions as compact discs (CDs) but store more than six times as much data.
Variations of the term DVD often describe the way data is stored on the discs: DVD-ROM has data which can only be read and not written, DVD-R and DVD+R can be written once and then function as a DVD-ROM, and DVD-RAM, DVD-RW, or DVD+RW hold data that can be erased and thus re-written multiple times. The wavelength used by standard DVD lasers is 650 nm[1], and thus has a red color.
DVD-Video and DVD-Audio discs respectively refer to properly formatted and structured video and audio content. Other types of DVDs, including those with video content, may be referred to as DVD-Data discs. As next generation High definition optical formats also use a disc identical in some aspects yet more advanced than a DVD, such as Blu-ray Disc, the original DVD is often given the retronym SD DVD (for standard definition).[2][3]

MAGNETIC HARD DISC:

The hard disk plays a very significant role in a number of ways. First, it affects system performance: the speed at which a PC boots up and loads programs is directly related to the speed of the hard disk. Hard disk performance is also crucial when multitasking, or when handling huge amounts of data, as in video or audio editing.
In general terms, a hard disk uses round, rugged, rigid substrates called platters, usually made of an aluminum alloy, glass, or ceramic, coated on both sides with a special material designed to store information in the form of magnetic patterns. A typical hard disk uses two or more platters that are either 5.25 or 3.5 inches in diameter. The platters are mounted in a stack on a spindle. The disk pack is sealed and mounted in a disk drive; such a drive is also called a Winchester drive. The size of the hard disk is governed by the size of the platters. The drive contains a motor that rotates the disk pack about its axis at a speed of about 3600 revolutions per minute.
The platters are mounted by cutting a hole in the center and stacking them on a spindle. A typical hard disk assembly is shown in fig 5.13. The platters rotate at high speed, driven by a special motor connected to the spindle.
Each platter has two heads, one for the top surface and another for the bottom; so a hard disc with two platters has four heads in all. The head arm assembly can move in or out in the radial direction. The position of the read/write heads is controlled by a device called an actuator.
Each platter is capable of storing a billion or so bits of data. The data is recorded on the surface of the disc in circular tracks; a set of concentric tracks is recorded on each surface. The recording density on each track is of the order of 16 KB/inch for a disc of 5.25” diameter. To allow easy access to information, individual bits are organized into larger chunks called sectors. In most systems a sector contains 512 bytes of user data plus addressing information used by the drive system.
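Total capacity follows directly from the geometry: surfaces × tracks per surface × sectors per track × bytes per sector. The numbers below are assumptions for illustration, not the geometry of any particular drive.

```python
# Illustrative disk geometry (invented values, not a real drive).
platters = 2
surfaces_per_platter = 2        # top and bottom of each platter
tracks_per_surface = 1000
sectors_per_track = 63
bytes_per_sector = 512          # user data per sector

capacity = (platters * surfaces_per_platter * tracks_per_surface
            * sectors_per_track * bytes_per_sector)
print(capacity)  # 129024000 bytes, about 129 MB
```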
A set of disc drives is connected to a disc controller. When it comes to accessing stored data, the disc spins very fast so that any part of the disc can be reached quickly. A read operation involves the following steps:
The first step is to figure out the location of the desired data. The application, operating system, and the system BIOS together determine which part of the disc to read. (The address of the location is expressed in terms of the geometry of the hard disc.)
The disc controller decodes the address for the read operation and instructs the actuator to move the head to the selected track. Once the head is placed on the correct track, it begins to read the track, looking for the desired sector. It waits for the disc to rotate the correct sector under the head and then reads the contents of the sector.
In peripherals such as hard discs, an interface is required to provide communication between the computer bus and the hard disc. The most popular interface used in modern hard discs is the IDE (Integrated Drive Electronics) interface. The maximum capacity of the hard disc is limited by the number of bits used for addressing in the system BIOS; the limit was initially 528 MB, later 4.2 GB, and most recently 80 GB.
MAGNETIC TAPE:

Magnetic tape storage was an important method of storing large amounts of digital data at a low cost per bit; the space required to store the tapes was its main limitation. Magnetic tape is still a common medium for storing voluminous data.




The magnetic tape is manufactured from a Mylar plastic base with a thin coating of ferromagnetic material. It is typically 0.5 inch wide and 0.002 inch thick. Widths range from the 0.25 inch used in small cassettes for portable recorders to the 2 to 3 inch widths commonly used with large mainframes. Common tape lengths are 2400 ft, 1200 ft, and 600 ft, wound on a reel. Recording densities of about 800, 1600, or 6250 bpi (bits per inch) are normally used. Generally there are several tracks; each track records one bit of a nine-bit byte (8 information bits and one check bit) written in parallel. Since all bits of a byte are written in parallel, the number of bytes recorded per inch of tape equals the per-track bit density.
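Since a byte occupies one lengthwise position across the parallel tracks, a rough upper bound on tape capacity is just density times length. The figures below use the 1600 bpi and 2400 ft values mentioned above and ignore inter-record gaps.

```python
# Rough tape capacity: bytes/inch * inches of tape. With nine parallel
# tracks, bytes per inch equals the per-track bit density.
density_bpi = 1600                 # bytes per inch along the tape
length_feet = 2400
INCHES_PER_FOOT = 12

capacity = density_bpi * length_feet * INCHES_PER_FOOT
print(capacity)  # 46080000 bytes, about 46 MB before gaps
```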
In the “write” mode, current through the head coil creates magnetic flux in the iron core that spills out across the gap and magnetizes the tape; the tape remains magnetized until altered. The polarity of the magnetized region on the tape is determined by the direction of current flow: one polarity can represent a “ONE” and the other a “ZERO”.
Data can be read from the tape with the same head. When a magnetized region of the tape passes over the gap in the head, a voltage, typically a few millivolts, is induced in the coil. The polarity of the voltage corresponds to the polarity of the magnetized region being read. An amplifier raises the signal to the required logic level.

Advantages & Disadvantages:
The main advantage of a tape storage system is that a very large number of bits can be stored at a very low cost per bit. Tape storage is non-volatile, and the tape can be erased and reused many times.
The main disadvantage is the large access time: the data can be retrieved only serially, and a slow system may take several seconds to locate a file. Another disadvantage is the bulky mechanical system required to keep the tape speed constant and to start and stop the tape without breaking it.

COMPUTER LANGUAGES:

INTRODUCTION:

Programming is a critical step in data processing. A programming language is a means of communication between the programmer and the computer. The process of writing program instructions for an analyzed problem is called coding. A computer executes programs only after they are represented internally in binary form. Programs written in any other language must be translated into binary language before the computer can execute them. Different programming languages serve different needs. Programs written for computers may be classified mainly in the following categories of languages.
· MACHINE LANGUAGE
· ASSEMBLY LANGUAGE
· HIGH LEVEL LANGUAGE

MACHINE LANGUAGE:

Machine code or machine language is a low level language and is considered to be a first generation language. It consists of strings of 0’s and 1’s and is the only language that can be directly executed by the computer. It is the most basic form of programming, and hence explicit instructions must be given to the machine to perform every operation. An instruction prepared in machine language will usually have two parts:
· Operation Code (Op-Code): It tells the computer what operation is to be performed.
· Operand: It tells the computer where to find or store the data to be manipulated.
Every CPU model has its own machine code, or instruction set. Successor or derivative processor designs may completely include all the instructions of a predecessor and may add additional instructions. Some nearly compatible processor designs may have slightly different effects for similar instructions. Occasionally a successor processor design will discontinue or alter the meaning of a predecessor's instruction code, making migration of machine code between the two processors more difficult. Even if the same model of processor is used, two different systems may not run the same machine code if they differ in memory arrangement, operating system, or peripheral devices, because the machine code has no embedded information about the configuration of the system.
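To make the op-code/operand split concrete, here is a toy interpreter in Python. The mnemonics, addresses, and memory contents are all invented for illustration; a real machine encodes these fields as binary numbers.

```python
# Each instruction is an (op-code, operand) pair: the op-code says what
# to do, the operand says where the data lives.
memory = {0: 10, 1: 32, 2: 0}
acc = 0                                  # accumulator register

program = [("LOAD", 0),   # acc <- memory[0]
           ("ADD", 1),    # acc <- acc + memory[1]
           ("STORE", 2)]  # memory[2] <- acc

for opcode, operand in program:
    if opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc

print(memory[2])  # 42
```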

ADVANTAGES:

1. It is faster in execution.
2. It makes efficient use of storage.
3. It can be used to manipulate the individual bits in byte of computer storage.

DISADVANTAGES:

1. It is machine dependent, i.e. programs written for one machine cannot be used on another machine with a different hardware organization.
2. It is difficult to understand and develop a program using machine language, because the programmer must know the detailed architecture of the computer.
3. There are chances for frequent errors in machine language programs as they contain only 0’s and 1’s.
4. It is difficult to correct and modify machine language programs, because the programmer must know the binary code for each instruction in the instruction set.
5. It requires high level of programming skill and concentration.
6. It is tedious and time consuming.

ASSEMBLY LANGUAGE:

An assembly language is a second generation low level language for programming computers. It uses symbolic representation of the numeric machine codes and other constants needed to program a particular CPU architecture. This representation is usually defined by the hardware manufacturer, and is based on abbreviations that help the programmer remember individual instructions, registers, etc.
For example, ADD is used as a symbolic op-code to represent addition.
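What an assembler does with a mnemonic like ADD can be sketched as a table lookup plus bit-packing. The op-code values and the 4-bit field widths below are invented for illustration:

```python
# Toy assembler: translate a mnemonic + operand into a numeric
# instruction word (4-bit op-code, 4-bit operand address).
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}

def assemble(line):
    mnemonic, operand = line.split()
    return (OPCODES[mnemonic] << 4) | int(operand)

print(bin(assemble("ADD 3")))  # 0b100011
```

The programmer writes the readable form on the left; the machine executes the bit pattern on the right.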

ADVANTAGES:

1. It gives more readability than machine language.
2. It allows the use of symbolic register names, so it is easier to understand and use than machine language.
3. It permits the programmer to assign names to memory locations. So the programmer need not know the exact numeric location of data in memory.
4. It is easier to locate and correct errors because of use of mnemonics and symbolic field names.
5. It requires less time to program when compared to machine language.

DISADVANTAGES:

1. It is also machine dependent, i.e. programs written for one microprocessor cannot be used in another microprocessor with different architecture.
2. The programmer should have good knowledge about hardware.
3. They are less efficient than machine language because they take extra time for conversion.


HIGH LEVEL LANGUAGE:
A high level language summarizes many machine language instructions into a single statement. These procedure-oriented languages consist of a set of symbols and instructions, and enable the programmer to write instructions using English words and familiar mathematical symbols. High level languages deal with variables, arrays, and complex arithmetic or Boolean expressions. Other features such as string handling routines, object oriented language features, and file input/output may also be present.
Some commonly used high level languages are C, C++, C#, BASIC.
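The contrast is easy to see side by side: the single high-level statement below stands for a short sequence of machine-level steps (the commented mnemonics are illustrative, not any particular CPU's instruction set).

```python
# High-level form: one statement.
total = 7 + 35

# Roughly what it expands into at the machine level (illustrative):
#   LOAD  addr_of_7       ; fetch the first operand into a register
#   ADD   addr_of_35      ; add the second operand
#   STORE addr_of_total   ; write the result back to memory
print(total)  # 42
```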

ADVANTAGES:

1. It is machine independent
2. It is easy to read and understand.
3. Fewer errors; errors can be removed easily.
4. Programs can be easily and swiftly changed or modified as per requirements.
5. It requires less time to program and maintain.
6. The cost of all phases of program preparation is lower.

DISADVANTAGES:
1. They are less efficient than machine language because they take extra time for conversion.
2. It is impractical to write programs in HLLs (except the C family) to perform certain system oriented functions.
3. In general, HLLs require more memory.

SYSTEM SOFTWARE

The hardware of a computer system cannot do anything by itself; software is required to direct what it should do. There are commonly two types of software used in a computer system, namely system software and application software. An application program is primarily concerned with the solution of some problem, using the computer as a tool: the focus is on the application, not the computing system. Examples: MS Word, FoxPro, Matlab etc. System software, on the other hand, is intended to support the operation and use of the computer itself, rather than any particular application; for this reason, it is usually related to the structure of the machine on which it is to run. In some cases it does not depend upon the type of computing system being supported. The major functions performed by system software are,

· Receiving and interpreting user commands.
· Entering and editing application programs and storing them as files in secondary storage
· Managing the storage and retrieval of files in secondary storage.
· Running standard application programs
· Controlling input/Output units
· Linking and running user-written programs.
· Translating programs from source code prepared by the user into object form consisting of machine instruction.

In short system software is responsible for the coordination of all activities in a computing system. Some system software programs are compilers, assemblers, loaders and linkers, operating systems, DBMS, Text editors etc.

ELEMENTS OF SYSTEM SOFTWARE:
[Figure: elements of system software. The user interacts with application software, which runs on top of the system software: the operating system (executive scheduler, interpreter, drivers) and the program development software (editor, loader, assembler, compiler, debugger). The system software in turn controls the hardware.]

OPERATING SYSTEM

A general-purpose operating system is a program which makes the computer easier to use. The operating system manages the resources of the computer in an attempt to meet overall system goals such as efficiency. The operating system acts as an interface between the hardware and the software, and oversees all the operations of the computer. The OS allows a user to create, print, copy, delete, display and in other ways work with files. It also allows a user to load and execute other programs. The OS insulates the user from needing to know the intricate hardware details of the system in order to use it.
The functional relationship between the OS and the hardware of the computer is shown in the figure below.

[Figure: the operating system mediates between the microprocessor, memory (RAM, ROM), secondary memory (disk, CD), input devices and output devices.]

The figure shows the relationship and the hierarchy among the hardware, the OS, and the high-level languages and application programs. The OS is closest to the hardware and the application programs are farthest from it. When the system is turned on, the OS takes charge of the system; it stays in the background and provides channels of communication to application programs.
Operating systems are classified in several ways; the most common classification is based on the kind of user interface provided. The user interface is meant to serve the needs of the various groups of people who must deal with the computer. Another way of classifying OSs is by the number of users the system can support at one time. In a multiprogramming system, the OS takes care of switching the CPU among the various user jobs. A multiprocessing system is similar to a multiprogramming system except that there is more than one CPU available. Multiprogramming improves the performance of a system by allowing the resources to be shared among several jobs. OSs are also classified by the type of access provided to a user. In a batch processing system, a job is described by a sequence of control statements stored in a machine-readable form. A time-sharing system provides interactive or conversational access to a number of users. A real-time system is designed to respond quickly to external signals such as those generated by data sensors.
Each computer has its own OS. In the 1970s, when most computers were designed using 8-bit microprocessors, the CP/M (Control Program/Monitor) OS was in common use. The 1980s began the era of personal computers based on 16-bit microprocessors, and CP/M was replaced by MS-DOS, which is quite similar to CP/M. In the 1990s, 32-bit processors came into wide use in microcomputers, and MS-DOS was gradually replaced by newer OSs such as Microsoft Windows, IBM OS/2, UNIX and LINUX.
MS-DOS is a 16-bit OS developed by Microsoft. Windows is a powerful OS developed by Microsoft, which is widely used in computers all over the world. It is a 32-bit graphical user interface OS. OS/2 is a 32-bit, single-user, multitasking OS designed by IBM to exploit the powerful features of recent 32-bit (and 64-bit) microprocessors. It is compatible with DOS and Windows application programs. It is well suited for multimedia applications on CD-ROM and supports various telecommunication features such as e-mail, fax and telephone voicemail. UNIX is a multi-user, multitasking OS. It is independent of any particular hardware structure and is widely used in engineering, scientific and research environments. It is well suited for networking and graphical environments and is not limited by any memory constraints. LINUX is a very rugged and very stable OS developed by Linus Torvalds. It is gaining popularity day by day because of its many features. It can run on most common desktop and network platforms. LINUX, with its power and performance, is gradually outshining the Windows OS. It is still evolving and improving, and major IT players are extending their support to the LINUX platform.

COMPILER AND INTERPRETER:

Programs which need to do a lot of bit fiddling and hardware manipulation are usually written in assembly language because this level gives direct hardware control. However, business, scientific and other programs that involve mostly manipulating large amounts of data are usually written in a higher-level language such as BASIC, Pascal or C. Instructions written in these languages are known as statements rather than mnemonics, and they are machine independent. To convert high-level language programs into machine code, we make use of another program called either a compiler or an interpreter.
The following figure shows in flowchart form how an interpreter executes a high-level language program: it reads a statement of the source program, translates it into machine code and, if it doesn't need information from another statement, executes the code for that statement immediately. It then reads the next high-level language source statement, translates it and executes it. BASIC programs are often executed in this way. The advantage of using an interpreter is that if an error is found you can just correct the source program and immediately rerun it. The disadvantage of the interpreter approach is that an interpreted program runs five to twenty-five times slower than the same program will run after being compiled. This is because of the translation time required for each statement every time the program is run.
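The read-translate-execute loop just described can be sketched in Python for a hypothetical toy language with only two statements, LET and PRINT (the syntax and the interpret function are invented here purely for illustration):

```python
# A minimal sketch of an interpreter loop for a hypothetical toy
# language. Each source statement is read, "translated" and executed
# immediately before the next one is read -- the loop in the flowchart.

def interpret(source):
    variables = {}   # the interpreter's working storage
    outputs = []
    for statement in source.splitlines():  # read a source statement
        statement = statement.strip()
        if not statement:
            continue
        # Translate and execute the statement immediately.
        if statement.startswith("LET"):
            _, name, _, value = statement.split()  # e.g. LET x = 5
            variables[name] = int(value)
        elif statement.startswith("PRINT"):
            _, name = statement.split()            # e.g. PRINT x
            outputs.append(variables[name])
        else:
            raise SyntaxError("unknown statement: " + statement)
    return outputs

program = """
LET x = 5
PRINT x
"""
print(interpret(program))   # -> [5]
```

Note that the translation work for a statement inside a loop would be repeated on every iteration, which is exactly why interpreted programs run slower than compiled ones.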

[Flowchart: interpreter operation. Start → create source program → read a source statement → translate the statement to machine code → execute the statement → if it is not the last statement, read the next one; otherwise stop.]

The following figure shows how a compiler fits into the translation-execution process. A compiler program reads through the entire high-level language source program and, in two or more passes through it, translates it all to a relocatable machine code version. Before the program can be run, however, this relocatable object code version must be linked with any other required object modules from the system library, a user library, or assembly language procedures. The output file from the linker is then located, which means that it is given absolute addresses so that it can be loaded into memory. Some systems, incidentally, combine two or more of the link, locate and load functions in a single program. Once the located program is loaded into memory, the entire program can be run without any further translation. Therefore, it will run much faster than an interpreter would execute it. The major disadvantage of the compiler approach is that, when an error is found, it usually must be corrected in the source program and the entire compile-load sequence repeated. Calling assembly language procedures from an interpreted high-level language is quite messy, because of the way the interpreter uses memory. But calling assembly language procedures from compiled programs is much simpler, because the object modules produced by the assembler can simply be linked with the object modules produced by the compiler and the object modules from libraries.
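The translate-once, run-many-times idea can be illustrated with Python's built-in compile() and exec() functions. This is only a rough analogy: Python's compile() produces byte code for Python's own virtual machine, not relocatable native machine code, and there is no separate link or locate step.

```python
# Translate the whole source text once into a code object...
source = "result = sum(range(10))"
code_object = compile(source, "<example>", "exec")  # one-time translation

# ...then execute the already-translated code as many times as needed,
# with no further translation cost on each run.
namespace = {}
exec(code_object, namespace)
print(namespace["result"])   # -> 45
```

Contrast this with the interpreter loop, which would pay the translation cost for the statement every time it is executed.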


[Flowchart: compiler operation. Start → create source program → compile to relocatable machine code → link → locate → execute the entire program → stop.]

ASSEMBLER:

An assembler is a program that translates source code written in assembly language mnemonics into the correct binary code, called OBJECT code. While using an assembler, certain directions need to be given to the system. These directions are called assembler directives or pseudo instructions. These statements are not translated into machine instructions during assembly. Instead, they provide instructions to the assembler itself. Some of the important assembler directives are

ORG – ORG 5000 means that the next block of instructions should be stored in memory locations starting at 5000.
END – End of assembly.
EQU – EQU is used to assign a name to a constant. For example, RAM EQU 50 assigns the value 50 to the name RAM, so that whenever RAM is found in the program, the assembler substitutes the value 50 for it.

The assembler features are the following:
1. It translates mnemonics into binary code with speed and accuracy.
2. The assembler assigns appropriate value to the symbols used in a program. This facilitates specifying jump locations.
3. It is easy to insert or delete instruction in a program; the assembler can reassemble the entire program quickly with new memory locations and modified addresses for jump locations. This avoids rewriting the program manually.
4. The assembler checks for syntax errors, such as wrong labels and expressions, and provides error messages. However, it cannot check logic errors in a program.
5. The assembler can reserve memory locations for data or results.
6. A debugger program can be used in conjunction with the assembler to test and debug an assembly language program.

The assembler will read the source file of the program from the disk where you saved it after editing. An assembler usually reads the source file more than once (a two-pass assembler). On the first pass through the source program, the assembler determines the displacement of named data items and the offset of labels, and puts this information in a symbol table. On the second pass through the source program, the assembler produces the binary code for each instruction and assigns an address to each. The assembler accordingly generates two files. The first file is called the OBJECT file, which contains the binary codes of the instructions and information about the addresses of the instructions. This file contains the information that will eventually be loaded into memory and executed. The second file generated by the assembler is called the ASSEMBLER LIST FILE. This file contains the assembly language statements, the binary code for each instruction, and the offset for each instruction. This file is usually sent to a printer for a printout. The assembler listing will also indicate any typing or syntax errors you made in typing the source program.
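The two-pass scheme described above can be sketched in Python for a hypothetical toy assembly language. The mnemonics, opcode values and one-location-per-word layout are all invented for illustration; a real assembler also handles directives such as ORG and EQU, expressions, and multi-byte encodings.

```python
# A minimal sketch of a two-pass assembler for a hypothetical toy
# assembly language (opcodes and encoding are invented).

OPCODES = {"NOP": 0x00, "INC": 0x04, "JMP": 0xC3}  # toy opcode table

def assemble(lines, origin=0x5000):
    # Pass 1: record the address of every label in a symbol table.
    symbol_table = {}
    address = origin
    for line in lines:
        line = line.strip()
        if line.endswith(":"):              # a label definition
            symbol_table[line[:-1]] = address
        elif line:
            mnemonic, *operands = line.split()
            address += 1 + len(operands)    # opcode word + operand words
    # Pass 2: translate mnemonics to opcodes, resolving label operands
    # from the symbol table built on pass 1.
    object_code = []
    for line in lines:
        line = line.strip()
        if not line or line.endswith(":"):
            continue
        mnemonic, *operands = line.split()
        object_code.append(OPCODES[mnemonic])
        for op in operands:
            object_code.append(symbol_table[op])  # label -> address
    return symbol_table, object_code

src = ["START:", "INC", "NOP", "JMP START"]
symbols, code = assemble(src)
print(symbols)   # -> {'START': 20480}  (i.e. 0x5000, the origin)
print(code)      # -> [4, 0, 195, 20480]  (INC, NOP, JMP, address of START)
```

Two passes are needed because a jump can refer to a label defined further down in the source: pass 1 sizes every instruction and assigns every label an address, so that pass 2 can emit complete code with all forward references resolved.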
