
Thursday, June 25, 2009

Hardware Protection

HARDWARE PROTECTION

Dual-Mode Operation

•Sharing system resources requires operating system to ensure
that an incorrect program cannot cause other programs to
execute incorrectly.

•Provide hardware support to differentiate between at least two
modes of operations.

1. User mode – execution done on behalf of a user.

2. Monitor mode (also supervisor mode or system mode) –
execution done on behalf of operating system.

· Mode bit added to computer hardware to indicate the current
mode: monitor (0) or user (1).
· When an interrupt or fault occurs, the hardware switches to monitor mode.

[Figure: mode transitions between user and monitor mode. An interrupt or fault switches the CPU to monitor mode; a set-user-mode instruction returns it to user mode.]

· Privileged instructions can be issued only in monitor mode.
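To illustrate dual-mode operation, here is a toy model in C of the mode-bit check described above. Everything in it (the enum, the variable, the fault handling) is invented for illustration; on real hardware the check is performed by the CPU itself on every privileged instruction, not by software:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical encoding matching the slides: monitor = 0, user = 1. */
enum mode { MONITOR = 0, USER = 1 };

static enum mode mode_bit = USER;   /* the current CPU mode */

/* A privileged operation refuses to run unless the mode bit says monitor. */
static void privileged_instruction(void) {
    if (mode_bit != MONITOR) {
        fprintf(stderr, "protection fault: privileged instruction in user mode\n");
        exit(EXIT_FAILURE);        /* real hardware would raise a trap instead */
    }
    /* ... privileged work, e.g. touching an I/O device, would happen here ... */
}

int main(void) {
    mode_bit = MONITOR;            /* as if an interrupt had just occurred */
    privileged_instruction();      /* allowed */
    mode_bit = USER;               /* "set user mode" before running user code */
    privileged_instruction();      /* refused: prints the fault and exits */
    return 0;
}
```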

I/O Protection

•All I/O instructions are privileged instructions.
•Must ensure that a user program can never gain control of
the computer in monitor mode (for example, a user program
that, as part of its execution, stores a new address in the
interrupt vector).
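A hedged illustration of why this matters: on x86 Linux, a user program that tries to execute an I/O port instruction directly is stopped by the hardware protection mechanism. This sketch assumes GCC or Clang on x86 Linux; port 0x80 is chosen arbitrarily:

```c
#include <stdio.h>

int main(void) {
    puts("about to execute a privileged I/O instruction from user mode...");
    /* outb writes the byte in AL to the I/O port in DX. Without ioperm()/iopl()
       privileges, the CPU raises a general-protection fault and the kernel
       kills the process with SIGSEGV; the program never touches the device. */
    __asm__ volatile ("outb %%al, %%dx" : : "a"(0), "d"(0x80));
    puts("never reached");
    return 0;
}
```

Running it prints the first line and the process then dies on the faulting instruction: the kernel, not the program, decides whether device access is allowed.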

Memory Protection

•Must provide memory protection at least for the interrupt vector
and the interrupt service routines.
•In order to have memory protection, add two registers that
determine the range of legal addresses a program may access:
– base register – holds the smallest legal physical memory
address.
– limit register – contains the size of the range.
•Memory outside the defined range is protected.
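A minimal sketch of the base/limit check in C. The register values and the function name are invented for illustration; in a real system this comparison is made by hardware on every memory reference generated in user mode, and a failed check traps to the operating system:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical relocation registers, loaded by the OS in monitor mode. */
static uint32_t base_register  = 0x3000;  /* smallest legal physical address */
static uint32_t limit_register = 0x1000;  /* size of the legal range         */

/* Every user-mode address must satisfy base <= addr < base + limit. */
static bool address_is_legal(uint32_t addr) {
    return addr >= base_register && addr < base_register + limit_register;
}

int main(void) {
    printf("0x3500 legal? %d\n", address_is_legal(0x3500)); /* 1: inside range  */
    printf("0x5000 legal? %d\n", address_is_legal(0x5000)); /* 0: would trap    */
    return 0;
}
```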

CPU Protection

The CPU protection feature enhances the efficiency of an HP device's CPU and Content Addressable Memory (CAM). Some denial-of-service attacks make use of spoofed IP addresses. If the device must create CAM entries for a large number of spoofed IP addresses over a short period of time, this causes excessive CAM utilization. Similarly, if an improperly configured host on the network sends out a large number of packets that are normally processed by the CPU (for example, DNS requests), this causes excessive CPU utilization. The CPU protection feature allows you to configure the HP device to take action automatically when thresholds related to high CPU or CAM utilization are crossed.

10. Storage Hierarchy
The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy. It is designed to take advantage of memory locality in computer programs. Each level of the hierarchy has higher bandwidth, smaller size, and lower latency than the levels below it.
Most modern CPUs are so fast that, for most program workloads, the locality of reference of memory accesses and the efficiency of the caching and memory transfer between the levels of the hierarchy are the practical limitation on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. There is also a space cost: a larger memory object is more likely to overflow a small, fast level and require use of a larger, slower one.

  • Cache
    In computer science, a cache (pronounced /kæʃ/) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to longer access time) or to compute, compared to the cost of reading the cache. In other words, a cache is a temporary storage area where frequently accessed data can be stored for rapid access. Once the data is stored in the cache, it can be used in the future by accessing the cached copy rather than re-fetching or recomputing the original data.
    A cache has proven to be extremely effective in many areas of computing because access patterns in typical computer applications have locality of reference. There are several kinds of locality, but here we are primarily concerned with data that are accessed close together in time (temporal locality). The data might or might not be located physically close to each other (spatial locality).
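As a software analogy, the sketch below implements a tiny direct-mapped cache in C for an expensive function. The function, the cache size, and the placement policy are invented for illustration; the point is that a repeated access is served from the cached copy instead of being recomputed:

```c
#include <stdio.h>

#define CACHE_SIZE 64

static int  cache_tag[CACHE_SIZE];    /* which n occupies each slot     */
static long cache_value[CACHE_SIZE];  /* the cached result for that n   */
static int  cache_valid[CACHE_SIZE];  /* has this slot been filled yet? */

/* Stand-in for an expensive computation or a slow fetch. */
static long expensive_square(int n) {
    printf("slow path: computing %d * %d\n", n, n);
    return (long)n * n;
}

/* Return the value for n, serving it from the cache when possible. */
static long cached_square(int n) {
    int slot = n % CACHE_SIZE;                        /* direct-mapped placement */
    if (!cache_valid[slot] || cache_tag[slot] != n) { /* cache miss              */
        cache_tag[slot]   = n;
        cache_value[slot] = expensive_square(n);
        cache_valid[slot] = 1;
    }
    return cache_value[slot];                         /* hit, or freshly filled  */
}

int main(void) {
    cached_square(5);   /* miss: computes and stores   */
    cached_square(5);   /* hit: no slow path this time */
    return 0;
}
```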
  • Coherency and consistency
    Cache coherency problems can arise when more than one processor refers to the same data. Assuming each processor has cached a piece of data, what happens if one processor modifies its copy of the data? The other processor now has a stale copy of the data in its cache.
    Cache coherency and consistency define the action of the processors to maintain coherence. More precisely, coherency defines what value is returned on a read, and consistency defines when it is available.
    Unlike other Cray systems, cache coherency on Cray X1 systems is supported by a directory-based hardware protocol. This protocol, together with a rich set of synchronization instructions, provides different levels of memory consistency.
    Processors may cache memory from their local node only; references to memory on other nodes are not cached. However, while only local data is cached, the entire machine is kept coherent in accordance with the memory consistency model. Remote reads will obtain the latest “dirty” data from another processor's cache, and remote writes will update or invalidate lines in another processor's cache. Thus, the whole machine is kept coherent.



6. Device Status Table

The Device-Status Table contains an entry for each I/O device, indicating its type, address, and state. [The original post included a diagram of a Device-Status Table here.]
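A hedged sketch of such a table as a C data structure; the field names, state values, and table size are invented for illustration:

```c
#include <stdint.h>

/* Possible states an I/O device can be in. */
enum device_state { DEVICE_IDLE, DEVICE_BUSY, DEVICE_ERROR };

enum device_type { DEVICE_DISK, DEVICE_PRINTER, DEVICE_TERMINAL };

/* One entry per device: its type, address, and current state.
   A real OS would also keep a queue of pending requests per device. */
struct device_entry {
    enum device_type  type;
    uintptr_t         address;   /* device/controller address */
    enum device_state state;
};

/* The device-status table itself: one slot per attached device. */
static struct device_entry device_status_table[16];
```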


Wednesday, June 24, 2009

9. Magnetic Disk

Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2009, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference.

4. User Mode

User mode is one of two distinct execution modes for the CPU (central processing unit) in Linux.
It is a non-privileged mode in which each process (i.e., a running instance of a program) starts out. It is non-privileged in that it is forbidden for processes in this mode to access those portions of memory (i.e., RAM) that have been allocated to the kernel or to other programs. The kernel is not a process, but rather a controller of processes, and it alone has access to all resources on the system.
When a user mode process (i.e., a process currently in user mode) wants to use a service that is provided by the kernel (i.e., access system resources other than the limited memory space that is allocated to the user program), it must switch temporarily into kernel mode, which has root (i.e., administrative) privileges, including root access permissions (i.e., permission to access any memory space or other resources on the system). When the kernel has satisfied the process's request, it restores the process to user mode.
This change in mode is termed a mode switch, which should not be confused with a context switch (i.e., the switching of the CPU from one process to another). On x86 Linux, the traditional way to switch from user mode to kernel mode is to invoke software interrupt 0x80 (the int 0x80 instruction).
An interrupt is a signal to the operating system that an event has occurred, and it results in changes in the sequence of instructions that is executed by the CPU. In the case of a hardware interrupt, the signal originates from a hardware device such as a keyboard (e.g., when a user presses a key), mouse or system clock (a circuit that generates pulses at precise intervals that are used to coordinate the computer's activities). A software interrupt is an interrupt that originates in software, usually by a program in user mode.
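A minimal illustration of a mode switch from a C program on Linux: the syscall() wrapper traps into the kernel (via int 0x80 or the newer syscall/sysenter instructions, depending on the architecture), the kernel performs the write in kernel mode, and control then returns to user mode:

```c
#include <string.h>
#include <sys/syscall.h>   /* SYS_write */
#include <unistd.h>        /* syscall(), STDOUT_FILENO */

int main(void) {
    const char *msg = "this write happens in kernel mode on our behalf\n";
    /* Each syscall() is one round trip: user mode -> kernel mode -> user mode. */
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```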

Tuesday, June 23, 2009

Storage Structure

STORAGE STRUCTURE

  • Main memory
Refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM. The computer can manipulate only data that is in main memory. Therefore, every program you execute and every file you access must be copied from a storage device into main memory. The amount of main memory on a computer is crucial because it determines how many programs can be executed at one time and how much data can be readily available to a program. Because computers often have too little main memory to hold all the data they need, computer engineers invented a technique called swapping, in which portions of data are copied into main memory as they are needed. Swapping occurs when there is no room in memory for needed data. When one portion of data is copied into memory, an equal-sized portion is copied (swapped) out to make room. Now, most PCs come with a minimum of 32 megabytes of main memory. You can usually increase the amount of memory by inserting extra memory in the form of chips.
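A toy sketch of the swapping idea just described, with all names and the round-robin victim choice invented for illustration; real systems swap at page granularity with hardware support:

```c
#include <stdio.h>

#define FRAMES 4   /* toy main memory: room for 4 equal-sized portions */

static int resident[FRAMES];   /* which portion occupies each frame (-1 = free) */
static int next_victim = 0;    /* simple round-robin choice of what to swap out */

/* Ensure `portion` is in main memory, swapping something out if needed. */
static void swap_in(int portion) {
    for (int i = 0; i < FRAMES; i++)
        if (resident[i] == portion) return;            /* already resident */
    for (int i = 0; i < FRAMES; i++)
        if (resident[i] == -1) { resident[i] = portion; return; } /* free frame */
    /* Memory full: copy the victim out to backing store, bring the new one in. */
    printf("swap out portion %d, swap in portion %d\n",
           resident[next_victim], portion);
    resident[next_victim] = portion;
    next_victim = (next_victim + 1) % FRAMES;
}

int main(void) {
    for (int i = 0; i < FRAMES; i++) resident[i] = -1;
    for (int p = 0; p < 6; p++) swap_in(p);   /* portions 4 and 5 force swaps */
    return 0;
}
```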



  • Magnetic Disk
Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2009, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference.
[Figure: moving-head disk mechanism]



  • Magnetic Tape

    • Early secondary-storage medium of choice
    • Persistent, inexpensive, and has large data capacity
    • Very slow access due to sequential nature
    • Used for backup and for storing infrequently-used data
    • Kept on spools
    • Transfer rates comparable to disk if the read/write head is positioned at the data
    • Typical storage capacities are 20 GB to 200 GB


7. Difference between RAM and DRAM

RAM (Random Access Memory) is a generic name for any sort of read/write memory that can be, well, randomly accessed. All computer memory functions as arrays of stored bits, "0" and "1", kept as some kind of electrical state. Some sorts support random access; others (such as the flash memory used in MP3 players and digital cameras) have a serial nature to them. A CPU normally runs through a short sequence of memory locations for instructions, then jumps to another routine, jumps around for data, and so on. So CPUs depend on dynamic RAM for their primary memory, since there is little or no penalty for jumping all around in such memory.

There are many different kinds of RAM. DRAM is one such sort: Dynamic RAM. This refers to a sort of memory that stores data very efficiently, circuit-wise. A single transistor (an electronic switch) and a capacitor (a charge-storage device) store each "1" or "0". An alternate sort is called Static RAM, which usually uses six transistors to store each bit. The advantage of DRAM is that each bit can be very small, physically. The disadvantage is that the stored charge doesn't last very long, so it has to be "refreshed" periodically. All modern DRAM types have on-board electronics that make the refresh process fairly simple and efficient, but it is one additional bit of complexity.

There are various sorts of DRAM around: plain (asynchronous) DRAM, SDRAM (synchronous, meaning all interactions are synchronized by a clock signal), DDR (double data rate: data goes to/from the memory at twice the rate of the clock), and so on. These differences are significant to hardware designers, but not usually a big worry for end users, other than ensuring you buy the right kind of DRAM if you plan to upgrade your system.

6. Direct Memory Access (DMA)

A feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to proceed concurrently.

Without DMA, using programmed input/output (PIO) mode for communication with peripheral devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU initiates the transfer, does other operations while the transfer is in progress, and receives an interrupt from the DMA controller once the operation has completed. This is especially useful in real-time computing applications where not stalling behind concurrent operations is critical. Another related application area is various forms of stream processing, where it is essential to have data processing and transfer proceed in parallel in order to achieve sufficient throughput.
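A heavily hedged sketch of what "initiate the transfer, then take an interrupt on completion" looks like from the software side. The register block, its address, and the bit layout below belong to an imaginary memory-mapped DMA controller, not to any real device:

```c
#include <stdint.h>

/* Imaginary memory-mapped register block for a hypothetical DMA controller. */
struct dma_regs {
    volatile uint32_t src;      /* physical source address         */
    volatile uint32_t dst;      /* physical destination address    */
    volatile uint32_t length;   /* number of bytes to transfer     */
    volatile uint32_t control;  /* bit 0: start; bit 1: irq enable */
};

#define DMA_START      (1u << 0)
#define DMA_IRQ_ENABLE (1u << 1)

/* Invented controller address for the sake of the sketch. */
static struct dma_regs *dma = (struct dma_regs *)(uintptr_t)0xFFFF0000u;

/* Program the controller and return immediately; the CPU is now free
   to do other work while the transfer proceeds in the background. */
void dma_start_copy(uint32_t src, uint32_t dst, uint32_t len) {
    dma->src     = src;
    dma->dst     = dst;
    dma->length  = len;
    dma->control = DMA_START | DMA_IRQ_ENABLE;
}

/* Invoked by the interrupt system when the controller signals completion. */
void dma_complete_isr(void) {
    /* wake up whoever was waiting for the buffer, reuse the channel, etc. */
}
```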

2. What is the difference between a trap and an interrupt, and what are their uses?

An interrupt is an event in hardware that triggers the processor to jump from its current program counter to a specific point in the code. Interrupts are designed to be special events whose occurrence cannot be predicted precisely (or at all). The MSP, for example, has many different kinds of events that can trigger interrupts, and for each one the processor will send the execution to a unique, specific point in memory. Each interrupt is assigned a word-long segment at the upper end of memory; this is enough memory for a jump to the location in memory where the interrupt will actually be handled.

Interrupts in general can be divided into two kinds: maskable and non-maskable. A maskable interrupt is an interrupt whose trigger event is not always important, so the programmer can decide that the event should not cause the program to jump. A non-maskable interrupt (like the reset button) is so important that it should never be ignored; the processor will always jump to this interrupt when it happens. Often, maskable interrupts are turned off by default to simplify the default behavior of the device, and special control registers allow them to be turned on, globally or for specific sources.

Interrupts generally have a "priority": when two interrupts happen at the same time, the higher-priority interrupt will take precedence over the lower-priority one. Thus if a peripheral timer goes off at the same time as the reset button is pushed, the processor will ignore the peripheral timer because the reset is more important (higher priority).

A trap is usually initiated by the CPU hardware. Whenever the trap condition occurs (on arithmetic overflow, for example), the CPU stops what it's doing, saves the context, jumps to the appropriate trap routine, completes it, restores the context, and continues execution. For example, if overflow traps are enabled, adding two very large integers would cause the overflow bit to be set AND the overflow trap service routine to be initiated.
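On a Unix-like system, a trap taken in user code is reflected back to the program as a signal. A small sketch, assuming x86 Linux, where integer division by zero raises a hardware divide trap that the kernel delivers as SIGFPE:

```c
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* Runs when the CPU's divide trap is delivered to the process as SIGFPE. */
static void on_trap(int sig) {
    (void)sig;
    const char msg[] = "arithmetic trap caught (SIGFPE)\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);  /* async-signal-safe output */
    _exit(EXIT_FAILURE);  /* returning would re-execute the faulting divide */
}

int main(void) {
    signal(SIGFPE, on_trap);
    volatile int zero = 0;           /* volatile: keep the divide at run time */
    volatile int result = 42 / zero; /* hardware divide trap -> kernel -> SIGFPE */
    (void)result;
    return 0;
}
```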

3. Monitor Mode

Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.

1. Bootstrap Program

In computing, booting is a bootstrapping process that starts operating systems when the user turns on a computer system.
Most computer systems can only execute code found in memory (ROM or RAM); modern operating systems are mostly stored on hard disk drives, LiveCDs, and USB flash drives. Just after a computer has been turned on, it doesn't have an operating system in memory. The computer's hardware alone cannot perform complicated actions of the operating system, such as loading a program from disk on its own; so a seemingly irresolvable paradox is created: to load the operating system into memory, one appears to need to have an operating system already installed. The paradox is resolved by a small bootstrap program, held in ROM, that the hardware runs automatically at power-on: it knows just enough to locate the operating system kernel on disk, load it into memory, and start its execution.

Thursday, June 18, 2009

1. What's the difference between batch systems, multiprogramming systems, and time-sharing?

Batch processing
Batch processing is execution of a series of programs ("jobs") on a computer without human interaction.
Batch jobs are set up so they can be run to completion without human interaction, so all input data is preselected through scripts or command-line parameters. This is in contrast to "online" or interactive programs, which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into batches of files and are processed in batches by the program.

Multiprogram system
These terms often cause some confusion. Multi-programming is very common on modern computers. Put simply, multi-programming means running more than one program at a time. For example, I am typing the answer to your question in a word-processor program and will cut and paste the answer into my e-mail program to send it. As both these programs are running on my PC at once, this is multi-programming. Multi-access is more specialised. Perhaps the most common example you have come across is a bank's network of cash machines, all connected to a central computer. As you are typing in your PIN and withdrawing cash in one town, someone else may be doing the same at another branch of the bank. The computer system has to respond to both customers' requests at the same time. A system like this, which responds to more than one user at a time, is called a multi-access (or multi-user) system.

Time-sharing
Time-sharing is the sharing of a computing resource among many users by means of multitasking. Its introduction in the 1960s, and its emergence as the prominent model of computing in the 1970s, represented a major shift in the history of computing. By allowing a large number of users to interact simultaneously with a single computer, time-sharing dramatically lowered the cost of providing computing while at the same time making the computing experience much more interactive.

5. Differentiate symmetric multiprocessing and asymmetric multiprocessing.

Asymmetric multiprocessing, or ASMP, is a type of multiprocessing supported in DEC's VMS V.3 as well as a number of older systems, including TOPS-10 and OS-360. It varies greatly from the standard processing model that we see in personal computers today. Due to the complexity and unique nature of this architecture, it was not adopted by many vendors or programmers during its brief stint between 1970 and 1980.

Whereas a symmetric multiprocessor, or SMP, treats all of the processing elements in the system identically, an ASMP system assigns certain tasks only to certain processors. In particular, only one processor may be responsible for fielding all of the interrupts in the system, or perhaps even for performing all of the I/O in the system. This makes the design of the I/O system much simpler, although it tends to limit the ultimate performance of the system. Graphics cards, physics cards and cryptographic accelerators, which are subordinate to a CPU in modern computers, can be considered a form of asymmetric multiprocessing. SMP is extremely common in the modern computing world; when people refer to "multi-core" or "multiprocessing" they are most commonly referring to SMP.

1. Define the essential properties of the following types of OS:
a. Batch

Batch operating system. Some computer systems, especially some of the early ones, only did one thing at a time. They had a list of instructions to carry out, and these would be carried out one after the other. This is called a serial system. Sometimes, if there was a lot of work to be done, collections of these instructions would be given to the computer to work on overnight. Because the computer was working on batches of instructions, this type of operating system was called a batch operating system. Batch operating systems are good at churning through large numbers of repetitive jobs on large computers: jobs like working out the pay of each employee in a large firm, or processing all the questionnaire forms in a large survey.
b. Time sharing

TSOS stands for Time Sharing Operating System; it was an operating system for RCA (Radio Corporation of America) mainframes of the RCA Spectra 70 series. RCA was in the computer business until 1971, when it was sold to Sperry Corporation; Sperry offered TSOS, renaming it VS/9. In the mid-seventies, an enhanced version of TSOS was offered by the German company Siemens under the name BS2000. While Sperry (or rather Univac, after the company was renamed) discontinued VS/9 in the early 80s, BS2000, now called BS2000/OSD, is still offered by Fujitsu Siemens Computers and used by their mainframe customers, primarily in Europe. TSOS was the first operating system that supported virtual addressing of main storage. Beyond that, it provided a unified user interface for both time sharing and batch, which was a big advantage over IBM's OS/360 and its successors MVS, OS/390 and z/OS, as it simplified operation.
c. Real time

Real-time operating systems (RTOS) are used to control machinery, scientific instruments, and industrial systems. In general, the user does not have much control over the functions performed by the RTOS.
d. Network

A network operating system (NOS) is a computer operating system that is designed for network use.
Usually a NOS is a complete operating system with file, task and job management. However, with some earlier operating systems, it was a separate component that enhanced a basic, non-networking operating system by adding networking capabilities. Examples include Novell's NetWare and Artisoft's LANtastic.
A server-based network operating system provides networking support for multiple simultaneous users, each with the ability to access network resources, as well as security and other administrative functions.
Network operating systems, in the first sense, have existed for more than 35 years. In particular, UNIX was designed from the beginning to support networking, and all of its descendants (i.e., Unix-like operating systems), including Linux and Mac OS X, feature built-in networking support.
The Microsoft Windows operating systems did not initially support networking. Thus, Novell NetWare was introduced and became the first popular network operating system for personal computers. Windows for Workgroups and Windows 95 were Microsoft's first network operating system products.
Today, almost every consumer operating system qualifies as a NOS. This is in large part due to the popularity of the Internet and the consequent need to support the Internet protocol suite.
In a peer-to-peer network, such as a network of Microsoft Windows 98 or XP machines, in which each host can also be a server, the operating system might still be considered a network operating system, but it is more lightweight than a full-blown NOS.
e. Distributed

A distributed operating system manages a collection of independent, networked computers and presents them to users as a single coherent system. Computation and storage are spread across the machines, which cooperate by exchanging messages; this provides resource sharing, computation speed-up through load distribution, and higher reliability, since the failure of one node need not halt the rest.

f. Handheld

A handheld operating system is designed for devices such as PDAs and smartphones. Its essential properties follow from the hardware's constraints: limited memory, slower processors, small display screens, and dependence on battery power, so the system must have a small footprint and manage power carefully.

6. Differentiate client/server systems and peer-to-peer systems.
Peer-to-Peer
Peer-to-peer network operating systems allow users to share resources and files located on their computers and to access shared resources found on other computers. However, they do not have a file server or a centralized management source (See fig. 1). In a peer-to-peer network, all computers are considered equal; they all have the same abilities to use the resources available on the network. Peer-to-peer networks are designed primarily for small to medium local area networks. AppleShare and Windows for Workgroups are examples of programs that can function as peer-to-peer network operating systems.
Client/Server
Client/server network operating systems allow the network to centralize functions and applications in one or more dedicated file servers (See fig. 2). The file servers become the heart of the system, providing access to resources and providing security. Individual workstations (clients) have access to the resources available on the file servers. The network operating system provides the mechanism to integrate all the components of the network and allow multiple users to simultaneously share the same resources irrespective of physical location. Novell Netware and Windows 2000 Server are examples of client/server network operating systems.
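To make the client/server relationship concrete, here is a minimal TCP server sketch in C using POSIX sockets. The port number is arbitrary and error handling is reduced to early exits; clients anywhere on the network connect to this one central process, which answers each request in turn:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* Create a TCP socket and bind it to port 5000 on all interfaces. */
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv < 0) exit(EXIT_FAILURE);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) exit(EXIT_FAILURE);
    if (listen(srv, 8) < 0) exit(EXIT_FAILURE);

    /* Serve clients one at a time: the centralized server answers requests. */
    for (;;) {
        int client = accept(srv, NULL, NULL);
        if (client < 0) continue;
        const char msg[] = "hello from the file server\n";
        write(client, msg, sizeof msg - 1);
        close(client);
    }
}
```

Any client (for example, `telnet localhost 5000`) receives the greeting; the server then loops back to accept the next connection. In a peer-to-peer arrangement, by contrast, every machine would run both the connecting side and the accepting side of this exchange.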