Are you a candidate? Complete list of OS interview questions 👇
- What is the main purpose of an operating system?
- What are the different operating systems?
- What is a socket?
- What is a real-time system?
- What is kernel?
- What is monolithic kernel?
- What do you mean by a process?
- What are the different states of a process?
- What is the difference between micro kernel and macro kernel?
- What is the concept of reentrancy?
- What is the difference between process and program?
- What is the use of paging in operating system?
- What is the concept of demand paging?
- What is the advantage of a multiprocessor system?
- What is virtual memory?
- What is thrashing?
- What is a thread?
- What are the benefits of multithreaded programming?
- What is FCFS?
- What is SMP?
- What is deadlock? Explain.
- What are the different scheduling algorithms?
- What is Banker's algorithm?
- What is the difference between logical address space and physical address space?
- What is fragmentation?
- How many types of fragmentation occur in Operating System?
- What is spooling?
- What is the difference between internal commands and external commands?
- What is semaphore?
- What is a binary Semaphore?
- What is Belady's Anomaly?
- What is starvation in Operating System?
- What is aging in Operating System?
- What are overlays?
- What is Virtual Memory?
- What is Thrashing?
- What is a time-sharing system?
- What is SMP?
- What is asymmetric clustering?
- What is RR scheduling algorithm?
- What factors determine whether a detection algorithm must be utilized in a deadlock avoidance system?
- What is Direct Access Method?
- What is root partition?
- What are the primary functions of VFS?
- What is the purpose of an I/O status information?
- What is multitasking?
- Explain pros and cons of a command line interface?
- What is caching?
- What is an Assembler?
- What is GUI?
- What is preemptive multitasking?
- What is plumbing/piping?
- What is NOS?
- What is the Translation Lookaside Buffer (TLB)?
- What is cycle stealing?
- What is meant by arm-stickiness?
- What is page cannibalizing?
- What are the four layers that Windows NT have in order to achieve independence?
- Explain Booting the system and Bootstrap program in operating system.
- What is a command interpreter?
- What is a daemon?
- What do you mean by a zombie process?
- What do you know about a Pipe? When is it used?
- Explain PCB.
- What is context switching?
- What is EIDE?
- Explain the execution cycle for a von Neumann architecture.
- What is DLM?
- What is RAID and what are its different levels?
- Explain the different sections of a process.
- What is IPC?
- What do you know about mutex?
- What is the reader-writer lock?
- What is compaction?
- It is designed to make sure that a computer system performs well by managing its computational activities.
- It provides an environment for the development and execution of programs.
- Batched operating systems
- Distributed operating systems
- Timesharing operating systems
- Multi-programmed operating systems
- Real-time operating systems
A socket is used to make a connection between two applications. The endpoints of the connection are called sockets.
A real-time system is used when rigid time requirements have been placed on the operation of a processor. It has well-defined, fixed time constraints.
The kernel is the core and most important part of a computer operating system; it provides basic services for all other parts of the OS.
A monolithic kernel is a kernel in which all operating system code resides in a single executable image.
An executing program is known as a process. There are two types of processes:
- Operating System Processes
- User Processes
The different states of a process are:
- New Process
- Running Process
- Waiting Process
- Ready Process
- Terminated Process
Micro kernel: a micro kernel is a kernel that runs only the minimal, performance-critical services of the operating system in kernel space. In a micro kernel operating system, all other operations are performed in user space.
Macro kernel: a macro kernel is a combination of the micro and monolithic kernel designs.
It is a very useful memory-saving technique used in multi-programmed time-sharing systems. It allows multiple users to share a single copy of a program during the same period.
It has two key aspects:
- The program code cannot modify itself.
- The local data for each user process must be stored separately.
A program while running or executing is known as a process.
Paging is used to solve the external fragmentation problem in an operating system. This technique ensures that the data a process needs is available as quickly as possible.
Demand paging specifies that pages are brought into memory only when they are needed; if an area of memory is not currently being used, it can be swapped to disk to make room for an application's needs.
As the number of processors increases, throughput increases considerably. Multiprocessor systems are also cost-effective because the processors can share resources, and overall reliability increases.
Virtual memory is a very useful memory management technique that enables the execution of processes that are not completely in physical memory. It is especially useful when an executing program cannot fit in physical memory.
Thrashing is a phenomenon in virtual memory scheme when the processor spends most of its time in swapping pages, rather than executing instructions.
A thread is a basic unit of CPU utilization. It consists of a thread ID, program counter, register set and a stack.
It makes the system more responsive and enables resource sharing. It allows the use of multiprocessor architectures, and it is more economical, which is why it is preferred.
FCFS stands for First Come, First Served. It is a type of scheduling algorithm. In this scheme, if a process requests the CPU first, it is allocated to the CPU first. Its implementation is managed by a FIFO queue.
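As a quick sketch, FCFS waiting times can be computed by accumulating the bursts of the jobs ahead in the queue (the burst values below are an assumed textbook-style example, not from this text):

```python
# FCFS: processes are served strictly in arrival order (FIFO queue).
burst_times = [24, 3, 3]          # assumed CPU bursts of P1, P2, P3

waiting = [0] * len(burst_times)  # the first process waits 0
for i in range(1, len(burst_times)):
    # each process waits for all bursts ahead of it to finish
    waiting[i] = waiting[i - 1] + burst_times[i - 1]

avg_wait = sum(waiting) / len(waiting)
print(waiting, avg_wait)          # [0, 24, 27] 17.0
```

Note how a long first burst inflates everyone's wait (the "convoy effect"), a known weakness of FCFS.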
SMP stands for Symmetric MultiProcessing. It is the most common type of multiple processor system. In SMP, each processor runs an identical copy of the operating system, and these copies communicate with one another when required.
Deadlock is a specific situation or condition where two processes are each waiting for the other to complete before they can proceed. Because neither ever does, both processes hang.
- First-Come, First-Served (FCFS) Scheduling.
- Shortest-Job-Next (SJN) Scheduling.
- Priority Scheduling.
- Shortest Remaining Time.
- Round Robin(RR) Scheduling.
- Multiple-Level Queues Scheduling.
Banker's algorithm is used to avoid deadlock; it is one of the deadlock-avoidance methods. It is named after the banking system, in which a bank never allocates available cash in such a way that it can no longer satisfy the requirements of all of its customers.
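A minimal sketch of the safety check at the heart of Banker's algorithm, using an assumed textbook-style instance with five processes and three resource types:

```python
# Banker's safety check: can all processes finish in some order?
available  = [3, 3, 2]                                     # free instances per resource
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]  # currently held
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]  # maximum claims
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

work, finish, order = available[:], [False] * 5, []
progress = True
while progress:
    progress = False
    for i in range(5):
        # a process can finish if its remaining need fits in what is free
        if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
            work = [w + a for w, a in zip(work, allocation[i])]  # it releases its allocation
            finish[i], progress = True, True
            order.append(i)

print("safe" if all(finish) else "unsafe", order)
```

If every process can finish, `order` is a safe sequence; a request is granted only when the resulting state would still pass this check.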
Logical address space is the set of addresses generated by the CPU, while physical address space is the set of addresses seen by the memory unit.
Fragmentation is a phenomenon of memory wastage. It reduces the capacity and performance because space is used inefficiently.
- Internal fragmentation: it occurs in systems that have fixed-size allocation units.
- External fragmentation: it occurs in systems that have variable-size allocation units.
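A small sketch of how internal fragmentation arises with fixed-size allocation units (the 4 KB unit and 10,000-byte request are assumed example values):

```python
# Internal fragmentation: space wasted inside the last fixed-size unit.
unit = 4096                            # assumed allocation unit, 4 KB
request = 10_000                       # bytes actually needed

units_used = -(-request // unit)       # ceiling division: 3 units allocated
wasted = units_used * unit - request   # unused bytes inside the last unit
print(units_used, wasted)              # 3 units, 2288 bytes wasted
```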
Spooling is a process in which data is temporarily gathered to be used and executed by a device, program or system. It is most associated with printing: when different applications send output to the printer at the same time, spooling keeps all these jobs in a disk file and queues them to the printer accordingly.
Internal commands are the built-in part of the operating system while external commands are the separate file programs that are stored in a separate folder or directory.
Semaphore is a protected variable or abstract data type that is used to lock the resource being used. The value of the semaphore indicates the status of a common resource.
There are two types of semaphore:
- Binary semaphores
- Counting semaphores
A binary semaphore takes only 0 and 1 as values and is used to implement mutual exclusion and to synchronize concurrent processes.
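A sketch using Python's `threading.Semaphore` initialized to 1 as a binary semaphore guarding a critical section:

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: value is 0 or 1
shared = []

def worker(name):
    sem.acquire()              # value 1 -> 0; other threads now block here
    try:
        shared.append(name)    # critical section
    finally:
        sem.release()          # value 0 -> 1; wakes one waiting thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(shared))          # all 4 entries present, none lost to a race
```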
Belady's Anomaly is also called the FIFO anomaly. Usually, increasing the number of frames allocated to a process's virtual memory speeds up execution, because fewer page faults occur. Sometimes the reverse happens: execution time increases even when more frames are allocated to the process. This is Belady's Anomaly, and it occurs for certain page reference patterns.
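The anomaly can be demonstrated with a short FIFO page-replacement simulation; the reference string below is the classic example where 4 frames cause more faults than 3:

```python
# FIFO page replacement: count page faults for a given number of frames.
def fifo_faults(refs, nframes):
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the oldest page (FIFO order)
            frames.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]       # classic reference string
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 faults vs 10 faults
```

With 3 frames this string causes 9 faults, but with 4 frames it causes 10: more memory, worse behavior.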
Starvation is a resource management problem in which a waiting process does not get the resources it needs for a long time because the resources are being allocated to other processes.
Aging is a technique used to avoid starvation in a resource scheduling system.
Overlays allow a process to be larger than the amount of memory allocated to it. They ensure that only the instructions and data that are important at any given time are kept in memory.
Virtual memory creates an illusion that each user has one or more contiguous address spaces, each beginning at address zero. The sizes of such virtual address spaces are generally very large. The idea of virtual memory is to use disk space to extend the RAM. Running processes don’t need to care whether the memory comes from RAM or disk. The illusion of such a large amount of memory is created by subdividing the virtual memory into smaller pieces, which can be loaded into physical memory whenever they are needed by a process.
Thrashing is a situation when the performance of a computer degrades or collapses. Thrashing occurs when a system spends more time processing page faults than executing transactions. While processing page faults is necessary in order to realize the benefits of virtual memory, thrashing has a negative effect on the system. As the page fault rate increases, more transactions need processing from the paging device. The queue at the paging device grows, resulting in increased service time for a page fault.
In a Time-sharing system, the CPU executes multiple jobs by switching among them, also known as multitasking. This process happens so fast that users can interact with each program while it is running.
SMP is a short form of Symmetric Multi-Processing. It is the most common type of multiple-processor systems. In this system, each processor runs an identical copy of the operating system, and these copies communicate with one another as needed.
In asymmetric clustering, a machine is in a state known as hot standby mode, where it does nothing but monitor the active server. That machine takes over the active server’s role should the server fail.
The RR (round-robin) scheduling algorithm is aimed primarily at time-sharing systems. A circular queue is set up in such a way that the CPU scheduler goes around the queue, allocating the CPU to each process for a time interval of typically 10 to 100 milliseconds.
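A sketch of round-robin dispatching, assuming all jobs arrive at time 0 and using an example quantum of 4 (values assumed for illustration, not taken from the text):

```python
from collections import deque

def rr_completion(bursts, quantum):
    """Simulate round robin; returns {pid: completion time}."""
    remaining = dict(enumerate(bursts))
    queue, t, done = deque(remaining), 0, {}
    while queue:
        pid = queue.popleft()
        slice_ = min(quantum, remaining[pid])  # run for a quantum or until done
        t += slice_
        remaining[pid] -= slice_
        if remaining[pid] == 0:
            done[pid] = t                      # finished: record completion time
        else:
            queue.append(pid)                  # preempted: back of the queue
    return done

print(rr_completion([24, 3, 3], quantum=4))    # {1: 7, 2: 10, 0: 30}
```

The short jobs finish quickly (at times 7 and 10) instead of waiting behind the long one, which is the point of round robin for interactive workloads.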
One is that it depends on how often a deadlock is likely to occur under the implementation of this algorithm. The other has to do with how many processes will be affected by deadlock when this algorithm is applied.
Direct Access method is based on a disk model of a file, such that it is viewed as a numbered sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct access is advantageous when accessing large amounts of information.
Root partition is where the operating system kernel is located. It also contains other potentially important system files that are mounted during boot time.
VFS, or Virtual File System, separates generic file system operations from their implementation by defining a clean VFS interface. It is based on a file-representation structure known as a vnode, which contains a numerical designator needed to support network file systems.
I/O status information provides information about which I/O devices are to be allocated for a particular process. It also shows which files are opened, and other I/O device state.
Multitasking is the process within an operating system that allows the user to run several applications at the same time. However, only one application is active at a time for user interaction, although some applications can run “behind the scenes”.
A command line interface allows the user to type in commands that can immediately provide results. Many seasoned computer users are well accustomed to using the command line because they find it quicker and simpler.
However, the main problem with a command line interface is that users have to be familiar with the commands, including the switches and parameters that come with them. This is a downside for people who are not fond of memorizing commands.
Caching is the process of utilizing a region of fast memory for a limited set of data and processes. Cache memory is usually much more efficient because of its high access speed.
An assembler acts as a translator for low-level language. Assembly codes written using mnemonic commands are translated by the Assembler into machine language.
GUI is short for Graphical User Interface. It provides users with an interface wherein actions can be performed by interacting with icons and graphical symbols. People find it easier to interact with the computer when in a GUI especially when using the mouse. Instead of having to remember and type commands, users click on buttons to perform a process.
Preemptive multitasking allows an operating system to switch between software programs. This, in turn, allows multiple programs to run without necessarily taking complete control over the processor and resulting in system crashes.
It is the process of using the output of one program as an input to another. For example, instead of sending the listing of a folder or drive to the main screen, it can be piped and sent to a file, or sent to the printer to produce a hard copy.
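A sketch of piping one program's output into another from Python, assuming the POSIX `printf` and `sort` utilities are available on the system:

```python
import subprocess

# Equivalent of `printf "banana\npear\napple\n" | sort` in a POSIX shell.
p1 = subprocess.Popen(["printf", "banana\npear\napple\n"],
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen(["sort"], stdin=p1.stdout,
                      stdout=subprocess.PIPE, text=True)
p1.stdout.close()          # let sort see EOF when printf exits
out, _ = p2.communicate()
print(out)                 # apple, banana, pear - one per line
```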
NOS is short for Network Operating System. It is specialized software that allows a computer to communicate with other devices over a network, including file/folder sharing.
In a cached system, the base addresses of the last few referenced pages are maintained in registers called the TLB, which aids faster lookup. The TLB contains those page-table entries that have been used most recently. Normally, each virtual memory reference causes two physical memory accesses: one to fetch the appropriate page-table entry, and one to fetch the desired data. With a TLB in between, this is reduced to just one physical memory access on a TLB hit.
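The TLB's effect can be quantified with the usual weighted-average effective access time formula (the timings and hit ratio below are assumed example values):

```python
# Effective access time (EAT) with a TLB, using assumed example numbers.
tlb_time, mem_time, hit_ratio = 20, 100, 0.90   # nanoseconds, 90% hit rate

# Hit:  TLB lookup + one memory access for the data.
# Miss: TLB lookup + page-table access + data access (two memory accesses).
eat = (hit_ratio * (tlb_time + mem_time)
       + (1 - hit_ratio) * (tlb_time + 2 * mem_time))
print(eat)   # 130.0 ns, versus 220 ns if every reference missed the TLB
```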
We encounter cycle stealing in the context of Direct Memory Access (DMA). Either the DMA controller can use the data bus when the CPU does not need it, or it may force the CPU to temporarily suspend operation. The latter technique is called cycle stealing. Note that cycle stealing can be done only at specific break points in an instruction cycle.
If one or a few processes have a high access rate to data on one track of a storage disk, they may monopolize the device by repeated requests to that track. This generally happens with the most common device scheduling algorithms (LIFO, SSTF, C-SCAN, etc.). High-density multisurface disks are more likely to be affected by this than low-density ones.
Page swapping or page replacements are called page cannibalizing.
- Hardware abstraction layer (HAL)
- Kernel
- Subsystems
- System Services
The procedure of starting a computer by loading the kernel is known as booting the system.
When a computer is first turned on or rebooted, it needs an initial program to run. This initial program is known as the bootstrap program. It is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM). The bootstrap program locates the kernel, loads it into main memory, and starts its execution.
It is a program that interprets commands input through the keyboard or a command batch file. It helps the user interact with the OS and trigger the required system programs or execute user applications.
Daemon (Disk And Execution MONitor) is a process that runs in the background without user interaction. Daemons usually start at boot time and terminate when the system is shut down.
- These are dead processes which are not yet removed from the process table.
- It happens when a child process has terminated but its parent has not yet read its exit status. The terminated child stays as a zombie until the parent reaps it.
- It is an IPC mechanism used for one-way communication between two related processes.
- A single process doesn't need to use a pipe. It is used when two processes wish to communicate one-way.
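A minimal sketch of one-way communication over a pipe between a parent and a forked child (POSIX-only; uses `os.fork`, so it will not run on Windows):

```python
import os

# One-way pipe: child writes, parent reads.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                          # child process
    os.close(r)                       # child only writes
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                                 # parent process
    os.close(w)                       # parent only reads
    msg = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)                # reap the child (avoids a zombie)
    print(msg.decode())
```

Closing the unused ends matters: the parent's read only sees EOF once every write end of the pipe is closed.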
- PCB, the process control block, is also called the task control block.
- It contains information about the process state, such as new, ready, running, waiting and halted.
- It also includes information regarding the process priority and pointers to scheduling queues.
- Its counter indicates the address of the next instruction to be executed for the process.
- It basically serves as the storage for any information that may vary from process to process.
- It is the process of switching the CPU from one process to another.
- This requires saving the state of the old process and loading the saved state of the new process.
- The context of the process is represented in the process control block.
- During switching the system does no useful work.
- How the address space is preserved and what amount of work is needed depends on the memory management.
- EIDE stands for Enhanced Integrated Drive Electronics; it is a bus standard.
- Input/output devices are attached to the computer by a set of wires called the bus.
- Data transfers on a bus are carried out by electronic processes called controllers.
- The host controller sends messages to the device controller, and the device controller performs the operations.
- These device controllers have built-in caches so that data transfer occurs at a faster speed.
- Initially the system fetches the instruction and stores it in the instruction register.
- Instruction is then decoded and may cause operands to be fetched from memory.
- After execution the result is stored in the memory.
- Here the memory unit sees only the memory addresses irrespective of how they are generated.
- Memory unit is also unaware of what addresses are for.
- DLM stands for Distributed Lock Manager.
- In clustered systems, the distributed file system must provide access control and file locking to coordinate file sharing.
- This ensures that no conflicting operations occur in the system.
- Because these distributed file systems are not general purpose, they require locking support of this kind.
RAID stands for Redundant Array of Independent Disks. To improve overall performance the data is stored redundantly and used whenever required. Following are the different RAID levels:
- RAID 0 – Striped disk array without fault tolerance
- RAID 1 – Mirroring and duplexing
- RAID 2 – Memory-style error-correcting codes
- RAID 3 – Bit-interleaved parity
- RAID 4 – Block-interleaved parity
- RAID 5 – Block-interleaved distributed parity
- RAID 6 – P+Q redundancy
There are mainly four sections in a process. They are as below:
- Stack: contains local variables and return addresses
- Heap: dynamically allocated memory via malloc, calloc, realloc
- Data: contains global and static variables
- Code or text: contains the code, the program counter and the contents of the processor’s registers
IPC stands for Inter-Process Communication. It is a mechanism by which various processes can communicate with each other under the control of the operating system.
Mutex is an abbreviation for mutual exclusion. It is a user-space program object that lets multiple threads access the same resource, but not simultaneously. The sole purpose of a mutex is to lock a resource to one thread so that other threads cannot use that resource until the first thread finishes.
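A sketch of a mutex in Python using `threading.Lock` to serialize updates to a shared counter (the thread and iteration counts are assumed example values):

```python
import threading

counter = 0
lock = threading.Lock()          # the mutex

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # only one thread may update at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # always 40000 when the lock is held
```

Without the lock, the read-modify-write of `counter += 1` can interleave across threads and lose updates.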
A reader-writer lock is used to protect data integrity. It allows concurrent access for read operations, meaning multiple threads can read the data simultaneously. It does not allow concurrent writes, however: if one thread wants to modify the data by writing, all other threads are blocked from reading or writing until it finishes.
The free memory of the system gets split into smaller pieces as processes are loaded into and removed from memory. Compaction gathers these small loose pieces of memory into one contiguous chunk so that more memory can be allocated to other processes.