For roles such as Linux administrators or DevOps engineers, a practical understanding of Linux commands is a must-have. Evaluating a candidate's command-line expertise can be challenging, but it’s a task that needs to be done right.
This blog post provides a carefully curated list of Linux command interview questions, catering to various experience levels from freshers to experienced professionals. We also included a set of MCQs for a quick assessment.
Use these questions to filter candidates and ensure your next hire is a Linux expert. Alternatively, streamline your process further by using Adaface's Linux online test before the interview.
Linux Commands interview questions for freshers
1. What does the `pwd` command do, and why is it useful?
The `pwd` command stands for "print working directory". It displays the absolute path of the current directory you are in. This path starts from the root directory (`/`).

It's useful because it helps you keep track of your location within the file system, especially when navigating through multiple directories. When you are working in a terminal, it's easy to lose track of where you are. `pwd` instantly tells you exactly which directory you're currently in.
2. Can you explain what a directory is in Linux, and how it's different from a file?
In Linux, a directory is a special type of file that contains a list of other files and directories. Think of it as a container that organizes your file system. It doesn't hold data in the same way a regular file does; instead, it holds metadata about the files and directories it contains, such as their names and locations.
The main difference is that a regular file contains data (text, images, executable code, etc.), while a directory contains organizational information about files and other directories. A file can be opened and its contents read or modified, while a directory is used to navigate the file system structure and locate files.
3. If you're lost in the terminal, what command can you use to find your way back home, metaphorically speaking?
The command `pwd` (print working directory) helps you find your current location in the file system, like knowing your street address. Then, `cd ~` or just `cd` will reliably take you back to your home directory, no matter where you are. Think of `cd ~` as your 'go home' button. It's the quickest way to get back to familiar territory when you're lost in the terminal's file system.
4. What does the `ls` command show you, and how can you see hidden files with it?
The `ls` command lists the files and directories in the current directory (or the directory specified as an argument). By default, it does not show hidden files and directories.

To view hidden files and directories, use the `-a` or `-al` flags. `ls -a` shows all files, including those that begin with a `.` (dot), which are typically hidden. `ls -al` shows all files in a long listing format, providing details like permissions, owner, size, and modification date, as well as including hidden files.
5. How would you create a new folder named 'MyStuff' using the command line?
To create a new folder named 'MyStuff' using the command line, you would use the `mkdir` command followed by the folder name. Here's how it looks:

```
mkdir MyStuff
```
This command will create a new directory named 'MyStuff' in your current working directory.
6. If you accidentally create an empty file, what command would you use to remove it?
To remove an empty file in a Unix-like environment (Linux, macOS), I would use the `rm` command. Specifically, I would type `rm filename` in the terminal, replacing `filename` with the actual name of the empty file. This command permanently deletes the file.
7. What command lets you see the contents of a text file directly in the terminal?
Several commands allow you to view the contents of a text file directly in the terminal. The most common and straightforward is `cat`. For example, `cat filename.txt` will display the entire contents of `filename.txt` in the terminal. Other useful commands include `less`, which is ideal for larger files as it allows you to navigate the file page by page, and `head` and `tail`, which display the beginning and end of a file respectively. `head -n 10 filename.txt` will show the first 10 lines; `tail -n 10 filename.txt` shows the last 10 lines.
8. Imagine you want to copy a file. What command do you use, and what two things do you need to tell the command?
To copy a file, you would typically use the `cp` command in a Unix-like environment (Linux, macOS) or the `copy` command in Windows.

To use either of these commands, you need to specify two essential pieces of information:

- The source file: the file you intend to copy.
- The destination: this can be a new file name, or a directory in which to place the copy.

For example, in Linux/macOS:

```
cp source_file.txt destination_file.txt
cp source_file.txt /path/to/destination/directory/
```

And in Windows:

```
copy source_file.txt destination_file.txt
copy source_file.txt C:\path\to\destination\directory\
```
9. How can you move a file from one folder to another using the command line?
You can move a file from one folder to another using the `mv` command. The basic syntax is `mv [source_file] [destination_folder]`. For example, to move a file named `my_file.txt` from the current directory to a folder named `documents`, you would use the command `mv my_file.txt documents/`.

If you want to rename the file during the move, you can specify the new name in the destination. For example, `mv my_file.txt documents/new_file_name.txt` will move `my_file.txt` to the `documents` folder and rename it to `new_file_name.txt`.
10. What's the difference between the `>` and `>>` operators when redirecting output?
The `>` operator redirects output to a file, overwriting the file if it already exists. The `>>` operator, on the other hand, appends output to a file. If the file doesn't exist, both operators will create it.

For example:

- `echo "Hello" > file.txt` will create or overwrite `file.txt` with "Hello".
- `echo "World" >> file.txt` will append "World" to `file.txt`, resulting in "Hello\nWorld" if it was the second command run.
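A small sketch you can run to verify the behavior (the file name `notes.log` is arbitrary):

```shell
echo "Hello" > notes.log     # '>' creates or truncates the file
echo "World" >> notes.log    # '>>' appends a second line
cat notes.log                # two lines: Hello, then World

echo "Again" > notes.log     # '>' again: previous contents are gone
cat notes.log                # one line: Again
```

The key risk to remember is that `>` silently destroys an existing file's contents, so `>>` is the safer default when adding to logs.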
11. How can you display the manual page for the 'ls' command?
You can display the manual page for the `ls` command using the `man` command in the terminal. Simply type `man ls` and press Enter. This will open the manual page for the `ls` command, providing detailed information about its usage, options, and behavior.

Alternatively, you can also use the `info` command, if available on your system. Typing `info ls` might give you a more detailed explanation than `man ls`.
12. What is the purpose of the 'sudo' command, and when should you use it?
The `sudo` command allows a permitted user to execute a command as the superuser (root) or another user, as specified in the `/etc/sudoers` file. It's primarily used to perform administrative tasks that require elevated privileges.

You should use `sudo` when:

- You need to install software system-wide.
- You need to modify system configuration files.
- You need to start or stop system services.
- Any operation requires root privileges (e.g., `chmod` or `chown` on system files).
13. How do you change the permissions of a file to make it executable?
To change the permissions of a file to make it executable, you can use the `chmod` command in a Unix-like operating system (Linux, macOS, etc.).

There are two primary ways to use `chmod`:

- Symbolic Mode: `chmod +x <filename>` adds execute permission to the file for the user, group, and others, depending on the system's configuration. You can be more specific by using `chmod u+x <filename>` (user), `chmod g+x <filename>` (group), or `chmod o+x <filename>` (others).
- Numeric Mode: `chmod 755 <filename>` sets the permissions numerically. 755 means read, write, and execute for the owner, and read and execute for the group and others. The first digit represents the owner, the second the group, and the third the world. Each digit is the sum of the permissions: read (4), write (2), and execute (1). For example, 7 (4+2+1) means read, write, and execute.
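A minimal sketch of both modes (the script name `hello.sh` is made up for illustration):

```shell
# Create a tiny script, then toggle its execute bit
printf '#!/bin/sh\necho hi\n' > hello.sh

chmod +x hello.sh            # symbolic mode: add execute permission
test -x hello.sh && echo "executable now"

chmod 644 hello.sh           # numeric mode: rw- for owner, r-- for group/others
test -x hello.sh || echo "execute bit removed"
```

Checking with `ls -l hello.sh` before and after shows the permission string change (e.g., `-rwxr-xr-x` versus `-rw-r--r--`).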
14. What does the command `echo 'Hello, World!'` do?
The command `echo 'Hello, World!'` prints the string 'Hello, World!' to the standard output (usually your terminal screen).

In simpler terms, `echo` is a command that displays text. The single quotes ensure that the text within is treated literally as a string to be output.
15. Explain what a pipe (|) does in the command line.
In the command line, a pipe (`|`) is a form of redirection that is used to send the output of one command to the input of another command. It allows you to chain commands together, creating powerful workflows. Essentially, the standard output (stdout) of the command on the left side of the pipe becomes the standard input (stdin) of the command on the right side.

For example, `ls -l | grep "myfile"` would first list all files and directories in the current directory using `ls -l`, and then filter that list to only show lines that contain the word "myfile" using `grep`. This avoids the need to create an intermediate file to store the output of `ls -l` before filtering it with `grep`.
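A self-contained sketch of the same idea, using invented sample data so it runs anywhere:

```shell
# The output of printf becomes the input of grep
printf 'apple\nbanana\ncherry\n' | grep 'an'     # prints: banana

# Pipes can be chained; here grep -c counts the matching lines
printf 'apple\nbanana\ncherry\n' | grep -c 'a'   # prints: 2
```

No temporary file is ever written; the data flows directly from one process to the next.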
16. How do you search for a specific word inside a file using the command line?
To search for a specific word inside a file using the command line, you can use the `grep` command. The basic syntax is `grep "word" filename`. For example, to search for the word "example" in the file "my_file.txt", you would use the command `grep "example" my_file.txt`.

The `grep` command will print each line that contains the specified word. You can also use options like `-i` for case-insensitive search (e.g., `grep -i "example" my_file.txt`) and `-n` to display the line number along with the matching line (e.g., `grep -n "example" my_file.txt`).
17. What command shows you a list of currently running processes?
The command `ps` shows a list of currently running processes. A commonly used variation is `ps aux`, which provides a more detailed output including all users and memory/CPU usage information. Alternatively, `top` provides a dynamic real-time view of running processes, sorted by CPU usage by default. `htop` is an interactive and enhanced version of `top`. Also, `systemctl status` can be used to show the status of systemd services and their processes.
18. How can you terminate a process if you know its process ID (PID)?
You can terminate a process given its PID using the `kill` command in Unix-like systems (Linux, macOS, etc.). The basic syntax is `kill <PID>`. By default, `kill` sends the `SIGTERM` signal (signal 15), which politely asks the process to terminate. Most processes will gracefully exit upon receiving this signal.

If a process doesn't terminate after using `kill <PID>`, you can forcefully terminate it using the `SIGKILL` signal (signal 9). The command would be `kill -9 <PID>`. However, it is generally better to avoid `SIGKILL` unless absolutely necessary, as it doesn't allow the process to clean up resources properly, which can sometimes lead to data corruption or other issues. Always try the standard `kill <PID>` first.
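A sketch you can try safely: start a throwaway `sleep` in the background, then terminate it by PID.

```shell
sleep 300 &                 # start a harmless background process
pid=$!                      # $! holds the PID of the last background job

kill "$pid"                 # send SIGTERM (signal 15)
wait "$pid" 2>/dev/null     # reap the process so it doesn't linger

# kill -0 sends no signal; it just checks whether the PID still exists
kill -0 "$pid" 2>/dev/null || echo "process is gone"
```

`kill -0 <PID>` is a handy idiom for checking liveness without affecting the process.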
19. What is the purpose of the 'tar' command, and how is it used for archiving files?
The `tar` (tape archive) command is used to archive multiple files and directories into a single archive file, often referred to as a tarball. It's primarily for bundling files together for easier storage, distribution, or backup.

`tar` itself doesn't compress the archive; it only creates a single file containing all the specified files and directories. However, it's commonly used in conjunction with compression tools like `gzip` (`.tar.gz`) or `bzip2` (`.tar.bz2`) to reduce the file size. For example:

- `tar -cvf archive.tar files*` (creates an archive)
- `tar -czvf archive.tar.gz files*` (creates a compressed archive using gzip)
- `tar -xvf archive.tar` (extracts from an archive)
- `tar -tzf archive.tar.gz` (lists the contents of a compressed archive)
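A short end-to-end sketch (the `project` directory and its files are invented):

```shell
# Build a small directory to archive
mkdir -p project
echo "data" > project/a.txt
echo "more" > project/b.txt

tar -czf project.tar.gz project   # create a gzip-compressed tarball
tar -tzf project.tar.gz           # list its contents without extracting
```

Listing with `-t` before extracting is a good habit: it shows whether the tarball unpacks into its own directory or scatters files into the current one.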
20. How can you extract files from a `.tar.gz` archive?
To extract files from a `.tar.gz` archive, you can use the `tar` command with the following options:

```
tar -xvzf archive_name.tar.gz
```

Where:

- `-x` or `--extract`: extract files from the archive.
- `-v` or `--verbose`: list the files being extracted (verbose mode).
- `-z` or `--gzip`: decompress the archive using gzip.
- `-f` or `--file`: specify the archive file name.
21. What command displays disk space usage?
The command to display disk space usage is `df`. It shows the amount of disk space used and available on file systems.

For a human-readable format (showing sizes in KB, MB, GB, etc.), use `df -h`. Another command, `du`, is used to estimate file space usage for specific files and directories.
22. How would you find all files in a directory (and its subdirectories) that end with '.txt'?
To find all files ending with '.txt' in a directory and its subdirectories, you can use the `find` command in Unix-like systems or PowerShell in Windows.

Using `find` (Unix-like systems):

```
find /path/to/directory -type f -name "*.txt"
```

Replace `/path/to/directory` with the actual path.

Using PowerShell (Windows):

```
Get-ChildItem -Path "C:\path\to\directory" -Recurse -Filter "*.txt"
```

Replace `C:\path\to\directory` with the actual path.
23. Explain the difference between relative and absolute paths.
Relative paths are defined relative to the current working directory. They specify the location of a file or directory starting from where you are currently located in the file system. For example, if you're in `/home/user/documents` and want to access `/home/user/documents/reports/annual.pdf`, a relative path would be `reports/annual.pdf`.

Absolute paths, on the other hand, provide the complete path from the root directory of the file system. They always start with the root directory (usually `/` on Unix-like systems or a drive letter like `C:\` on Windows). So, the absolute path to the same file would be `/home/user/documents/reports/annual.pdf`. Absolute paths are unambiguous and always point to the same location, regardless of the current working directory.
24. What is the purpose of the `chmod` command?
The `chmod` command is used to change the permissions of files or directories in Unix-like operating systems (like Linux and macOS). It controls who can read, write, and execute a file. These permissions apply to the owner of the file, the group associated with the file, and all other users on the system.

`chmod` can use either symbolic notation (e.g., `chmod u+x file.txt` to add execute permission for the owner) or octal notation (e.g., `chmod 755 file.txt` to set read, write, and execute for the owner, and read and execute for the group and others).
25. How do you display the last 10 lines of a file using the command line?
To display the last 10 lines of a file using the command line, you can use the `tail` command. Specifically, you would use the command `tail -n 10 filename`, where `filename` is the name of the file you want to inspect.

Alternatively, you can use `tail -10 filename`. Both commands achieve the same result: displaying the last 10 lines.
Linux Commands interview questions for juniors
1. What does the `pwd` command do? Can you explain it like I'm five?
The `pwd` command is like asking your computer 'Where am I right now?'. Imagine your computer's files are like a big house with lots of rooms (folders). `pwd` tells you which room you're standing in.

So, if you type `pwd` and the computer says `/home/user/documents`, it means you're in the 'documents' room, which is inside the 'user' room, which is inside the 'home' room.
2. If you type `ls`, what happens? What kind of things do you see?
When you type `ls` in a Unix-like operating system (like Linux or macOS) and press Enter, the shell interprets the command and executes the `ls` program. `ls` stands for "list" and its primary function is to display a list of the files and directories within the current working directory.

The output typically shows the names of the files and directories. By default, it will list the contents in alphabetical order. Additional information can be shown using various options with the command, such as:

- `-l`: shows a long listing format, including permissions, number of links, owner, group, size, modification date and time, and name.
- `-a`: shows all files, including hidden files (those starting with a dot `.`).
- `-t`: sorts the listing by modification time, newest first.
- `-R`: recursively lists subdirectories.
- `-h`: displays file sizes in a human-readable format (e.g., 1K, 234M, 2G).
3. What's the difference between `ls` and `ls -l`? What extra information do you get?
`ls` lists the names of files and directories in the current directory. `ls -l` provides a long listing which includes significantly more detail.

The extra information provided by `ls -l` includes:
- File Permissions: Read, write, and execute permissions for the owner, group, and others.
- Number of Hard Links: The number of hard links to the file.
- Owner: The username of the file's owner.
- Group: The group name of the file's group.
- File Size: The size of the file in bytes.
- Last Modified Time: The date and time when the file was last modified.
4. How would you make a new folder called 'my_stuff'?
To create a new folder named 'my_stuff', you would use the `mkdir` command in the terminal. The command is simple:

```
mkdir my_stuff
```

This command will create a new directory named 'my_stuff' in the current working directory. You can then navigate into this folder using the `cd my_stuff` command.
5. How do you go *into* the 'my_stuff' folder you just made?
To navigate into the 'my_stuff' directory, you would use the `cd` command in the terminal. `cd` stands for 'change directory'.

The specific command would be:

```
cd my_stuff
```

This command will change your current working directory to 'my_stuff'.
6. You're lost in the terminal! How do you go back to your 'home' folder?
To navigate back to your home directory in the terminal, you can use the `cd` command without any arguments. Simply type `cd` and press Enter. Alternatively, you can also use `cd ~`, where `~` is a shortcut that represents your home directory path.

Both commands will return you to the default home directory for your user account. For example:

```
cd
cd ~
```
7. How can you create an empty file called 'notes.txt'?
You can create an empty file named `notes.txt` using several methods depending on your operating system.

- Command Line (Linux/macOS): Use the `touch` command: `touch notes.txt`. This command updates the file's timestamp, but if the file doesn't exist, it creates an empty one.
- Python: `open('notes.txt', 'w').close()` opens the file in write mode (`'w'`), which creates the file if it doesn't exist, and then immediately closes it, resulting in an empty file.
- Windows Command Prompt: Use `type nul > notes.txt`. This redirects the null output to create the file.
8. What does the command `cat notes.txt` do?
The command `cat notes.txt` displays the contents of the file named `notes.txt` to the standard output (usually your terminal screen). `cat` is short for 'concatenate', and while it can be used to join multiple files, its simplest use is to display a single file's content. If the file doesn't exist or the user doesn't have permission to read it, an error message will be displayed.
9. If you have a file called 'secrets.txt', how do you see what's inside?
To view the contents of a file named `secrets.txt`, you can use several command-line tools. The most common is `cat secrets.txt`, which will print the entire file to your terminal. Alternatively, you can use `less secrets.txt` to view the file page by page, allowing you to navigate through larger files more easily. `head secrets.txt` will show you the first few lines, and `tail secrets.txt` will show you the last few. These tools are available in most Unix-like environments (Linux, macOS). If you're on Windows, you can use `type secrets.txt` in the command prompt, or use a text editor like Notepad to open and view the file.

Each method has its use case. If the file is relatively small, `cat` is often the quickest. For larger files, `less` is better due to its pagination and search capabilities. `head` and `tail` are useful for quickly checking the beginning or end of a file without viewing the whole thing.
10. Someone told you to use 'sudo'. What does that scary word mean?
`sudo` stands for "superuser do". It's a command used in Unix-like operating systems (like Linux and macOS) that allows a user to execute commands with the privileges of the superuser (root). Essentially, it temporarily elevates your permissions to perform actions that require administrative rights. Be careful when using `sudo`, as incorrect commands executed with superuser privileges can potentially damage your system.

Using `sudo` is like saying "I know what I'm doing, and I need to do this as an administrator" before executing a command. For example, if you need to edit a system file, you might use `sudo nano /etc/hosts` to open the `hosts` file with the `nano` text editor, running as root. Without `sudo`, you might not have the necessary permissions to save changes to the file.
11. How do you copy a file named 'document.txt' to a new file named 'copy_of_document.txt'?
To copy a file named `document.txt` to a new file named `copy_of_document.txt`, you can use the `cp` command in a Unix-like environment (like Linux or macOS). The command would be:

```
cp document.txt copy_of_document.txt
```

This command tells the system to copy the contents of `document.txt` and create a new file named `copy_of_document.txt` with the same content. If `copy_of_document.txt` already exists, it will be overwritten. In Windows, you can use the `copy` command in the command prompt:

```
copy document.txt copy_of_document.txt
```
12. What command would you use to remove a file named 'old_file.txt'?
To remove a file named 'old_file.txt', you would use the `rm` command in a Unix-like environment (Linux, macOS, etc.):

```
rm old_file.txt
```

This command permanently deletes the specified file. Be cautious when using `rm`, as the file is typically not recoverable.
13. How would you find all files ending with '.txt' in your current directory?
To find all files ending with '.txt' in the current directory, I would use the `find` command or the `ls` command combined with `grep` in a Unix-like environment (e.g., Linux, macOS). Here are a couple of examples:

Using `find`:

```
find . -name "*.txt"
```

Using `ls` and `grep` (note the escaped dot, so it matches a literal `.`):

```
ls | grep "\.txt$"
```

The `find` command is generally preferred because it's more robust, especially when dealing with filenames containing spaces or special characters.
14. Explain what a 'directory' is, in Linux terms.
In Linux, a directory is a special type of file that serves as a container to organize other files and directories. Think of it as a folder in a graphical user interface. Directories are fundamental to the hierarchical file system structure, allowing users to group related files together for easier management and navigation.
Technically, a directory contains entries that map filenames to inodes. An inode holds metadata about the file, such as its permissions, size, and location on the disk. So, when you access a file through its name, the system uses the directory entry to find the inode and then retrieve the file's data.
15. How can you see the last 10 lines of a big file called 'log.txt'?
To see the last 10 lines of a big file called `log.txt`, you can use the `tail` command in Unix-like operating systems.

Simply open your terminal and type `tail -n 10 log.txt`. This command will display the last 10 lines of the specified file. If you want to view the lines dynamically as they're added to the file, you can use `tail -f log.txt`. This 'follows' the file, continuously displaying new lines as they are written.
16. If you accidentally type a command wrong, how can you fix it without retyping the whole thing?
You can use a few different approaches to fix a mistyped command without retyping everything:

- Arrow keys: use the left and right arrow keys to move the cursor to the point where the error occurred, and then make your corrections.
- `Ctrl+a`/`Ctrl+e`: `Ctrl+a` moves the cursor to the beginning of the line, and `Ctrl+e` moves it to the end.
- History: use the up arrow key to recall the previously entered command, then edit it as needed using the arrow keys or `Ctrl+a`/`Ctrl+e`.
- `fc` command (fix command): type `fc` and press Enter. This opens the last command in your default text editor. After editing, save and close the editor; the corrected command will then execute. `fc <number>` opens the command with the specified number in your history. For example, `fc 30` opens the 30th command in your history in the text editor.
- `!!` and `^old^new^`: `!!` executes the last command. You can use `^old^new^` to replace "old" with "new" in the last command and execute it. For example, if you typed `apt-get install fireefox`, you could correct it with `^fireefox^firefox^`.
17. What does the command `man ls` do? Why is it helpful?
`man ls` displays the manual page for the `ls` command in the terminal. The `man` command is used to access the system's manual pages.

It's helpful because it provides detailed information about the `ls` command, including its syntax, options (flags), and usage examples. This allows users to understand how to use `ls` effectively and discover options they might not have known about, improving their command-line proficiency. For example, `man ls` reveals options like `-a` (show hidden files), `-l` (long listing format), `-t` (sort by modification time), etc.
18. How would you rename a file called 'report.old' to 'report.new'?
To rename a file called 'report.old' to 'report.new', you can use the `mv` command in Unix-like systems (Linux, macOS) or the `Rename-Item` cmdlet in PowerShell on Windows.

- Unix-like systems: `mv report.old report.new`
- PowerShell: `Rename-Item report.old report.new`
19. What is the difference between a relative and an absolute path?
A relative path specifies the location of a file or directory relative to the current working directory. It doesn't start with a root directory (e.g., `/` on Linux/macOS or `C:\` on Windows). For example, `mydir/myfile.txt` tells the system to look for `myfile.txt` inside the `mydir` directory, which is located in the current directory.

An absolute path, on the other hand, specifies the complete path to a file or directory, starting from the root directory. It provides the exact location, regardless of the current working directory. An example on Linux/macOS is `/home/user/mydir/myfile.txt`, and on Windows it might be `C:\Users\user\mydir\myfile.txt`. Using absolute paths ensures the system always knows exactly where to find the file or directory.
20. If you need help with a command, what's the first thing you should try?
The first thing I would try is the command's built-in help. Most command-line tools offer help information, often accessed using flags like `--help` or `-h`. For example, if I'm unsure about the `ls` command, I'd run `ls --help` or `man ls` to display its manual page.
Linux Commands intermediate interview questions
1. How can you find files modified in the last 24 hours, but only those larger than 1MB?
To find files modified in the last 24 hours and larger than 1MB, you can use the `find` command in Unix-like systems. Here's the command:

```
find . -type f -mtime -1 -size +1M
```

- `.`: specifies the current directory as the starting point.
- `-type f`: limits the search to files only.
- `-mtime -1`: finds files modified in the last 24 hours (`-1` means less than 1 day ago).
- `-size +1M`: finds files larger than 1MB. The `+` sign means greater than.
2. Explain how to use `awk` to print specific columns from a file, separated by a custom delimiter.
To print specific columns from a file using `awk`, you use the `-F` option to specify the input field separator (delimiter) and then reference the columns using `$1`, `$2`, etc. to represent the first, second, and subsequent columns. You can use `OFS` to set the output field separator. For example, to print the first and third columns of a file named `data.txt`, separated by a comma, you would use the following command:

```
awk -F" " 'BEGIN {OFS=","} {print $1, $3}' data.txt
```

In this example, `-F" "` sets the input field separator to a space. `BEGIN {OFS=","}` sets the output field separator to a comma before processing any input lines. The `{print $1, $3}` part tells `awk` to print the first and third columns of each line, using the output field separator (comma) to separate them.
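With some invented sample data, the command above behaves like this:

```shell
# Two space-separated records: name, age, city
printf 'alice 30 london\nbob 25 paris\n' > data.txt

awk -F" " 'BEGIN {OFS=","} {print $1, $3}' data.txt
# Output:
# alice,london
# bob,paris
```

The comma in `print $1, $3` is what triggers `OFS`; writing `print $1 $3` instead would concatenate the fields with no separator.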
3. Describe a scenario where you'd use `xargs` and explain how it works with a practical example.
I would use `xargs` when I need to execute a command on a large number of input files or data received from `stdin`, where the number of arguments might exceed the command-line length limit. `xargs` takes the input (typically a list of filenames or data) from `stdin`, breaks it into smaller chunks, and then executes the specified command with those chunks as arguments.

For example, let's say I have a directory with thousands of `.txt` files and I want to count the number of lines in each of them using `wc -l`. I could use `find . -name "*.txt" | xargs wc -l`. `find` will locate all `.txt` files and output their names to `stdout`. `xargs` receives this list, groups the filenames into manageable chunks, and executes `wc -l` for each chunk, preventing the "argument list too long" error. If the number of files is small, `wc -l $(find . -name "*.txt")` would have worked, but this is unsafe, as file names with spaces or other unusual characters can cause problems; pairing `find -print0` with `xargs -0` avoids this by delimiting file names with null bytes instead of whitespace. Another useful flag is `-n <number>`, which specifies the maximum number of arguments per command. For example, `find . -name "*.txt" | xargs -n 10 wc -l` will run `wc -l` on batches of 10 files at a time.
4. How do you archive and compress a directory while excluding specific subdirectories or file types?
To archive and compress a directory while excluding specific subdirectories or file types, I'd typically use the `tar` command in conjunction with the `--exclude` option and `gzip` or `bzip2` for compression. For example, to create a gzip-compressed archive named `archive.tar.gz` of a directory `my_directory`, excluding a subdirectory named `excluded_dir` and all `.log` files, the command would be:

```
tar -czvf archive.tar.gz my_directory --exclude='my_directory/excluded_dir' --exclude='my_directory/*.log'
```

The `-c` option creates the archive, `-z` uses gzip for compression, `-v` provides verbose output (optional), and `-f` specifies the archive file name. Multiple `--exclude` options can be used to exclude several directories or file types. Alternatively, put the exclusion patterns in a file and specify it with `--exclude-from=file_with_exclusions`.
5. Explain how to redirect standard output and standard error separately to different files.
In Unix-like systems, you can redirect standard output (stdout) and standard error (stderr) to different files using shell redirection operators. `>` redirects stdout, and `2>` redirects stderr.

For example, to redirect stdout to `output.txt` and stderr to `errors.txt` for a command named `my_program`, you would use: `my_program > output.txt 2> errors.txt`. This ensures that normal output goes to one file and error messages to another, making debugging and log analysis easier.
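A self-contained sketch, using a compound command as a stand-in for `my_program` that writes to both streams:

```shell
# echo writes to stdout; '>&2' sends the second message to stderr
{ echo "normal output"; echo "an error" >&2; } > output.txt 2> errors.txt

cat output.txt   # contains: normal output
cat errors.txt   # contains: an error
```

To merge both streams into one file instead, the idiom is `my_program > all.txt 2>&1`, where `2>&1` points stderr at wherever stdout currently goes.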
6. How do you monitor the real-time disk usage of a specific directory?
To monitor real-time disk usage of a specific directory, you can use a combination of command-line tools and scripting. A common approach involves using `du` (disk usage) in conjunction with `watch`. For example, `watch -n 1 du -sh /path/to/directory` will update the disk usage summary every second. The `-s` flag provides a summary, `-h` makes the output human-readable, and `-n 1` specifies the update interval in seconds.

Alternatively, a script can be written to capture the `du` output and compare it over time. This allows for more sophisticated monitoring and alerting if the disk usage exceeds a certain threshold. For instance, you could periodically run `du -sb /path/to/directory` (using `-b` for bytes) and store the results, comparing the current size to previous values to detect rapid growth or unusual patterns.
7. Describe how to use `sed` to replace multiple different patterns in a file with a single command.
You can use `sed` to replace multiple patterns in a single command using several approaches. One common method is chaining substitution commands with the `-e` option. For example, `sed -e 's/pattern1/replacement1/g' -e 's/pattern2/replacement2/g' file.txt` replaces all occurrences of `pattern1` with `replacement1` and then all occurrences of `pattern2` with `replacement2` in `file.txt`. The `g` flag ensures global replacement (all occurrences on each line).
Another approach uses a single `sed` expression with multiple substitutions separated by semicolons: `sed 's/pattern1/replacement1/g; s/pattern2/replacement2/g' file.txt`. Both methods achieve the same result, replacing multiple patterns without running `sed` multiple times. Remember to escape special characters within the patterns or replacements as needed.
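A quick demonstration on a sample file, showing that the two forms produce identical output:

```shell
cd "$(mktemp -d)"
printf 'hello world\nfoo and bar\n' > file.txt

# Two -e expressions chained...
sed -e 's/hello/goodbye/g' -e 's/foo/baz/g' file.txt > out1.txt

# ...or one expression with semicolon-separated substitutions:
sed 's/hello/goodbye/g; s/foo/baz/g' file.txt > out2.txt

cat out1.txt
# goodbye world
# baz and bar
```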
8. Explain how to find processes consuming the most memory and display them sorted by memory usage.
To find the processes consuming the most memory, sorted by memory usage, on Linux-based systems you can use:
ps -eo pid,ppid,%mem,command --sort=-%mem | head
This command does the following:
- `ps -eo pid,ppid,%mem,command`: retrieves process information, specifically the process ID (pid), parent process ID (ppid), memory usage percentage (%mem), and the command being executed.
- `--sort=-%mem`: sorts the processes in descending order of memory usage; the `-` indicates descending order.
- `head`: displays the top processes with the highest memory consumption. Adjust the number of processes shown by changing `head`'s argument.
9. How can you schedule a command to run every 15 minutes using `cron`, and what are some potential pitfalls?
To schedule a command to run every 15 minutes using `cron`, add the following line to your crontab file:
*/15 * * * * command_to_execute
Some potential pitfalls include: not having the correct permissions to edit the crontab, using relative paths instead of absolute paths for commands (cron's environment is minimal), and timezone differences between the server and your local machine, which can lead to unexpected execution times. Also make sure the command itself doesn't fail silently, and check the cron logs (`/var/log/syslog` or `/var/log/cron`) for errors.
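A sketch of an entry that sidesteps the pitfalls above (the script path and log path are assumptions): absolute paths avoid cron's minimal `PATH`, and the redirection captures failures that would otherwise vanish silently.

```shell
# Illustrative crontab line: fires at minutes 0, 15, 30 and 45 of every hour.
entry='*/15 * * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1'
echo "$entry"
```

Install a line like this with `crontab -e`, then verify it with `crontab -l`.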
10. Explain how to use `rsync` to synchronize two directories, ensuring only changed files are copied.
`rsync` is a powerful tool for synchronizing directories. To synchronize two directories while copying only changed files, use the basic command `rsync -avz source_directory/ destination_directory/`. The `source_directory` holds the original files, and the `destination_directory` is where you want to copy or update them. The trailing slashes are important: they mean the contents of the source directory are copied into the destination directory.
The options `-a`, `-v`, and `-z` are commonly used. `-a` (archive) preserves permissions, ownership, timestamps, and symbolic links, and copies directories recursively. `-v` (verbose) shows the files being copied during the transfer. `-z` (compress) compresses the data during transfer, which can be useful over slow networks. `rsync` efficiently checks file timestamps and sizes to copy only files that have changed since the last synchronization; changed blocks inside a file are also detected and transferred. With the `--delete` option it also removes files from the destination that are no longer present in the source.
11. How can you display the last 10 lines of a file, but only if the file exists?
To display the last 10 lines of a file only if it exists, you can use a combination of shell commands. Here's how you can do it:
if [ -f "filename" ]; then
tail -n 10 "filename"
fi
This script first checks whether the file exists using the `[ -f "filename" ]` condition. If it does, `tail -n 10 "filename"` prints its last 10 lines; otherwise nothing is printed. Replace "filename" with the actual name of your file.
12. Explain how to use `cut` to extract data based on character position, and when it is more appropriate than `awk`.
The `cut` command extracts sections from each line of a file or input stream. To extract by character position, use the `-c` flag followed by a list or range of positions. For example, `cut -c 1-5 file.txt` extracts the first five characters of each line in `file.txt`. You can also specify a single character (`-c 3`), a comma-separated list (`-c 1,3,5`), or everything from a starting position to the end of the line (`cut -c 5-`).
`cut` is generally more appropriate than `awk` when you need to extract data based purely on character positions or a single delimiter, and the logic is straightforward. `awk` is better when the extraction logic is more complex, involving multiple delimiters, calculations, or conditional statements based on the content of the fields or lines. For simple character-based or single-delimiter extractions, `cut` is often faster and more concise; `awk` is more powerful but has a steeper learning curve and can be slower for simple tasks.
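The dividing line can be seen side by side on a small sample file:

```shell
cd "$(mktemp -d)"
printf 'alice:x:1001\nbob:x:1002\n' > users.txt

# Character positions with -c:
cut -c 1-5 users.txt        # prints: alice / bob:x

# A single delimiter is still cut territory (-d sets it, -f picks fields):
cut -d: -f1 users.txt       # prints: alice / bob

# awk earns its keep once conditions on field values are involved:
awk -F: '$3 > 1001 {print $1}' users.txt
```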
13. How can you use `watch` to monitor the output of a command and highlight changes?
You can use the `watch` command with the `-d` or `--differences` option to highlight the differences between successive updates of a command's output. `watch` repeatedly executes a command and displays its output; with `-d`, it highlights the portions that changed since the last iteration, which makes it easy to see what is being modified and is helpful for monitoring logs or system status. For instance:
watch -d 'ls -l'
This executes `ls -l` every 2 seconds (the default interval) and highlights any changes in the directory listing. The interval can be changed with the `-n` flag followed by seconds, e.g. `watch -n 1 -d 'ls -l'`.
14. Describe how to find all files in a directory that are owned by a specific user and group.
To find all files in a directory owned by a specific user and group, use the `find` command in a Unix-like environment:
find /path/to/directory -user username -group groupname -print
Replace `/path/to/directory` with the actual directory path, `username` with the specific user, and `groupname` with the specific group. The `-print` option (often implied) prints the found files to standard output. If you need to execute a command on each found file, use `-exec command {} \;` instead of `-print`.
15. Explain how to use `netstat` or `ss` to identify which process is listening on a specific port.
To identify the process listening on a specific port with `netstat`, run `netstat -tulnp | grep :<port_number>`, replacing `<port_number>` with the port you're interested in. The `-tulnp` options stand for: `-t` for TCP connections, `-u` for UDP connections, `-l` for listening sockets, `-n` for numeric addresses and port numbers (avoiding DNS lookups), and `-p` to display the PID and program name. The output shows the process ID (PID) and the name of the program listening on that port.
Alternatively, `ss` (socket statistics) is a more modern tool: `ss -tulnp | grep :<port_number>` achieves the same result, with the `-t`, `-u`, `-l`, `-n`, and `-p` options carrying the same meanings. `ss` is generally faster and provides more information than `netstat`.
16. How would you count the number of lines in all `.txt` files within a directory and its subdirectories?
To count the lines in all `.txt` files within a directory and its subdirectories, combine `find` to locate the files with `wc -l` to count the lines:
find . -name "*.txt" -print0 | xargs -0 wc -l
`find` locates all files ending in `.txt` starting from the current directory (`.`). The `-print0` option separates filenames with null characters, which is safer for filenames containing spaces or special characters. `xargs -0` then passes the null-separated list to `wc -l`, which counts the lines in each file. The output shows the line count per file as well as a total.
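A self-contained demonstration on a throwaway tree:

```shell
cd "$(mktemp -d)"
printf 'a\nb\n' > one.txt
mkdir sub
printf 'c\n' > sub/two.txt
printf 'ignored\n' > notes.md   # not a .txt file, so it is skipped

# Per-file counts plus a "total" line (3 lines across the two .txt files):
find . -name "*.txt" -print0 | xargs -0 wc -l
```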
17. Explain the difference between hard links and symbolic links, and when you might use each.
Hard links and symbolic links (symlinks) are both ways to create references to files in Unix-like systems, but they differ significantly in how they work.
- Hard Links: Act like additional names for the same underlying inode (data structure representing the file). All hard links have equal status. Deleting one hard link doesn't delete the file's content as long as at least one hard link remains. You cannot create hard links across different file systems or to directories (usually).
- Symbolic Links: Are essentially pointers to another file or directory. They contain the path to the target. Deleting a symlink doesn't affect the original file. However, if you delete the target file, the symlink becomes a "dangling" link (it points to a non-existent file). Symlinks can cross file system boundaries and can point to directories.
When to use which:
- Hard Links: Useful when you want to give a file another name within the same file system and you want to ensure the file persists as long as at least one of its links exists. Avoid if you need links across file systems.
- Symbolic Links: Suitable when you need a link across file systems, want to link to a directory, or need a way to indicate a file has been moved or renamed without breaking existing references. They are also suitable when it's acceptable for the link to become invalid if the original file is deleted.
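The inode behavior described above can be observed directly:

```shell
cd "$(mktemp -d)"
echo "content" > original.txt

ln original.txt hard.txt      # hard link: another name for the same inode
ln -s original.txt soft.txt   # symlink: a pointer to the path

# ls -li would show hard.txt sharing original.txt's inode number.
# Deleting the original leaves the hard link intact but dangles the symlink:
rm original.txt
cat hard.txt                            # still prints: content
cat soft.txt 2>/dev/null || echo "dangling symlink"
```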
18. How can you use `grep` to find lines that contain either 'foo' or 'bar' but not both?
You can use `grep` in a pipeline: first find lines containing either 'foo' or 'bar', then exclude lines containing the other word. Note that you need both of the following commands to cover the two cases (foo without bar, and bar without foo):
grep -E 'foo|bar' file.txt | grep 'foo' | grep -v 'bar'
grep -E 'foo|bar' file.txt | grep 'bar' | grep -v 'foo'
Alternatively, `awk` is more concise:
awk '(/foo/ || /bar/) && !(/foo/ && /bar/)' file.txt
The `awk` command keeps a line if it contains 'foo' or 'bar' but not both. Replace `file.txt` with your filename.
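Both approaches can be checked on a small sample file; a single-pipeline `grep` variant that handles both orderings is also shown:

```shell
cd "$(mktemp -d)"
printf 'only foo here\nonly bar here\nfoo and bar\nneither\n' > file.txt

# One pipeline: keep foo-or-bar lines, then drop lines containing both
# (in either order):
grep -E 'foo|bar' file.txt | grep -Ev 'foo.*bar|bar.*foo'

# The awk version keeps the same two lines:
awk '(/foo/ || /bar/) && !(/foo/ && /bar/)' file.txt
```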
19. Explain how to create a simple shell script that takes command-line arguments and uses them.
To create a shell script that takes command-line arguments, access them via the positional parameters `$1`, `$2`, `$3`, and so on; `$0` holds the script's own name. Here's a simple example:
#!/bin/bash
echo "Script name: $0"
echo "First argument: $1"
echo "Second argument: $2"
echo "Number of arguments: $#"
echo "All arguments: $*"
Save this as a `.sh` file (e.g., `my_script.sh`), make it executable (`chmod +x my_script.sh`), and run it with arguments like `./my_script.sh arg1 arg2 arg3`. The script prints the script name, each argument, the number of arguments (`$#`), and all arguments as a single string (`$*`).
20. How do you find the largest files on the entire system, excluding certain mount points like `/proc` and `/sys`?
To find the largest files on the entire system while skipping pseudo-filesystems, I would use the `find` command together with `du` and `sort`: `find` locates the files, `du` calculates their sizes, and `sort` orders them by size. The `-xdev` option keeps `find` on the starting filesystem, which excludes separately mounted trees like `/proc` and `/sys`.
Here's an example command:
find / -xdev -type f -print0 | xargs -0 du -h | sort -rh | head -n 10
This command works as follows:
- `find / -xdev -type f -print0`: finds all regular files (`-type f`) starting from the root directory (`/`), staying on the same filesystem (`-xdev`, which excludes other mount points like `/proc` and `/sys`), and prints their names separated by null characters (`-print0`).
- `xargs -0 du -h`: passes the null-separated file list to `du -h`, which reports each file's disk usage in human-readable format.
- `sort -rh`: sorts the `du -h` output in reverse (`-r`) order by human-readable size (`-h`).
- `head -n 10`: displays the top 10 largest files.
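The same pipeline can be exercised safely by scoping it to a throwaway directory instead of `/`:

```shell
cd "$(mktemp -d)"
dd if=/dev/zero of=big.bin bs=1024 count=200 2>/dev/null
dd if=/dev/zero of=small.bin bs=1024 count=10 2>/dev/null

# Identical structure to the system-wide command, just rooted at ".":
find . -xdev -type f -print0 | xargs -0 du -h | sort -rh | head -n 10
```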
Linux Commands interview questions for experienced
1. How would you diagnose and resolve a situation where a critical system service is consuming excessive CPU resources?
First, confirm the high CPU usage and identify the process using tools like `top`, `htop`, or `ps`. Once identified, use profiling tools such as `perf` or `strace`, or a Java profiler (e.g., VisualVM, JProfiler) if it's a Java application, to pinpoint the specific functions or code sections causing the spike. Analyze logs for errors or patterns that correlate with the high CPU usage.
To resolve, address the root cause identified during diagnosis. This might involve optimizing code, fixing memory leaks, addressing excessive I/O operations, tuning database queries, or increasing hardware resources. Apply changes incrementally, monitor the system after each change, and revert if needed to avoid further instability. Ensure adequate monitoring and alerting are in place to detect similar issues proactively in the future.
2. Describe your approach to troubleshooting network connectivity issues on a Linux server, considering various tools and techniques.
When troubleshooting network connectivity on a Linux server, I typically start with the basics. First, I use `ping` to check whether I can reach the destination IP address or hostname. If ping fails, I verify the local network configuration with `ip addr` (IP address, netmask, gateway) and check the routing table with `ip route`.
Next, I examine firewall rules using `iptables -L` or `firewall-cmd --list-all`, depending on the firewall in use. I also use `netstat -tulnp` or `ss -tulnp` to check which services are listening on which ports. `tcpdump` or `wireshark` allows packet capture for deeper inspection of network traffic. DNS resolution issues are checked with `nslookup` or `dig`. If the issue persists, I examine the system logs (`/var/log/syslog`, `/var/log/messages`, etc.) for any relevant error messages.
3. Explain how you would implement a robust backup and recovery strategy for a Linux-based database server.
A robust backup and recovery strategy for a Linux-based database server involves several layers. First, implement regular full backups, supplemented by incremental or differential backups to minimize data loss and backup time. Tools like `mysqldump`, `pg_dump`, `rsync`, or specialized database backup utilities can be used. Store these backups on a separate storage device or cloud service to protect against hardware failures. Regularly test the recovery process to ensure backups are valid and the restoration procedure is well-defined and effective.
Second, consider implementing replication or clustering for high availability. Database replication creates copies of the data on multiple servers, ensuring that if one server fails, another can take over with minimal downtime. Configure monitoring tools to detect failures promptly and automate the failover process. Document the entire backup and recovery procedure, including step-by-step instructions and contact information for responsible personnel. Automate as much of the backup and recovery process as possible to reduce human error.
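The backup-plus-restore-test discipline can be sketched generically. This is an assumption-laden illustration (a real database job would use `mysqldump`/`pg_dump` instead of `tar`, and ship the archive off-host):

```shell
cd "$(mktemp -d)"
mkdir -p data backups restore
echo "row1" > data/table.csv

# Dated, compressed backup: the same pattern a nightly cron job would run.
stamp=$(date +%F)
tar -czf "backups/data-$stamp.tar.gz" data/

# A backup is only as good as its restore test:
tar -xzf "backups/data-$stamp.tar.gz" -C restore
cat restore/data/table.csv
```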
4. Walk me through the steps you would take to harden a Linux server against common security threats and vulnerabilities.
- Keep the system updated: Regularly apply security patches using package managers like `apt` or `yum`. This addresses known vulnerabilities.
- Strong Passwords and Account Management: Enforce strong password policies, disable default accounts, and remove unnecessary user accounts. Consider using SSH keys for authentication instead of passwords.
- Firewall Configuration: Enable and configure a firewall (e.g., `iptables`, `firewalld`, `ufw`) to restrict network access to essential services only. Only open ports that are absolutely necessary.
- Disable Unnecessary Services: Disable or remove any services that are not required for the server's function. This reduces the attack surface.
- Intrusion Detection/Prevention System (IDS/IPS): Implement an IDS/IPS like `Fail2Ban` to automatically block malicious IP addresses that exhibit suspicious behavior.
- Regular Security Audits: Periodically review system logs and configurations to identify potential security issues. Use tools like `Lynis` or `Tiger` for automated security auditing.
- File System Security: Set appropriate file permissions and ownership to prevent unauthorized access to sensitive data.
- SELinux/AppArmor: Consider enabling and configuring SELinux or AppArmor for mandatory access control, an additional layer of security that restricts the actions processes can take.
- SSH Hardening: Disable root login via SSH, change the default SSH port, and use key-based authentication. Limit accepted ciphers and MACs in the `sshd_config` file.
- Logging and Monitoring: Configure comprehensive logging and monitoring to detect and respond to security incidents. Use tools like `auditd` to track system events.
5. How do you optimize the performance of a Linux server running a high-traffic web application?
Optimizing a high-traffic Linux web server involves several key areas. First, tune the web server itself (Apache or Nginx) via parameters such as `KeepAlive`, `MaxClients`, and caching, and enable compression (gzip/brotli) to reduce data transfer size. Second, optimize the database: use indexes appropriately, optimize queries, and consider caching frequently accessed data with tools like Redis or Memcached. Finally, optimize the OS: use a modern kernel, configure appropriate TCP settings, and monitor resource usage (CPU, memory, disk I/O) with tools like `top`, `htop`, `iotop`, and `vmstat` to identify bottlenecks. Also consider a Content Delivery Network (CDN) to cache static assets closer to users, reducing the load on the origin server.
6. Describe your experience with containerization technologies like Docker and Kubernetes in a Linux environment.
I have experience using Docker for containerizing applications in Linux environments, including building Docker images using Dockerfiles, managing containers with Docker Compose, and pushing images to container registries like Docker Hub. I'm familiar with concepts like layers, volumes, and networking within Docker. I've also worked with Kubernetes for orchestrating container deployments, scaling applications, and managing resources. This includes deploying applications using YAML manifests, managing deployments, services, and pods, and using kubectl to interact with Kubernetes clusters. I understand concepts like namespaces, deployments, services (NodePort, LoadBalancer), and ingress. I have experience with troubleshooting container related issues, monitoring container health and resource utilization.
7. Explain how you would automate the deployment and configuration of Linux servers using tools like Ansible or Puppet.
I would automate Linux server deployment and configuration using Ansible. First, I'd create an Ansible inventory file listing the target servers. Then I'd develop Ansible playbooks defining the desired state of each server, including tasks like installing packages (e.g., `apt install nginx`), configuring services (e.g., modifying `/etc/nginx/nginx.conf`), managing users, and deploying applications. I would manage the playbooks in version control (e.g., Git).
To execute the deployment, I'd run the Ansible playbook against the inventory. Ansible connects to the servers via SSH and executes the tasks defined in the playbook, ensuring idempotent operations (only making changes when necessary). For more complex deployments, I'd use Ansible roles to modularize the playbooks and promote reusability, which makes it possible to manage large, complex infrastructure setups efficiently and consistently.
8. How do you monitor the health and performance of a Linux system and proactively identify potential issues?
I monitor the health and performance of Linux systems using a combination of tools and techniques. Key areas include CPU usage, memory utilization, disk I/O, network traffic, and system logs. I leverage tools like `top`, `htop`, `vmstat`, `iostat`, `netstat`, and `sar` for real-time monitoring and historical data analysis.
Proactive identification involves setting up thresholds and alerts using tools like Nagios, Prometheus, or even simple scripting with cron jobs. For example, I might set an alert if CPU usage exceeds 80% or if disk space falls below 10%. Log aggregation and analysis tools such as the ELK stack (Elasticsearch, Logstash, Kibana) help in identifying unusual patterns or errors that might indicate potential problems before they escalate.
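A threshold check like the ones described can be a few lines of shell. The 90% threshold and the `echo` are placeholders; a real job would run from cron and send a notification instead:

```shell
THRESHOLD=90

# df -P guarantees one line per filesystem; field 5 is "Use%" (e.g. "42%").
usage=$(df -P / | awk 'NR==2 {gsub(/%/, ""); print $5}')

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "ALERT: / is ${usage}% full"
else
    echo "OK: / is ${usage}% full"
fi
```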
9. Describe a time when you had to debug a complex performance issue on a Linux server. What tools and techniques did you use?
During a high-traffic period, our web application experienced significant slowdowns. Initially, I suspected database bottlenecks. I used `top` and `htop` to monitor CPU, memory, and processes, which revealed high CPU utilization by the web server processes. Next, I ran `strace` on the affected processes to observe system calls, identifying excessive disk I/O. Further investigation with `iotop` pinpointed slow read operations on specific log files.
It turned out a misconfigured logging library was writing verbose debug information to disk, overwhelming the I/O subsystem. After adjusting the logging level and rotating the logs more frequently, the performance issue was resolved. We also set up monitoring and alerts with `Prometheus` and `Grafana` to detect such issues proactively in the future.
10. Explain how you would manage and troubleshoot user authentication and authorization in a Linux environment.
To manage and troubleshoot user authentication and authorization in Linux, I'd start by examining the `/etc/passwd`, `/etc/shadow`, and `/etc/group` files for user and group account information. For authentication issues, I'd check `/var/log/auth.log` (or similar, depending on the distribution) for error messages related to failed login attempts. Common problems include incorrect passwords, locked accounts, or disabled accounts. I'd use commands like `passwd`, `chage`, and `usermod` to manage user accounts and password policies.
For authorization problems, I'd focus on file permissions and ownership: `ls -l` to inspect permissions, and `chown` and `chmod` to modify them as needed. I'd also check for Access Control Lists (ACLs) with `getfacl` and modify them with `setfacl` if necessary. Additionally, I'd review the `sudo` configuration in `/etc/sudoers` using `visudo` to ensure users have the correct elevated privileges. SELinux is another factor: I'd check the output of `sestatus` if it's enabled, and review `audit.log` to make sure there are no access denials.
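The permission-inspection side of this can be demonstrated on a throwaway file (GNU `stat` on Linux is assumed):

```shell
cd "$(mktemp -d)"
touch secret.txt
chmod 640 secret.txt            # rw for owner, r for group, nothing for others

ls -l secret.txt                # symbolic view: -rw-r-----
stat -c '%a %U:%G' secret.txt   # octal mode plus owner:group
```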
11. How would you go about auditing the security of a Linux system to identify potential vulnerabilities?
Auditing a Linux system for security vulnerabilities involves a multi-faceted approach. First, I'd perform vulnerability scanning using tools like `Nmap`, `OpenVAS`, or `Nessus` to identify known vulnerabilities in installed software; these tools check for outdated packages, misconfigurations, and common security flaws. I'd also review system logs for any suspicious activity; regularly monitoring logs with tools like `auditd` or `syslog` can help catch intrusion attempts or policy violations.
Second, I'd conduct manual checks focusing on user accounts, permissions, and file integrity. This includes reviewing `/etc/passwd` and `/etc/shadow` for weak or default passwords, verifying proper file permissions, and using tools like `Tripwire` or `AIDE` to detect unauthorized file modifications. Checking for rootkits with tools like `rkhunter` and `chkrootkit` would also be part of the audit. Finally, reviewing firewall rules (iptables/nftables) and ensuring proper network segmentation is crucial, since both prevent lateral movement within the network in case of a breach.
12. Describe your experience with scripting languages like Bash or Python for automating tasks on Linux systems.
I have extensive experience using Bash and Python for automating tasks on Linux systems. With Bash, I've created scripts for system administration tasks such as user management, log rotation, and system backups. I'm proficient in using standard utilities like `sed`, `awk`, and `grep` within Bash scripts to manipulate text and data.
I've also used Python extensively for more complex automation: scripts to automate application deployments, monitor system performance using libraries like `psutil`, and interact with REST APIs using the `requests` library. For example, I've created Python scripts to automatically provision virtual machines and configure network settings via cloud provider APIs. I often use the `os`, `subprocess`, and `shutil` modules for file system interactions and command execution. Furthermore, I routinely keep these scripts in version control (Git) and follow best practices for code quality and maintainability.
13. Explain how you would implement and manage a firewall on a Linux server to protect it from unauthorized access.
I would typically use `iptables` or `firewalld` to implement and manage a firewall on a Linux server; `firewalld` is generally preferred on newer systems. First, I would define the desired security policy: block all incoming traffic by default, then selectively open the ports necessary for legitimate services (e.g., port 22 for SSH, ports 80/443 for HTTP/HTTPS). With `firewalld`, this involves defining zones (e.g., public, private, trusted), assigning network interfaces to appropriate zones, and using `firewall-cmd` to open specific ports and services within those zones, making the rules persistent with the `--permanent` flag. Regularly reviewing logs (`/var/log/firewalld` or `/var/log/messages`) helps identify potential security breaches or misconfigurations. For `iptables`, I would define similar rules with the `iptables` command and save them to persist across reboots, for example using `iptables-save > /etc/iptables/rules.v4`, restoring them during the boot process.
14. How would you troubleshoot a situation where a Linux server is experiencing high disk I/O?
To troubleshoot high disk I/O on a Linux server, I'd start by identifying the processes causing the I/O. Tools like `iotop` or `dstat` can pinpoint which processes are reading from or writing to the disk heavily. Once identified, I'd investigate the process's configuration and behavior. For example, if it's a database, I'd check its query logs and caching configuration; if it's a log aggregator, I'd check whether it's configured to write overly verbose logs.
15. Describe your experience with managing and troubleshooting Linux kernel modules.
My experience with Linux kernel modules involves both managing and troubleshooting them. I've worked with modules for custom hardware interfaces and device drivers. My typical workflow uses `modprobe` to load and unload modules, `lsmod` to list currently loaded modules, and `dmesg` to check for module-related errors during loading or operation. When troubleshooting, I often add `printk` calls in the module code to output debugging information and examine the kernel logs. I also use tools like `gdb` with `kdump` to analyze kernel crashes caused by faulty modules, and I've managed module dependencies via the `modules.dep` file.
16. Explain how you would configure and manage a DNS server on a Linux system.
To configure and manage a DNS server on Linux, I'd typically use BIND (Berkeley Internet Name Domain). First, I'd install the `bind9` package using the distribution's package manager (e.g., `apt install bind9` on Debian/Ubuntu). Then I'd configure the primary configuration file, usually `/etc/bind/named.conf.options`, to set up forwarding and listening interfaces. Zones are declared in `/etc/bind/named.conf.local`, with their resource records (A, CNAME, MX, etc.) defined in the corresponding zone files.
Managing the DNS server involves using tools like `rndc` to reload the configuration after changes (`rndc reload`). I'd also monitor logs (typically `/var/log/syslog` or a BIND-specific log file) for errors or unusual activity. Updating DNS records involves editing the zone files and incrementing the serial number to ensure proper propagation. Tools like `nslookup` or `dig` would be used for testing and troubleshooting.
17. How would you go about analyzing and interpreting system logs on a Linux server to identify potential problems?
To analyze system logs on a Linux server, I'd start by identifying the relevant log files, such as `/var/log/syslog`, `/var/log/auth.log`, `/var/log/kern.log`, and application-specific logs. I'd use command-line tools like `grep`, `awk`, `sed`, `tail`, and `less` to search for error messages, warnings, and unusual patterns; for example, `grep 'error' /var/log/syslog` shows error messages. I'd also read the logs from the most recent entries backwards to pin down when certain errors started appearing.
Next, I'd correlate events across multiple log files to understand the sequence of events leading to a problem; analyzing timestamps is crucial for this. I'd look for recurring errors or patterns that might indicate a systemic issue, resource exhaustion, or a security breach. The `logrotate` configuration should be checked too, to make sure log data hasn't rolled off.
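A typical pattern-hunting one-liner, shown here on a fabricated log fragment:

```shell
cd "$(mktemp -d)"
# Sample log lines for illustration only:
cat > app.log <<'EOF'
2024-01-01 10:00:01 INFO  started
2024-01-01 10:00:02 ERROR db timeout
2024-01-01 10:00:03 ERROR db timeout
2024-01-01 10:00:04 WARN  slow request
EOF

# Count occurrences of each log level: a quick way to spot error spikes.
awk '{print $3}' app.log | sort | uniq -c | sort -rn
```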
18. Describe your experience with managing and troubleshooting virtualized environments on Linux using tools like KVM or Xen.
I have experience managing and troubleshooting virtualized environments, primarily using KVM on Linux. My tasks have included creating and configuring VMs, managing virtual networks (bridges, VLANs), allocating storage using LVM or qcow2 images, and monitoring VM performance with tools like `top`, `vmstat`, and `virsh`. I've also used `libvirt` for managing KVM guests.
For troubleshooting, I've addressed issues such as VM performance bottlenecks (CPU, memory, I/O), network connectivity problems, and storage failures. This involved analyzing logs (e.g., `/var/log/libvirt/qemu/`), using `tcpdump` or `Wireshark` to diagnose network issues, and employing `iotop` to identify I/O-intensive processes affecting VM performance. I have also used `virt-viewer` to troubleshoot graphical issues with VMs.
19. Explain how you would implement and manage a VPN server on a Linux system.
To implement a VPN server on Linux, I would typically use OpenVPN or WireGuard. First, I'd install the chosen VPN server software using the package manager (e.g., apt install openvpn
or apt install wireguard
). Next, I'd configure the server by editing the configuration file (e.g., /etc/openvpn/server.conf
). Key steps include setting up the VPN subnet, defining the port and protocol (UDP or TCP), and generating the necessary certificates and keys using tools like easy-rsa
for OpenVPN or wg genkey
and wg pubkey
for WireGuard.
For management, I'd ensure the VPN service is enabled and starts on boot (`systemctl enable openvpn@server` or `systemctl enable wg-quick@wg0`). I'd also configure the firewall (iptables or ufw) to allow VPN traffic and enable IP forwarding. Regular monitoring of the VPN server's logs (usually in `/var/log/syslog` or `/var/log/openvpn.log`) is crucial for troubleshooting and security. I would set up client configurations (`.ovpn` files for OpenVPN, or a configuration file and keys for WireGuard) so users can securely connect to the VPN server, and distribute them via a secure channel.
20. How would you troubleshoot a situation where a Linux server is running out of memory?
First, I'd identify the cause using tools like `free -m` to check memory usage, `top` or `htop` to see which processes are consuming the most memory, and `vmstat` for overall system memory statistics. I'd also check logs (`/var/log/syslog`, `/var/log/kern.log`) for out-of-memory (OOM) killer events.
Once the culprit is identified, the solution depends on the cause. It could involve stopping or restarting memory-hogging processes, optimizing application code to reduce the memory footprint, adding more RAM if the server is consistently under-resourced, or implementing swap space as a temporary measure (though this impacts performance). For a runaway process, tools like `gdb` or memory profilers might be necessary to pinpoint memory leaks or inefficient resource utilization. For a Java application, `jmap` and `jconsole` may be useful.
21. Describe your approach to managing and resolving software dependencies on a Linux system.
My approach to managing software dependencies on a Linux system revolves around the system's package manager (e.g., `apt` on Debian/Ubuntu, `yum` or `dnf` on Fedora/CentOS/RHEL, `pacman` on Arch Linux). I prioritize the package manager for installing, updating, and removing software because it automatically handles dependency resolution. For example, `apt install <package>` will pull in any necessary dependencies. I regularly update the package lists with commands like `apt update` (or the equivalent for other package managers) to ensure I have the latest versions and dependency information.
When dealing with project-specific dependencies (especially for development), I often use virtual environments (e.g., Python's `venv` or Conda environments). This isolates project dependencies from the system-wide packages and prevents conflicts. I also keep project dependency lists (`requirements.txt` for Python, `package.json` for Node.js) to easily recreate the environment on different systems. Tools like `pip` (for Python) or `npm` (for Node.js) manage these project-specific dependencies. If conflicts arise, I investigate the conflicting packages and their version constraints, looking for compatible versions or alternative solutions; manual installation or compiling from source is a last resort, and I document those steps carefully.
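A related habit is having scripts verify their own tool dependencies before doing any work. This sketch checks for required commands with `command -v` (the tool names here are arbitrary examples):

```shell
# Fail fast if any required tool is missing from PATH
missing=0
for tool in awk sed tar; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "missing required tool: $tool" >&2
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all dependencies present"
```

`command -v` is POSIX and works in any shell, which makes it preferable to `which` for portability.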
22. Explain how you would configure and manage a mail server on a Linux system.
To configure and manage a mail server on Linux, I'd start by selecting an MTA (Mail Transfer Agent) like Postfix or Exim. Installation is typically done via the system's package manager (e.g., `apt install postfix` or `yum install postfix`). Configuration involves editing the main configuration file (e.g., `/etc/postfix/main.cf`) to set parameters like the hostname, the domains to handle, and the network interfaces to listen on. Security is crucial, so I'd configure TLS encryption for secure communication, implement SPF, DKIM, and DMARC to prevent spoofing, and potentially use a firewall like `iptables` or `firewalld` to restrict access to the mail server ports.
Management includes monitoring the mail server logs (usually in `/var/log/mail.log` or similar), managing user accounts (often integrated with the system's user database), handling queues, and regularly updating the MTA software to patch security vulnerabilities. Tools like `mailq` (to view the mail queue), `postconf` (to view the Postfix configuration), and `tcpdump` (for network analysis) are useful for troubleshooting. Using a configuration management tool like Ansible can also help in consistently deploying and managing mail server configurations across multiple systems.
23. How do you stay up-to-date with the latest security threats and vulnerabilities in the Linux ecosystem and apply appropriate patches and mitigations?
I stay updated on Linux security through several channels. First, I actively monitor security mailing lists like `oss-security` and vendor-specific lists (e.g., Debian Security Announce, Red Hat Security Advisories). These lists provide early warnings about new vulnerabilities. I also follow security blogs, news sites, and vulnerability databases like the National Vulnerability Database (NVD) and CVE Details. Finally, I participate in security-focused forums and communities to exchange information and learn from others' experiences.
To apply patches and mitigations, I primarily use the package manager (`apt`, `yum`, `dnf`). I regularly run commands like `apt update && apt upgrade` or `yum update` to install the latest security updates. I also use tools like `lynis` or OpenVAS to scan for vulnerabilities and configuration issues. In addition, I follow security best practices, such as disabling unnecessary services, using strong passwords, and configuring firewalls (e.g., `iptables` or `firewalld`). I configure automatic security updates where appropriate to minimize the window of vulnerability exposure.
Linux Commands MCQ
Which `find` command option would you use to locate files in the current directory that were modified within the last 24 hours?
Which command will find all lines in 'logfile.txt' that contain the word 'error' but do not contain the word 'warning'?
Which command will accurately count the number of directories present in your current working directory?
Which command will recursively list all files and directories in the current directory, sorted by file size (largest to smallest)?
Which command will find and delete all empty files in the directory `/home/user/docs`?
Which command can be used to identify the process currently consuming the most memory?
Which `find` command will locate all files in the `/home/user/documents` directory that are owned by the user 'john' and the group 'developers'?
Which command is used to display the IP address associated with a network interface (e.g., eth0) on a Linux system?
Which of the following commands will find all files larger than 10MB in the current directory?
Which command can be used to determine the user ID (UID) that a specific process is running under, given its process ID (PID)?
Which command will display the most recently modified file or directory in the current directory?
Which command is used to display the differences between two files?
Which command can be used to find the process ID (PID) of a process given its name?
Which command would you use to find the total size, in human-readable format, of all files within a directory (but not its subdirectories)?
Which `find` command option is used to search for files with specific permissions, such as files with read, write, and execute permissions for the owner?
Which command is used to display the last 10 lines of a file named 'example.txt'?
Which command will find all files ending with '.txt' in the current directory and its subdirectories, and replace all occurrences of the string 'old' with 'new' within those files?
Which command will find all files in the current directory that have been accessed within the last 5 minutes?
Which command will find all files in the current directory that were modified more than 7 days ago?
Which command can be used to count the number of lines, words, and characters in a file named 'example.txt'?
Which command pipeline will list the top 10 largest files in the current directory, sorted by size in descending order?
Which command will find all symbolic links in the current directory?
Which command will find files in the current directory that have been changed but not accessed in the last 7 days?
Which of the following `find` commands will locate all files in the current directory that do not end with the extension '.txt'?
Which of the following `find` commands will locate all files in the current directory and its subdirectories that are executable by everyone (user, group, and others)?
Which Linux Commands skills should you evaluate during the interview phase?
It's impossible to fully evaluate a candidate's Linux skills in a single interview. However, focusing on core areas provides a solid understanding of their capabilities. These skills are fundamental for anyone working with Linux systems.

Basic Linux Commands
You can easily assess this with an online test. Our Linux online test includes MCQs to filter candidates with a solid understanding of the basics.
Here's a question you can ask to evaluate their understanding of basic commands.
What is the difference between the `cp`, `mv`, and `rm` commands?
Look for explanations that include copying, moving, and deleting files or directories. A good answer will also highlight the potential for data loss with the `rm` command.
Shell Scripting
Assess shell scripting skills with targeted MCQs. A Shell Scripting test can help you identify candidates who can automate tasks and solve problems using shell scripts.
Here is a sample interview question you could ask.
Write a shell script to find all files in a directory older than 7 days and delete them.
The candidate should demonstrate knowledge of `find`, `-mtime`, and `rm`. Bonus points if they include error handling and a dry-run option.
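A strong answer might look like the sketch below. It is demonstrated on a throwaway temp directory so it is safe to run as-is; in a real script the target directory would be an argument rather than `mktemp -d` (`touch -d` is the GNU coreutils form):

```shell
#!/bin/sh
# Delete files older than 7 days, with a dry-run pass first.
dir=$(mktemp -d)                       # demo target; use "$1" in a real script
touch -d '10 days ago' "$dir/old.log"  # simulate a stale file (GNU touch)
touch "$dir/new.log"

echo "Would delete:"
find "$dir" -type f -mtime +7 -print   # dry run: list candidates only

find "$dir" -type f -mtime +7 -delete  # actual deletion
echo "Remaining:"
ls "$dir"
```

`-mtime +7` matches files last modified more than 7 full days ago; using `find`'s built-in `-delete` avoids the quoting pitfalls of piping filenames to `rm`.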
File System Navigation
Testing this skill with MCQs is straightforward. A broad technical aptitude assessment can help you screen candidates for familiarity with basic system concepts.
To check if a candidate is comfortable navigating the file system, ask:
How would you find a specific file if you only know part of its name and its location is unknown?
The candidate should mention using the `find` command with wildcards, or the `locate` command. Look for them to explain how to narrow the search to specific directories if needed.
3 Tips for Maximizing Your Linux Commands Interviews
Before you put your newfound knowledge of Linux commands interview questions to use, here are a few tips to help you conduct even more effective interviews. These tips will enable you to better assess candidates and ensure you're making the right hiring decisions.
1. Leverage Skills Assessments to Filter Candidates
Incorporating skill assessments into your hiring process significantly improves the quality of your candidate pool. Skill tests offer an objective measure of a candidate's abilities, allowing you to quickly identify those who possess the practical knowledge required for the role.
For Linux roles, consider using assessments that cover areas like basic Linux commands, system administration, or shell scripting. Adaface offers various Linux-related skill tests, including a Linux Online Test, a System Administration Online Test, and a Linux Bash Test.
By using these assessments, you streamline the interview process by focusing your time on candidates who have already demonstrated a base level of proficiency. This approach not only saves time but also increases the likelihood of making a successful hire.
2. Outline Targeted Interview Questions
Time is valuable during interviews, and you need to make the most of it. Carefully select questions that target the most important aspects of a candidate's Linux command knowledge and experience.
Prioritize questions that reveal the depth of their understanding and ability to apply commands in real-world scenarios. Consider incorporating questions related to other technical areas such as cloud computing, cybersecurity, or even soft skills like communication, depending on the role's requirements. You can find example questions in the Cyber Security interview questions page.
A well-structured interview with relevant questions maximizes your chances of accurately evaluating candidates on the most important skills.
3. Always Ask Follow-Up Questions
Don't rely solely on initial answers; always probe deeper with follow-up questions. This is the key to truly understanding a candidate's capabilities and identifying any gaps in their knowledge or experience.
For instance, if a candidate explains how to use the `grep` command, a follow-up question could be: "Can you describe a situation where using `grep` with regular expressions would be particularly useful, and why?" This helps assess their practical application of the command and their problem-solving abilities.
Evaluate Linux Skills Accurately with Skills Tests
Hiring Linux professionals requires verifying their skills with accuracy. Using skills tests is the most effective way to assess a candidate's Linux command proficiency. Check out Adaface's Linux Online Test or System Administration Online Test for comprehensive skills assessment.
Once you've used skills tests to identify top candidates, you can confidently proceed to interviews. To get started with your skills assessment, visit Adaface's Online Assessment Platform and begin your journey toward hiring top talent.
Download Linux Commands interview questions template in multiple formats
Linux Commands Interview Questions FAQs
Linux command proficiency is a strong skill for many technical roles. Interview questions help assess a candidate's practical understanding and problem-solving abilities within a Linux environment.
The expected level depends on the role. Freshers should know basic commands, while experienced candidates should demonstrate expertise in scripting, system administration, and troubleshooting.
Besides asking theoretical questions, consider practical exercises or simulations where candidates can demonstrate their ability to use Linux commands to solve real-world problems. Skills tests are also a great tool.
Common mistakes include not understanding command options, incorrect syntax, lacking knowledge of scripting, and difficulty with file permissions and ownership. Recognizing these can help identify areas for development.
Problem-solving, analytical thinking, and communication skills are essential. Candidates should be able to explain their approach and reasoning when using Linux commands to address specific challenges.
Skills tests provide a standardized way to assess a candidate's competence in using Linux commands, offering insights beyond what traditional interview questions can provide. They offer an objective, hands-on evaluation.
