Automated RE of Kernel Configurations

Kconfig (short for kernel configuration) is a component of the Kbuild build system for the Linux kernel. The Linux kernel is highly customizable, and configuration is required both to build the kernel and to generate kernel headers. In this blog post, I introduce a new Binary Ninja plugin that analyzes Linux kernel binaries to recover kernel configuration options.

There are many reasons one might need to recover a Linux kernel configuration post-build. My inspiration for this project is to make it easier to generate kernel headers for building LKMs that will load on target Linux devices (where source isn’t available). The Linux kernel includes multiple mechanisms that verify LKMs at load time to ensure they are compatible and won’t cause the kernel to become unstable. By recovering a Linux kernel’s build configuration, the kernel can be rebuilt and compatible kernel headers can be generated from the upstream source. These kernel headers can then be used to build LKMs that will [hopefully] load on the target device.

Intro to Kconfig

Kbuild is the Linux kernel build system. It primarily exists to parse the Kconfig macro language and set the proper flags (based on the user-provided configuration options) during build. Under the hood, it uses GNU make. The first step when building Linux is to create the .config file. This is the configuration. During build, these options are used to set C preprocessor definitions, define symbols, and more. For example, an option set to y in .config becomes a #define CONFIG_<NAME> 1 line in the generated include/generated/autoconf.h header. A more thorough explanation of the kernel build process can be found in the kernel’s Kbuild documentation. The rest of this section is focused solely on the format of the generated .config file.

Linux build configuration begins by specifying the architecture for the platform the kernel is intended to run on. When the architecture is specified, Kbuild processes the corresponding Kconfig file. The Kconfig file consists of a custom macro language that Kbuild uses to know which configuration options to set automatically and which options to ask the user to set. Tools like menuconfig build a tree-like menu that the user can edit to change options. After all the options are supplied, the .config file gets generated. This is a text file that resembles the following format:

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86 4.19.208 Kernel Configuration
#

#
# Compiler: gcc-8 (Debian 8.3.0-6) 8.3.0
#

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_LOCALVERSION=""
CONFIG_BUILD_SALT="4.19.0-18-amd64"

Reverse Engineering Configuration Options

Most configuration options can be recovered by analyzing the Linux kernel binary post-build. Doing this manually is a time-intensive and tedious task, depending on how many options you need to reverse engineer. This section describes how to reverse engineer a config option manually, and how the Binary Ninja API can be leveraged to do it for you.

The first option I will use for demonstration is CONFIG_BUILD_SALT. By looking at the upstream Linux kernel source code, we can determine that CONFIG_BUILD_SALT is used to define the utsname version member. The source code also indicates that the sched_debug_header function supplies the utsname version string as the fourth argument in a call to seq_printf.

static void sched_debug_header(struct seq_file *m)
{
	u64 ktime, sched_clk, cpu_clk;
	unsigned long flags;

	ktime = ktime_to_ns(ktime_get());
	sched_clk = sched_clock();
	cpu_clk = local_clock();

	SEQ_printf(m, "Sched Debug Version: v0.11, %s %.*s\n",
		init_utsname()->release,
		(int)strcspn(init_utsname()->version, " "),
		init_utsname()->version);

By locating and analyzing sched_debug_header in the Linux kernel binary, we can see that it corresponds with the code in the upstream kernel source, and we can conclude that the fourth argument in the call to seq_printf is indeed a pointer to the utsname version string.

sched_debug_header call to seq_printf

If doing this manually, we would proceed to open our config file in a text editor and type CONFIG_BUILD_SALT="4.19.0-18-amd64". Instead, we’re going to use the Binary Ninja API to automate this operation by writing code that takes the following steps:

  1. Locate the sched_debug_header function
  2. Iterate through the function’s HLIL instructions to locate the first call to seq_printf
  3. Get the fourth argument for the call to seq_printf and verify that it is a pointer
  4. Get the string that the pointer is pointing to (the build version)
    def _recover_config_build_salt(self) -> str:
        syms = self._bv.get_symbols_by_name('sched_debug_header')
        if not syms:
            logging.error('Failed to lookup sched_debug_header')
            return None

        sched_debug_header = self._bv.get_function_at(syms[0].address)
        if not sched_debug_header:
            logging.error('Failed to get function sched_debug_header')
            return None

        syms = self._bv.get_symbols_by_name('seq_printf')
        if not syms:
            logging.error('Failed to lookup seq_printf')
            return None

        for block in sched_debug_header.high_level_il:
            for instr in block:
                if instr.operation != HighLevelILOperation.HLIL_CALL:
                    continue

                if instr.dest.operation != HighLevelILOperation.HLIL_CONST_PTR:
                    continue

                if to_ulong(instr.dest.constant) == syms[0].address:
                    if len(instr.params) < 3:
                        logging.error(
                            'First call in sched_debug_header is not to seq_printf!?')
                        return None

                    if instr.params[
                            2].operation != HighLevelILOperation.HLIL_CONST_PTR:
                        logging.error(
                            'param3 of seq_printf call is not a pointer')
                        return None

                    s = self._bv.get_string_at(
                        to_ulong(instr.params[2].constant))
                    if not s:
                        logging.error('Failed to get build salt string')
                        return None

                    return s.value

Thankfully, not all of the configuration options require analyzing code. Many configuration options can be determined based on the presence of a symbol for an exported function or global data variable. An example of this type of option is CONFIG_TICK_ONESHOT. By looking at the Linux upstream source code we can see that this option is used by a Makefile to determine whether or not to use the tick-broadcast-hrtimer.o object file as part of the kernel build.

obj-$(CONFIG_GENERIC_CLOCKEVENTS)		+= clockevents.o tick-common.o
obj-y						+= tick-broadcast.o
obj-$(CONFIG_TICK_ONESHOT)			+= tick-broadcast-hrtimer.o

This means that if any symbols defined in tick-broadcast-hrtimer.c are in the resulting kernel build, then CONFIG_TICK_ONESHOT is set. Otherwise, it is not set. tick-broadcast-hrtimer.c exports the function tick_program_event. By writing code around the BN API, we can automate recovery of this option:

    def _set_if_symbol_present(self, name: str) -> ConfigStatus:
        if not self._bv.get_symbols_by_name(name):
            return ConfigStatus.NOT_SET
        return ConfigStatus.SET

    def _recover_config_tick_oneshot(self) -> ConfigStatus:
        return self._set_if_symbol_present('tick_program_event')

The code above attempts to look up the tick_program_event symbol. If the lookup fails, then the CONFIG_TICK_ONESHOT configuration option is not set; if it succeeds, then it is set. There are many types of configuration options. A large portion of them can be knocked out using the symbol lookup method. Others require analyzing code, data structures, and more.
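The symbol lookup method lends itself to a table-driven design. The sketch below is illustrative only and is not taken from the plugin: the mapping dictionary, the function name, and the CONFIG_MODULES/module_put pairing are hypothetical, and it assumes a Binary Ninja BinaryView (bv) with the standard get_symbols_by_name method. Only the CONFIG_TICK_ONESHOT entry comes from this post.

```python
# Hypothetical option-to-marker-symbol table; extend one entry per
# symbol-presence-based config option.
SYMBOL_OPTIONS = {
    'CONFIG_TICK_ONESHOT': 'tick_program_event',
    'CONFIG_MODULES': 'module_put',  # assumed marker for LKM support
}

def recover_symbol_options(bv):
    """Map each option to 'y' or 'n' based on symbol presence."""
    results = {}
    for option, symbol in SYMBOL_OPTIONS.items():
        results[option] = 'y' if bv.get_symbols_by_name(symbol) else 'n'
    return results
```

Recovering a whole family of options then becomes a matter of growing the table rather than writing a new recovery function per option.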

What about /proc/config.gz?

RE of the kernel binary is not always necessary to gain access to the Linux kernel configuration. Sometimes kernels are built with the following configuration options:

CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
Kernels built with the “in-kernel configuration support” options bundle the kernel configuration file into the kernel binary. On the running system, the configuration is exposed to user-space at /proc/config.gz. In this scenario, the config.gz archive can be copied off of the device and used to reproduce the build. However, in my experience, most distributed Linux kernels don’t enable this option, which is why it is often necessary to resort to RE.

Introducing bn-kconfig-recover

I have released a Binary Ninja plugin, bn-kconfig-recover, to automate recovery of kernel configuration options. Currently, this plugin is able to recover configuration options for general setup, the IRQ subsystem, the timer subsystem, and CPU/Task time and stats accounting. To use the plugin, create a kernel Binary Ninja database (BNDB) populated with symbols for exports from the kernel symbol table. The datavars branch of my bn-kallsyms plugin can be used to help apply symbols from /proc/kallsyms. Other methods for applying symbols exist as well (see the vmlinux-to-elf project). After creating the kernel BNDB, run the script headlessly, supplying the path to the kernel BNDB and the path for the output config file.

Once it is complete, it will create a configuration file containing entries for all supported configuration options.

Plugin Limitations

There are a few limitations to this plugin. First, the plugin is not complete. There are thousands of Linux configuration options. Adding support for all configuration options is a work in progress. I plan to continue adding support for more options one sub-system at a time. I will gladly accept pull requests from community contributors as well. Limitations of the approach itself include:

  • Many of the configuration options are dependent on symbols. The Linux kernel must provide symbols for exported functions and data variables in the kernel symbol table to support loading LKMs. However, if the kernel is built without LKM support (like Android kernels), the kernel doesn’t need to provide symbols and is built without a kernel symbol table. In this scenario, symbols required by bn-kconfig-recover would need to be applied manually in the BNDB. Depending on your use-case this could be a non-starter.
  • There are many kernel versions. This plugin has only been tested on 4.* kernels for x86-64. Development was done using a 4.19 kernel. As development progresses, I will likely need to change config option-specific heuristics to support multiple kernel versions and architectures. For now, there may be false positives when running the plugin on newer 5.* or old kernels (< 3.*).
  • Not all kernel developers follow the rules. Oftentimes, engineering teams make proprietary modifications to the Linux source code. This can cause recovery of certain config options to be inaccurate.


Recovering Linux kernel configurations is one example of many tedious reverse engineering tasks that can be automated. I believe this is a worthwhile pursuit that can aid in many scenarios, including LKM development, kernel exploit development, and interface compatibility work. My Binary Ninja plugin can be found here. If you are interested in this tool, feel free to follow the project, submit issues, and contribute pull requests. Thanks for reading!

Crash Harnessing with Injected Code

There are many approaches to harnessing programs and instrumenting them for crash analysis and memory profiling. Each technique has benefits and drawbacks. Emulation is often the most reliable method but requires the largest sacrifice in performance. Specialized hardware features, such as those found in modern Intel processors, can provide code coverage but don’t necessarily provide the ability to profile memory or monitor heap usage. There are also more advanced techniques, such as binary re-compilation using frameworks like McSema/Remill and Egalito, which lift compiled code to an intermediate representation to apply instrumentation and re-compile. In this blog post, I describe an alternative, yet simple, proof-of-concept to harness and add basic instrumentation to a target program by using a combination of ptrace-based techniques and code injection to profile memory and monitor for crashes. The end result is a crash harness and injected shared object that hooks imported functions to profile dynamic memory and detect scenarios such as heap buffer overflows and use-after-free conditions.

Ich Crash Harness Stack Overflow Detection

Process Trace and LD_PRELOAD

Before diving into more complex implementation details, I’d like to describe the ptrace system call and the LD_PRELOAD trick, two Linux operating system features that I based my design around. ptrace, or process trace, is a Linux system call that aids in debugging a running process. The best-known example of software that uses the ptrace system call is the GNU Debugger (GDB). ptrace allows for attaching to a remote process to trap system calls, write to virtual memory, change register values, and more. LD_PRELOAD is an environment variable that, when set, instructs the Linux dynamic linker to load a shared object from the specified file path before all other imported libraries. An example of software that abuses the LD_PRELOAD trick is the Jynx rootkit.

Writing a Crash Harness

I started by developing a simple crash harness for x86-64 executables. The harness is designed like strace in that you run the harness and the harness runs the target program. This is achieved by forking and allowing the child process to attach to itself using PTRACE_TRACEME before executing the target program. The parent process calls waitpid in a loop to monitor the child’s status.

int main(int argc, char **argv)
{
    int ret = 1;
    int pid;

    if (argc < 2) {
        printf("./ich [cmd]\n");
        return 1;
    }

    if (init_crash_harness())
        return 1;

    pid = fork();
    if (!pid) {
        /* This won't return */
        spawn_process(&argv[1]);
    } else {
        if (!monitor_execution(pid))
            ret = 0;
        ptrace(PTRACE_DETACH, pid, 0, 0);
    }

    return ret;
}

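The fork-and-monitor structure above can be modeled with Python’s standard library. This is a conceptual sketch only: Python’s stdlib has no ptrace binding, so the PTRACE_TRACEME/PTRACE_DETACH steps are omitted, and the sketch shows only how a parent distinguishes termination by signal (a crash) from a normal exit.

```python
import os

def run_and_monitor(argv):
    """Fork, exec the target, and wait for it to exit or crash."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the target program.
        try:
            os.execv(argv[0], argv)
        except OSError:
            os._exit(127)  # exec failed
    # Parent: wait for the child and classify how it ended.
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status):       # e.g. SIGSEGV -> treat as a crash
        return -os.WTERMSIG(status)
    return os.WEXITSTATUS(status)
```

A negative return value signals a crash (the negated signal number), mirroring the harness’ “did the child die by signal?” decision.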
If the parent process receives a SIGSEGV from the child process, it creates a crash dump displaying register values and virtual memory content. It also locates and dumps the base of the ELF by reading from rip into lower memory using PTRACE_PEEKDATA until the ELF header signature is discovered. The crash monitoring functionality is similar to that of many other Linux crash harnesses.
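The backward scan for the ELF base can be sketched as follows. This is a conceptual model, not the harness’ C code: read_mem stands in for PTRACE_PEEKDATA-style reads, and the page-step assumption (the ELF header sits at a page-aligned address below rip) mirrors the description above.

```python
ELF_MAGIC = b'\x7fELF'
PAGE_SIZE = 0x1000

def find_elf_base(read_mem, rip):
    """Walk down from rip one page at a time until the ELF magic appears.

    read_mem(addr, size) -> bytes mimics reading the debuggee's memory.
    """
    addr = rip & ~(PAGE_SIZE - 1)  # start at the page containing rip
    while addr > 0:
        if read_mem(addr, 4) == ELF_MAGIC:
            return addr
        addr -= PAGE_SIZE
    return None
```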

LD_PRELOAD Code Injection

After writing the core of the crash harness, I focused on adding functionality to assist in injecting a shared object into the target program at runtime using the LD_PRELOAD trick. This trick is fairly simple and can be carried out from bash by executing a command similar to the line below.

$ export LD_PRELOAD=/path/to/ && ./some_program

To avoid having to define the LD_PRELOAD environment variable manually during each run, I added LD_PRELOAD to environ (the harness’ environment) and linked in the shared object using .incbin. When running the harness, the shared object (described in the next section) is written to disk at a temporary path. The harness code that spawns the target process is below.

static void spawn_process(char **argv)
{
    char **env = NULL;
    char preload_env[256];
    size_t i = 0;

    memset(preload_env, '\0', sizeof(preload_env));
    snprintf(preload_env, sizeof(preload_env),
             "LD_PRELOAD=%s", HOOK_LIB_PATH);
    info("Setting up the environment: %s", preload_env);

    /* Get count */
    while (environ[i] != NULL)
        i++;
    env = (char **)malloc((i + 2) * sizeof(char *));

    /* Copy the environment variables */
    i = 0;
    while (environ[i] != NULL) {
        env[i] = environ[i];
        i++;
    }

    /* Append LD_PRELOAD */
    env[i] = preload_env;
    env[i+1] = NULL;

    info("Executing process (%s) ...\n", argv[0]);
    ptrace(PTRACE_TRACEME, 0, NULL, NULL);
    kill(getpid(), SIGSTOP);
    execve(argv[0], argv, env);

    /* execve only returns on failure */
    err("Failed to execute binary");
}

Instrumentation Payload (Shared Object)

At this point, I had a crash harness capable of pre-loading a shared object into the debuggee and monitoring the debuggee for crashes. Next, I focused on developing the shared object that gets injected into the debuggee to hook libc imports and profile memory. I started by writing functions with names and prototypes identical to the libc imports I wanted to hook. Remember, the harness executes the target program with LD_PRELOAD, which tells the dynamic linker to link in this shared object before libc (and any other library). The dynamic linker fills in the global offset table with the addresses of the pre-loaded shared object’s versions of the target functions. As such, when the program calls malloc, it executes the pre-loaded shared object’s malloc, which reserves 8 bytes of additional memory at the beginning and end of the allocation to write tags (a known sequence of random bytes). It does this by adding 16 bytes to the size parameter before calling the real libc:malloc, which it resolves using the following macro:

#define LOAD_SYM(sym, type, name) {         \
    if (!sym) {                             \
        sym = (type)dlsym(RTLD_NEXT, name); \
        if (!sym)                           \
            FAIL();                         \
    }                                       \
}

When libc:malloc returns, my malloc writes an 8-byte tag at the beginning and end of the allocation. It also stores metadata on the tagged allocation in a global linked list. Then, it increments the return pointer by 8 bytes (past the start tag) and returns to the target program. Likewise, my free iterates through the linked list to see if the allocation is tagged. If it is, it decrements the pointer by 8 bytes and calls libc:free to properly free the allocation.
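To make the pointer arithmetic concrete, here is a small Python model of the tag layout. The helper and field names are illustrative only; the real implementation is the injected C library described above.

```python
TAG_LEN = 8  # bytes reserved for each tag

def tagged_layout(size):
    """Offsets within a tagged allocation of `size` user bytes."""
    return {
        'request': size + 2 * TAG_LEN,  # size passed to libc:malloc
        'start_tag': 0,                 # leading 8-byte tag
        'user_ptr': TAG_LEN,            # pointer returned to the program
        'end_tag': TAG_LEN + size,      # trailing 8-byte tag
    }
```

For a 32-byte request, libc:malloc is asked for 48 bytes, the program receives a pointer 8 bytes into the block, and the trailing tag begins at offset 40.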

In addition to malloc and free, the pre-loaded shared object also hooks copy imports such as memcpy, strncpy, and more. My memcpy calls libc:memcpy and then iterates through the global linked list containing metadata on tagged allocations and checks whether any tags have been altered. If a tag is altered, the pre-loaded library forcefully crashes the program by executing an illegal instruction. This causes the crash harness to emit a crash dump.
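The tag verification performed by the hooked copy functions can be modeled as follows. Again, this is a Python illustration with an arbitrary example tag value, not the injected C code.

```python
TAG = b'\xde\xad\xbe\xef' * 2  # example 8-byte tag value

def find_corrupted(allocations):
    """Return names of allocations whose leading or trailing tag changed.

    `allocations` is a list of (name, buf) pairs, where buf holds the
    full tagged block: [tag][user bytes][tag].
    """
    return [name for name, buf in allocations
            if buf[:8] != TAG or buf[-8:] != TAG]
```

An off-by-one write past the user region clobbers the trailing tag, so the next hooked copy call flags that allocation as corrupted.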


Limitations

There are many limitations to this approach. For example, dynamic memory can be modified without using imported functions. If this occurs and a tag is tainted, the instrumentation library will not detect it until the next time a hooked import is called by the program. Moreover, an OOB write could occur before the start tag or after the end tag, in which case the program could continue to run unless it causes additional memory corruption or an access violation. Also, the LD_PRELOAD trick does not work for statically linked executables, and some dynamic linkers (such as Android’s) don’t support it. Limitations aside, this approach can be adapted for other use cases, such as visualizing heaps or altering execution of a program for debugging purposes. It can also be used in combination with other binary instrumentation techniques.

All code described in this blog post is contained in my crash harness, ich, which can be found here. Thank you for reading!