Once upon a time, I was a junior systems engineer at a PCB manufacturer that made heavy use of custom Linux kernels and filesystems. Part of the job involved cross-compiling Linux kernels on a desktop computer to be run on single board computers (SBCs) and system-on-modules (SOMs).
Without getting too mired in the details, let's just say that building multiple versions of software (Linux in this case) targeting a variety of hardware platforms quickly became somewhat of a dirty job. It was hard to remember where I kept the numerous build artifacts, what changes I had made for what purpose, and whether or not the build artifacts were indeed built from the latest copy of the corresponding source code.
The use of version control software mitigated this to a degree, but I ultimately ended up writing a kludgy, difficult-to-understand bash script that wrapped the Linux kernel's Makefile, using command line parameters to produce consistent results depending on my goal at the time. This allowed me to work relatively quickly and reliably reproduce previous builds or iterate on changes to source code or configuration.
Eventually I left this job for reasons. But I didn't stop building kernels on a semi-regular basis. I kind of got in the habit of customizing and building kernels for my personal computers, even though it's arguably just a nerdy obsessive/compulsive habit left over from a time when I was determined to saturate my life with Linux in order to develop a solid foundation and drive my career as a software engineer in a direction that suited my interests.
If it's not obvious at this point, I'm not just randomly rambling. I'm going somewhere with this. Today I found a need to rebuild the kernel for one of my personal computers. After not doing this since around the time a couple of major Intel vulnerabilities were announced earlier this year, I approached the task somewhat trepidatiously, as the scripts I wrote back in 2011 are wont to bit rot.
It's not even worth showing those scripts here. Just trust me when I say that I made things more complicated for myself than they really needed to be, and there was no small amount of naivego involved. To paint a quick outline, these scripts consisted of:
- 2 files: an executable script and a library of bash functions
- 501 aggregate lines of code
- A mildly complex and undocumented directory structure for build artifacts
So what's my point? At this point maybe I should just admit I'm aimlessly rambling. Or maybe the point is that I couldn't get the scripts to work the way I wanted and they were so convoluted that the task of fixing them wasn't even worth my overall goal of building a working kernel for my desktop computer. The point is that today I finally let go of all that shit.
For my occasional use case of building Linux kernels targeting various computers around my house, the following much more succinct wrapper Makefile is more than sufficient:
```make
HOSTNAME ?= $(shell hostname)
LOCAL_UPSTREAM ?= $(shell pwd)/src/linux
TAG ?= v4.19-rc2
BUILDS_DIR ?= $(shell pwd)/build/$(HOSTNAME)
BUILD_DIR ?= $(shell pwd)/build/$(HOSTNAME)/linux-$(TAG)
SRC_DIR ?= $(BUILD_DIR)/src
CONFIG ?= /boot/config-$(shell uname -r)
THREADS ?= $$(($$(grep processor /proc/cpuinfo | wc -l)*2))
MAKE_ARGS := -j $(THREADS) -C $(SRC_DIR)

-include $(SRC_DIR)/version.mk
VER := $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)

fetch:
	git -C $(LOCAL_UPSTREAM) fetch

$(BUILDS_DIR):
	mkdir -p $(BUILDS_DIR)

$(SRC_DIR): | $(BUILDS_DIR) fetch
	git clone --local -b $(TAG) $(LOCAL_UPSTREAM) $(SRC_DIR) || exit 0

$(SRC_DIR)/version.mk: | $(SRC_DIR)
	head -n 5 $(SRC_DIR)/Makefile | tail -n 4 > $(SRC_DIR)/version.mk
	$(eval VER := $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION))

$(SRC_DIR)/.config: | $(SRC_DIR)
	mkdir -p $(SRC_DIR)/output
	cp $(CONFIG) $(SRC_DIR)/output/.config
	$(MAKE) $(MAKE_ARGS) olddefconfig

$(BUILD_DIR)/linux-$(VER)_$(VER).orig.tar.gz: $(SRC_DIR)/.config
	$(MAKE) $(MAKE_ARGS) deb-pkg
	touch $(BUILD_DIR)/linux-$(VER)_$(VER).orig.tar.gz

build: $(BUILD_DIR)/linux-$(VER)_$(VER).orig.tar.gz

.PHONY: install
install: build
	sudo dpkg -i $(BUILD_DIR)/linux-libc-dev_$(VER)-1_amd64.deb
	sudo dpkg -i $(BUILD_DIR)/linux-headers-$(VER)_$(VER)-1_amd64.deb
	sudo dpkg -i $(BUILD_DIR)/linux-image-$(VER)_$(VER)-1_amd64.deb

.PHONY: clean
clean:
	rm -rf $(BUILD_DIR)
```
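To make the `version.mk` trick above a little more concrete: the top of a v4.19-era kernel Makefile declares `VERSION`, `PATCHLEVEL`, `SUBLEVEL`, and `EXTRAVERSION` as plain variable assignments just below a one-line comment, which is why `head -n 5 | tail -n 4` slices out exactly those four lines as valid Makefile syntax. Here's a minimal sketch of that slice against a mock file (the mock contents are an assumption for illustration, not real kernel source):

```shell
#!/bin/sh
set -e

# Mock the first five lines of a v4.19-era kernel Makefile.
# (This layout is an assumption for illustration; check your tree.)
workdir=$(mktemp -d)
cat > "$workdir/Makefile" <<'EOF'
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc2
EOF

# The same slice the wrapper Makefile writes out as version.mk, which
# -include then pulls in so VER expands to "4.19.0-rc2".
head -n 5 "$workdir/Makefile" | tail -n 4 > "$workdir/version.mk"
cat "$workdir/version.mk"
```

Note that `-include` (with the leading dash, as opposed to plain `include`) tolerates `version.mk` not existing yet on a fresh checkout, so the first run doesn't die before the file has been generated.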
That's only 48 lines of code, yet it probably addresses 90% of my use cases in terms of being able to quickly manage builds and source code for multiple versions of Linux built for various computers. Maybe worth mentioning: for all the time I spent earlier in this post rambling about cross-compiling kernels targeting different CPU architectures, I don't actually do any of that these days, and neither does this new Makefile.
Of course this is just a first draft and will probably grow larger over time as I think of more ways to streamline my kernel build and install workflows. Some improvements I plan to make sooner or later:
- Add an `upload` target that uploads the resulting source code snapshot and Debian packages to a file server on my personal VPN.
- Add an `update-ansible-vars` target that updates variables in the Ansible playbooks I use to configure my personal computers.
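The first of those could be a small addition to the wrapper Makefile along these lines. This is only a hypothetical sketch: `FILE_SERVER` and the destination path are placeholder names, not real infrastructure, and it assumes `rsync` over SSH to the VPN host.

```make
# Hypothetical sketch of the planned upload target.
# FILE_SERVER and UPLOAD_DEST are placeholders, not real hosts/paths.
FILE_SERVER ?= files.example.internal
UPLOAD_DEST ?= kernels/$(HOSTNAME)

.PHONY: upload
upload: build
	rsync -av \
		$(BUILD_DIR)/linux-*_$(VER)-1_amd64.deb \
		$(BUILD_DIR)/linux-$(VER)_$(VER).orig.tar.gz \
		$(FILE_SERVER):$(UPLOAD_DEST)/
```

Making `upload` depend on `build` keeps the workflow a single command: `make upload` rebuilds anything stale and then ships the packages.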
That's all I've got to say about that.
|||Cross-compiling in the software development world refers to the act of compiling software on one type of computer that is meant to run on a different type of computer. This is a gross oversimplification because, as much as I would like to, I'm not trying to learn you all there is to know about computering. As an analogy, consider that just like not all forms of transportation run on the same type of fuel (e.g. diesel, gasoline/petrol, rocket fuel, electricity, human leg power, wind, etc.), not all computers can run software compiled by other computers.|
|||A single board computer is a PCB with all essential components of a computer soldered on, typically used for industrial or embedded consumer/commercial applications, e.g. Raspberry Pis, the computer(s) in your car, the computer in your cell phone, etc.|
|||A system-on-module is similar to a single board computer except that there are typically at least two modular components: a carrier board with application-specific circuitry, and a "SOM" with CPU, RAM, and a few other essential peripheral integrated circuits.|
|||Version control in software development refers to a way of using well-structured information about a set of source code files to preserve their history as changes are made by software developers.|
|||During the course of system administration and software development, IT professionals write "scripts" that are somewhat analogous to the scripts that actors act out; only instead of actors, computers perform the activities described by these (usually) text files, provided that the file is formatted correctly and the system running the script meets all the oftentimes implicit requirements (i.e. all the expected commands and files are available).|
|||Bit rot in software development lingo describes the phenomenon of a specific piece of software becoming less useful and more unwieldy or prone to failure without regular, frequent maintenance and usage, as the overall computer software and hardware ecosystem steadily advances ahead of it.|
|||A little-known portmanteau of "naive" and "ego" that particularly applies in a situation where someone fancies themselves clever when doing a thing, but is not yet familiar enough with the overall context in which they are doing it to understand that they are foolishly disregarding the possibility of better ways.|