Letting Go of Ugly Code (Wayne Warren, 2018-09-08)
<p>Once upon a time, I was a junior systems engineer at a PCB manufacturer that made
heavy use of custom Linux kernels and filesystems. Part of the job involved
cross-compiling <a class="footnote-reference" href="#id8" id="id1">[1]</a> Linux kernels on a desktop computer; the kernels would then run on
single board computers (SBCs) <a class="footnote-reference" href="#id9" id="id2">[2]</a> and system-on-modules (SOMs) <a class="footnote-reference" href="#id10" id="id3">[3]</a>.</p>
<p>Without getting too mired in the details, let's just say that building multiple
versions of software (Linux in this case) targeting a variety of hardware
platforms quickly became a messy job: it was hard to keep track of where I kept
the numerous build artifacts, what changes I had made and for what purpose, and
whether or not the build artifacts were indeed built from the latest copy of the
corresponding source code.</p>
<p>The use of version control software <a class="footnote-reference" href="#id11" id="id4">[4]</a> mitigated this to a degree, but I
ultimately ended up writing a kludgy, difficult-to-understand bash script <a class="footnote-reference" href="#id12" id="id5">[5]</a>
that wrapped the Linux kernel's <a class="reference external" href="https://en.wikipedia.org/wiki/Make_(software)">Makefile</a>, making use of command line
parameters to produce repeatable results depending on my goal at the
time. This allowed me to work relatively quickly and reliably reproduce
previous builds or iterate on changes to source code or configuration.</p>
<p>Eventually I left this job for reasons. But I didn't stop building kernels on a
semi-regular basis. I kind of got in the habit of customizing and building
kernels for my personal computers, even though it's arguably just a nerdy
obsessive/compulsive habit left over from a time when I was determined to
saturate my life with Linux in order to develop a solid foundation and drive my
career as a software engineer in a direction that suited my interests.</p>
<p>If it's not obvious at this point, I'm not just randomly rambling. I'm going
somewhere with this. Today I found a need to rebuild the kernel for one of my
personal computers. After not doing this since around the time a couple <a class="reference external" href="https://meltdownattack.com/">major
Intel vulnerabilities</a> were announced earlier this year, I approached the task
with some trepidation, since the scripts I wrote back in 2011 had had plenty of time to bit
rot <a class="footnote-reference" href="#id13" id="id6">[6]</a>.</p>
<p>It's not even worth showing those scripts here. Just trust me when I say that I
made things more complicated for myself than they really needed to be, and
there was no small amount of naivego <a class="footnote-reference" href="#id14" id="id7">[7]</a> involved. To paint a quick outline,
these scripts consisted of:</p>
<ul class="simple">
<li>Two files: an executable script and a library of bash functions</li>
<li>501 aggregate lines of code</li>
<li>A mildly complex and undocumented directory structure for build artifacts</li>
</ul>
<p>So what's my point? At this point maybe I should just admit I'm aimlessly
rambling. Or maybe the point is that I couldn't get the scripts to work the way
I wanted, and they were so convoluted that fixing them wasn't worth the effort
relative to my actual goal of building a working kernel for my desktop computer. The
point is that today I finally let go of all that shit.</p>
<p>For my occasional use case of building Linux kernels targeting various
computers around my house the following much more succinct wrapper Makefile is
more than sufficient:</p>
<pre class="code Makefile literal-block">
<span class="nv">HOSTNAME</span> <span class="o">?=</span> <span class="k">$(</span>shell hostname<span class="k">)</span>
<span class="nv">LOCAL_UPSTREAM</span> <span class="o">?=</span> <span class="k">$(</span>shell <span class="nb">pwd</span><span class="k">)</span>/src/linux
<span class="nv">TAG</span> <span class="o">?=</span> v4.19-rc2
<span class="nv">BUILDS_DIR</span> <span class="o">?=</span> <span class="k">$(</span>shell <span class="nb">pwd</span><span class="k">)</span>/build/<span class="k">$(</span>HOSTNAME<span class="k">)</span>
<span class="nv">BUILD_DIR</span> <span class="o">?=</span> <span class="k">$(</span>shell <span class="nb">pwd</span><span class="k">)</span>/build/<span class="k">$(</span>HOSTNAME<span class="k">)</span>/linux-<span class="k">$(</span>TAG<span class="k">)</span>
<span class="nv">SRC_DIR</span> <span class="o">?=</span> <span class="k">$(</span>BUILD_DIR<span class="k">)</span>/src
<span class="nv">CONFIG</span> <span class="o">?=</span> /boot/config-<span class="k">$(</span>shell uname -r<span class="k">)</span>
<span class="nv">THREADS</span> <span class="o">?=</span> <span class="nv">$$</span><span class="o">((</span><span class="nv">$$</span><span class="o">(</span>grep processor /proc/cpuinfo <span class="p">|</span> wc -l<span class="o">)</span>*2<span class="o">))</span>
<span class="nv">MAKE_ARGS</span> <span class="o">:=</span> -j <span class="k">$(</span>THREADS<span class="k">)</span> -C <span class="k">$(</span>SRC_DIR<span class="k">)</span>
<span class="cp">-include $(SRC_DIR)/version.mk
</span>
<span class="nv">VER</span> <span class="o">:=</span> <span class="k">$(</span>VERSION<span class="k">)</span>.<span class="k">$(</span>PATCHLEVEL<span class="k">)</span>.<span class="k">$(</span>SUBLEVEL<span class="k">)$(</span>EXTRAVERSION<span class="k">)</span>
<span class="nf">fetch</span><span class="o">:</span>
git -C <span class="k">$(</span>LOCAL_UPSTREAM<span class="k">)</span> fetch
<span class="nf">$(BUILDS_DIR)</span><span class="o">:</span>
mkdir -p <span class="k">$(</span>BUILDS_DIR<span class="k">)</span>
<span class="nf">$(SRC_DIR)</span><span class="o">:</span> <span class="p">|</span> <span class="k">$(</span><span class="nv">BUILDS_DIR</span><span class="k">)</span> <span class="n">fetch</span>
git clone --local -b <span class="k">$(</span>TAG<span class="k">)</span> <span class="k">$(</span>LOCAL_UPSTREAM<span class="k">)</span> <span class="k">$(</span>SRC_DIR<span class="k">)</span> <span class="o">||</span> <span class="nb">exit</span> <span class="m">0</span>
<span class="nf">$(SRC_DIR)/version.mk</span><span class="o">:</span> <span class="p">|</span> <span class="k">$(</span><span class="nv">SRC_DIR</span><span class="k">)</span>
head -n <span class="m">5</span> <span class="k">$(</span>SRC_DIR<span class="k">)</span>/Makefile <span class="p">|</span> tail -n <span class="m">4</span> > <span class="k">$(</span>SRC_DIR<span class="k">)</span>/version.mk
<span class="k">$(</span><span class="nb">eval</span> VER :<span class="o">=</span> <span class="k">$(</span>VERSION<span class="k">)</span>.<span class="k">$(</span>PATCHLEVEL<span class="k">)</span>.<span class="k">$(</span>SUBLEVEL<span class="k">)$(</span>EXTRAVERSION<span class="k">))</span>
<span class="nf">$(SRC_DIR)/.config</span><span class="o">:</span> <span class="p">|</span> <span class="k">$(</span><span class="nv">SRC_DIR</span><span class="k">)</span>
cp <span class="k">$(</span>CONFIG<span class="k">)</span> <span class="k">$(</span>SRC_DIR<span class="k">)</span>/.config
<span class="k">$(</span>MAKE<span class="k">)</span> <span class="k">$(</span>MAKE_ARGS<span class="k">)</span> olddefconfig
<span class="nf">$(BUILD_DIR)/linux-$(VER)_$(VER).orig.tar.gz</span><span class="o">:</span> <span class="k">$(</span><span class="nv">SRC_DIR</span><span class="k">)</span>/.<span class="n">config</span>
<span class="k">$(</span>MAKE<span class="k">)</span> <span class="k">$(</span>MAKE_ARGS<span class="k">)</span> deb-pkg
touch <span class="k">$(</span>BUILD_DIR<span class="k">)</span>/linux-<span class="k">$(</span>VER<span class="k">)</span>_<span class="k">$(</span>VER<span class="k">)</span>.orig.tar.gz
<span class="nf">build</span><span class="o">:</span> <span class="k">$(</span><span class="nv">BUILD_DIR</span><span class="k">)</span>/<span class="n">linux</span>-<span class="k">$(</span><span class="nv">VER</span><span class="k">)</span><span class="n">_</span><span class="k">$(</span><span class="nv">VER</span><span class="k">)</span>.<span class="n">orig</span>.<span class="n">tar</span>.<span class="n">gz</span>
<span class="nf">.PHONY</span><span class="o">:</span> <span class="n">install</span>
<span class="nf">install</span><span class="o">:</span> <span class="n">build</span>
sudo dpkg -i <span class="k">$(</span>BUILD_DIR<span class="k">)</span>/linux-libc-dev_<span class="k">$(</span>VER<span class="k">)</span>-1_amd64.deb
sudo dpkg -i <span class="k">$(</span>BUILD_DIR<span class="k">)</span>/linux-headers-<span class="k">$(</span>VER<span class="k">)</span>_<span class="k">$(</span>VER<span class="k">)</span>-1_amd64.deb
sudo dpkg -i <span class="k">$(</span>BUILD_DIR<span class="k">)</span>/linux-image-<span class="k">$(</span>VER<span class="k">)</span>_<span class="k">$(</span>VER<span class="k">)</span>-1_amd64.deb
<span class="nf">.PHONY</span><span class="o">:</span> <span class="n">clean</span>
<span class="nf">clean</span><span class="o">:</span>
rm -rf <span class="k">$(</span>BUILD_DIR<span class="k">)</span>
</pre>
<p>That's fewer than 50 lines of code, yet it probably addresses 90% of my use cases in
terms of being able to quickly manage builds and source code for multiple
versions of Linux built for various computers. It's maybe worth mentioning that,
for all the time I spent earlier in this post rambling about cross-compiling
kernels targeting different CPU architectures, I don't actually do any of that
these days and neither does this new Makefile.</p>
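<p>For the record, here's roughly how I expect to use it. The defaults assume a local
clone of the kernel tree at <cite>src/linux</cite> next to this Makefile and seed the
config from the running kernel's <cite>/boot</cite> config; any variable can be
overridden on the command line. The <cite>mediabox</cite> hostname and config path in
the second example are made-up placeholders rather than real machines of mine:</p>
<pre class="literal-block">
# build the default tag (v4.19-rc2) for this machine, then install the packages
make build
make install

# build for another machine, seeding from a saved config (hypothetical names)
make build HOSTNAME=mediabox CONFIG=configs/mediabox.config

# throw away this machine's build tree for the current tag
make clean
</pre>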
<p>Of course this is just a first draft and will probably grow larger over time as
I think of more ways to streamline my kernel build and install workflows. Some
improvements I plan to make sooner or later:</p>
<ul class="simple">
<li>Add an <cite>upload</cite> target that uploads the resulting source code snapshot and
Debian packages to a file server on my personal VPN (a rough sketch follows this
list).</li>
<li>Add an <cite>update-ansible-vars</cite> target that updates variables in the Ansible
playbooks I use to configure my personal computers.</li>
</ul>
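<p>For the <cite>upload</cite> target I'm imagining something like the sketch below.
The <cite>files.internal</cite> host and the <cite>/srv/kernels</cite> path are
placeholders for whatever the file server on my VPN ends up being called, so treat
this as a guess at the shape rather than a finished recipe:</p>
<pre class="literal-block">
.PHONY: upload
upload: build
	# files.internal and /srv/kernels are placeholder names, not a real host or path
	rsync -av $(BUILD_DIR)/linux-$(VER)_$(VER).orig.tar.gz \
		$(BUILD_DIR)/*.deb \
		files.internal:/srv/kernels/$(HOSTNAME)/
</pre>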
<p>That's all I've got to say about that.</p>
<table class="docutils footnote" frame="void" id="id8" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id1">[1]</a></td><td>Cross-compiling in the software development world refers to the act of
compiling software on one type of computer that is meant for a different
type of computer. This is a gross oversimplification because, as much as I
would like to, I'm not trying to learn you all there is to know about
computering. As an analogy, consider that just as not all forms of
transportation run on the same type of fuel (e.g. diesel, gasoline/petrol,
rocket fuel, electricity, human leg power, wind, etc), not all computers
can run software compiled by other computers.</td></tr>
</tbody>
</table>
<table class="docutils footnote" frame="void" id="id9" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id2">[2]</a></td><td>A single board computer is a PCB with all essential components of a
computer soldered on, typically used for industrial or embedded
consumer/commercial applications, e.g. Raspberry Pis, the computer(s) in your
car, the computer in your cell phone, etc.</td></tr>
</tbody>
</table>
<table class="docutils footnote" frame="void" id="id10" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id3">[3]</a></td><td>A system-on-module is similar to a single board computer except that
there are typically at least two modular components: a carrier board with
application-specific circuitry and a "SOM" with CPU, RAM, and a few other
essential peripheral integrated circuits.</td></tr>
</tbody>
</table>
<table class="docutils footnote" frame="void" id="id11" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id4">[4]</a></td><td>Version control in software development refers to a way of using
well-structured information about a set of source code files to preserve
their history as changes are made by software developers.</td></tr>
</tbody>
</table>
<table class="docutils footnote" frame="void" id="id12" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id5">[5]</a></td><td>During the course of system administration and software development, IT
professionals will write "scripts" that are somewhat analogous to
the scripts that actors act out; only instead of actors, computers perform
the activities described by these (usually) text files provided that the
file is formatted correctly and the system running the scripts meets all
the oftentimes implicit requirements (i.e., all the expected commands and
files are available).</td></tr>
</tbody>
</table>
<table class="docutils footnote" frame="void" id="id13" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id6">[6]</a></td><td>Bit rot in software development lingo describes the phenomenon of a
specific piece of software becoming less useful and more unwieldy or prone
to failure without regular, frequent maintenance and usage as the overall
computer software and hardware ecosystem steadily advances ahead of it.</td></tr>
</tbody>
</table>
<table class="docutils footnote" frame="void" id="id14" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id7">[7]</a></td><td>A little-known portmonteau of "naive" and "ego" that particularly
applies in a situation where someone fancies their self clever when doing a
thing but is not yet familiar enough with the overall context in which they
are doing it to understand that they are foolishly disregarding the
possibility of better ways.</td></tr>
</tbody>
</table>
Developing with Flask and AWS Lambda: Intro (Wayne Warren, 2017-09-15)
<p>This is the first in a series of blog posts intended to examine some challenges
I encountered while writing a RESTful key-value store application for the coding
exercise portion of a job interview. Each post in the series will focus on a
specific challenge I encountered by describing:</p>
<ul class="simple">
<li>the problem</li>
<li>my approach to solving the problem, including alternative solutions considered</li>
<li>specifics of and rationale behind the solution I chose</li>
<li>additional problems, if any, that arose from the chosen solution</li>
</ul>
<p>But first, I should begin with some background. A couple weeks ago I began
interviewing with a development team that was looking for a Python developer;
they wanted experience with or the capability of learning how to write, test,
and deploy AWS Lambda-based applications written in Python. During the initial
phone screen with the team lead, I did my best to make the following clear:</p>
<ul class="simple">
<li>I have relatively little experience with AWS in general</li>
<li>I had no experience whatsoever with AWS API Gateway or AWS Lambda</li>
<li>My background largely consisted of "DevOps"/SRE/automation work, although
I have made solid efforts over the past few years to transition into more of a
software development role.</li>
</ul>
<p>Despite such admonitions regarding my lack of experience, after chatting with
the team lead, I was sent a coding challenge intended to gauge both my
preexisting Python development experience as well as my ability to wrap my head
around AWS API Gateway and AWS Lambda.</p>
<p>As a side note, I'm personally not too keen on giving in to the growing
concentration of computational power in the hands of giants like Google, Microsoft, AWS, etc.
However, I understand that the fast-paced, competitive nature of the business
world doesn't hold space for consideration of the kind of personal ideals that
lead to concerns like mine; and realistically, the job market will shift one way
or another with or without my personal approval. I'll make a note to address
this in a maybe more meandering quasi-philosophical rant blag post another day.</p>
<p>That said, I did see this as an opportunity to practice my software design and
programming skills; I love coding challenges as part of interview processes
precisely for this reason. (I suspect my attitude would be different if I were
working full time while interviewing for jobs; but this summer I have been
full-time job searching and this is the third coding challenge to which I have
committed serious, concerted effort.)</p>
<p>Anyway, the requirements for this app were as follows:</p>
<pre class="literal-block">
Implement the requirements below to create a basic key/value service:
* uses Python
* create a RESTful API, we suggest using AWS Lambda or Flask but use
whatever you are comfortable with.
* the route should be something like /key but should be versionable
* show some example uses of the service; use cases:
* user should be able to get all keys/values
* user should be able to get a specific key/value
* user should be able to add a key/value
* user should be able to update a key/value
* user should be able to delete a key/value
* enable the use of 2 different backing stores of your choice. they can
use real data stores or be mocked out to represent. which one is used
should be determined via configuration
* demonstrate asynchronous handling
* simultaneous (make 2 or more calls that are processed asynchronously and
when all calls complete results are compiled to a single result object
which is returned)
* chained (make a call the result of which informs a subsequent call)
* The code should be runnable and have some form of demonstration. For
example, a user would add a key of 'sports', its value the list of
'baseball', 'hockey' and 'football'.
* The code should have automated tests
* Share via code repository or zip (repo preferred)
</pre>
<p>Here's the solution I came up with: <a class="reference external" href="https://bitbucket.org/waynr/simple-kvstore">https://bitbucket.org/waynr/simple-kvstore</a></p>
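<p>To make those requirements a little more concrete, here is roughly how I'd exercise
such a service from the command line. The <cite>/v1/key</cite> route, port, and JSON
shapes below are illustrative guesses at a versioned API, not necessarily what the
repository linked above actually exposes:</p>
<pre class="literal-block">
# add a key/value
curl -X POST http://localhost:5000/v1/key \
    -H "Content-Type: application/json" \
    -d '{"key": "sports", "value": ["baseball", "hockey", "football"]}'

# get all keys/values
curl http://localhost:5000/v1/key

# get a specific key/value
curl http://localhost:5000/v1/key/sports

# update a key/value
curl -X PUT http://localhost:5000/v1/key/sports \
    -H "Content-Type: application/json" \
    -d '{"value": ["baseball", "hockey", "football", "curling"]}'

# delete a key/value
curl -X DELETE http://localhost:5000/v1/key/sports
</pre>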
<p>The key-value store app itself is really nothing to get excited about. What I
think is most interesting about it and where I put the most amount of effort is
in the test fixture setup. With that in mind, this series of blog posts (of
which this is the first) will largely cover my approach to testing the
application, with references to the application code itself kept to a minimum.
Tentatively, this series will cover the following topics on a rough weekly
publication schedule:</p>
<ul class="simple">
<li><strong>Week 0</strong>: Developing with Flask and AWS Lambda: Intro (this post)</li>
<li><strong>Week 1</strong>: Project management; technology stack selection; initial project
setup</li>
<li><strong>Week 2</strong>: Analyzing the requirements for testability; test early, test often</li>
<li><strong>Week 3</strong>: Abstractions: the lifeblood of reusable code and other such musings</li>
<li><strong>Week 4</strong>: Pytest fixtures: now with hacky multiple dispatch</li>
<li><strong>Week 5</strong>: Adapting WebTest and WSGIProxy2 to my way of thinking</li>
<li><strong>Week 6</strong>: Hacking the Zappa CLI for fun and profit</li>
</ul>
<p>And that's it for this post! Apologies for not yet having set up any kind of
comment system on this blag. Maybe I'll work on that in the near future...</p>
First Post (Wayne Warren, 2017-08-09)
<p>Welcome to my first blog post. I don't quite know what I'll write about yet, but
I imagine it may include any/everything from movie, book, music, and open source
software reviews to random philosophical musings and commentary on current
events.</p>