<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.2.2">Jekyll</generator><link href="https://packman.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://packman.io/" rel="alternate" type="text/html" /><updated>2025-04-02T14:11:51+00:00</updated><id>https://packman.io/feed.xml</id><title type="html">Jesse Portnoy</title><subtitle>A freelance multidisciplinary programmer (they call it full-stack nowadays but I prefer the former term).</subtitle><author><name>Jesse Portnoy</name></author><entry><title type="html">Recruitment - overhaul required, urgently (part II)</title><link href="https://packman.io/2025/02/05/Recruitement-overhaul-required-1.html" rel="alternate" type="text/html" title="Recruitment - overhaul required, urgently (part II)" /><published>2025-02-05T15:47:06+00:00</published><updated>2025-02-05T15:47:06+00:00</updated><id>https://packman.io/2025/02/05/Recruitement-overhaul-required-1</id><content type="html" xml:base="https://packman.io/2025/02/05/Recruitement-overhaul-required-1.html"><![CDATA[<p>This is the second instalment in a series. In <a href="https://packman.io/2025/01/25/Recruitement-overhaul-required.html">part I</a>, I covered the importance of providing a proper job description. This part will focus on the current approach towards resumes and why it so painfully fails to serve the end goal.</p>

<h3 id="the-resume-format">The resume format</h3>

<p>While the means of initial filtering has changed drastically (from manual review by a competent human to automatic parsing by software), the format has not changed at all and, worse still, there’s no universally adhered-to standard to use for the parsing/data extraction process.</p>

<p>Let’s break this problem down into its core components:</p>

<p><strong>The format is very loose and clearly aims to accommodate human thought and analysis patterns, not those of computer algorithms.</strong></p>

<p>When I say the format is “loose”, I am referring to the fact that, while there’s an abundance of (sometimes contradicting) guidelines, there’s no standard. In particular (and this is important for automated parsing), there’s no unified layout and there are several different common file formats, each with its own sub-formats.</p>

<p>PDF (Portable Document Format) is one such common “format”. Here’s a short blurb from <a href="https://en.wikipedia.org/wiki/PDF">Wikipedia</a>:</p>

<p><em>PDF files may contain a variety of content besides flat text and graphics including logical structuring elements, interactive elements such as annotations and form-fields, layers, rich media (including video content), three-dimensional objects using U3D or PRC, and various other data formats.</em></p>

<p>My own resume is a PDF generated using LaTeX, from a template I found in this <a href="https://github.com/posquit0/Awesome-CV/tree/master/examples/cv">Git repo</a>. I am also a co-maintainer of <a href="https://github.com/solworktech/mdtopdf">mdtopdf</a> - a project that produces PDFs from Markdown.</p>

<p>PDFs are easy to produce and look good to the human eye and, as Wikipedia puts it, PDF is <em>“a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software”</em>. The “independent of application software” bit is rather important because it means I do not have to use MS Word (by the way, there are several, very different, MS Word formats, since not all versions use the same format by default) or, more relevant to my case (since I use Linux exclusively on all my personal equipment), LibreOffice (in the hope that it will look okay on whatever MS Word version the recruiter opens it with).</p>

<p>Let’s focus on this bit from the last paragraph: <strong>look good to the human eye</strong>.</p>

<p>Allow me to regale you with a short quote I encountered in <a href="https://www2.seas.gwu.edu/~kaufman1/FortranColoringBook/FORTRAN%20Coloring%20Book%20Full.pdf">The FORTRAN colouring book</a> by Dr. Roger Emanuel Kaufman:</p>

<blockquote>
  <p>Before you try to solve a problem using a computer, it’s only fair that I tell you a few things about what computers are like.</p>
</blockquote>

<blockquote>
  <p>Once knew a fellow who called up a computer dating service, Whey tried to fix him up with an IBM machine. Needless to
say, it didn’t work out, He was witty, handsome, a dashing bon vivant raconteur - terribly debonair. The computer was dumb, unimaginative, totally lacking in creativity.
He spoke fluent Francais - the machine only understood FORTRAN. The poor computer couldn’t follow his exuberant dialogue - it could comprehend only the simplest of grammatical constructions, of course, it didn’t work out. For one thing, the computer was a lousy cha cha dancer. For another thing, there was a difference of religion. The computer thought it was God and he was upset about how they’d raise the children.</p>
</blockquote>

<blockquote>
  <p>To be fair, we should recognize that computers have many good points as well. True, they are basically stupid; however, they have great memories, they are incredibly fast, they are terrific typists, and they make good money. Last but not least, Zey vill follow inztrukshuns to zee letter!</p>
</blockquote>

<p>This book was published in 1978. We’ve made loads of technological advancements since then but the general message remains very true today (and those who go <em>“I built software with chatGPT without learning how to code”</em> or <em>“Soon developers will be made obsolete”</em> would be well advised to read it).</p>

<p>The takeaway is simple: machines and humans are very different in how they interpret/process data. One salient point to understand is this: machines do not <strong>think</strong>, they <strong>compute</strong> - which is why they are called computers, not “thinkers”; it’s all in the name.</p>

<p>Parsing a PDF (or DOCX, or any other common format, for that matter) resume intended for the human mind is quite a complex task for a computer. Before we go on, let’s define <strong>parsing</strong> in this context:</p>

<blockquote>
  <p>In computer science, parsing is the process used to analyze and interpret the syntax of a text or program to extract relevant information.</p>
</blockquote>

<p>Humans can do this easily and intuitively. Machines can be trained to parse resumes written with humans in mind but it takes effort and is very error-prone. As I reckon many non-programmers are aware, machines ultimately deal with binary sequences (0s and 1s - bit on, bit off - if it makes you think of Karate Kid, we think alike:)). They can be taught to represent a large variety of data in these sequences but some formats make that far easier than others.</p>

<p>When writing software, one will never opt to store/pass data in the form of a PDF. Over the years, we’ve come up with an abundance of formats that are a compromise between what humans and machines “understand” best (in quotes because machines don’t really understand anything, they compute and process).</p>

<p>If you’re interested, you can look up a few common ones: <code class="language-plaintext highlighter-rouge">XML</code>, <code class="language-plaintext highlighter-rouge">INI</code>, <code class="language-plaintext highlighter-rouge">JSON</code>, <code class="language-plaintext highlighter-rouge">YAML</code>; there are, of course, many more.</p>

<p>Let’s use <code class="language-plaintext highlighter-rouge">JSON</code> for illustration purposes. It’s a reasonable choice as it’s one of the most popular formats, certainly when it comes to APIs (Application Programming Interfaces). As with HTTP, even if you’ve no idea what the term means, I assure you that you make use of APIs on a daily basis. Educating people on these concepts is a laudable aspiration but as it’s not the purpose of this particular series, I’d suggest looking the term up, rather than covering it in this text. <code class="language-plaintext highlighter-rouge">JSON</code> handles nested hierarchies well and we need that to represent a person’s resume.</p>

<p>If I wanted to format the data in <a href="https://packman.io/img/jesse_portnoy_resume.pdf">my resume</a> as a <code class="language-plaintext highlighter-rouge">JSON</code>, it would look something like this:</p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Jesse Portnoy"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"email"</span><span class="p">:</span><span class="w"> </span><span class="s2">"jesse@packman.io"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"github_profile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"https://github.com/jessp01"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"gitlab_profile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"https://gitlab.com/packman.io"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"personal_site"</span><span class="p">:</span><span class="w"> </span><span class="s2">"http://packman.io"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"linkedin_profile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"https://www.linkedin.com/in/jesse-portnoy-4921752"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"title"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Multidisciplinary Programmer, Builder &amp; Packager, Automation Engineer"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"skills"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"programming"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
      </span><span class="s2">"C"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"PHP"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Perl"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"JavaScript/NodeJS"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Dart/Flutter"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"BASH"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Go"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"SQL"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Ruby"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Python"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"C#"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Java"</span><span class="w">
    </span><span class="p">],</span><span class="w">
    </span><span class="nl">"devops"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
      </span><span class="s2">"LAMP"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Nginx"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"AWS"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Google Cloud Platform"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Docker"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Vagrant"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Travis CI"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"GH Actions"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Ansible"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Chef"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Nagios"</span><span class="p">,</span><span class="w">
      </span><span class="s2">"Prometheus"</span><span class="w">
    </span><span class="p">]</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"experience"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
    </span><span class="p">{</span><span class="w">
      </span><span class="nl">"company"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Sirius Open Source Inc"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"start_time"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2023-09-18T09:00:00Z"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"end_time"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2024-10-01T17:00:00Z"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"title"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Senior Developer"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Things I did go here"</span><span class="w">
    </span><span class="p">},</span><span class="w">
    </span><span class="p">{</span><span class="w">
      </span><span class="nl">"company"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Kaltura"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"start_time"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2012-02-01T09:00:00Z"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"end_time"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2023-03-31T17:00:00Z"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"title"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Senior Developer"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Things I did go here"</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<p>The above is, of course, truncated but you get the point. See what I mean by a <strong>compromise between what humans and machines “understand”</strong>? Most humans would definitely rather read the PDF but they could still read the data when formatted this way and it would make sense to them; computers have no preferences (as much as I love them, I make an effort not to humanise computers - I think it’s important) but parsing this <code class="language-plaintext highlighter-rouge">JSON</code> will be faster and less error-prone for them (though not as fast as a binary sequence).</p>
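<p>To make the “less error-prone” claim concrete, here’s a minimal Python sketch (the field names simply mirror my example above; they’re not an official standard) showing how little work a machine needs to do to extract data from such a <code class="language-plaintext highlighter-rouge">JSON</code>:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import json

# A (heavily truncated) resume in the structure shown above
resume_json = '''
{
  "name": "Jesse Portnoy",
  "skills": {
    "programming": ["C", "PHP", "Go"],
    "devops": ["Docker", "Ansible"]
  }
}
'''

# One call turns the whole document into native data structures -
# no layout analysis, no guessing which line is a heading
resume = json.loads(resume_json)
print(resume["name"])
print(", ".join(resume["skills"]["programming"]))
</code></pre></div></div>

<p>Compare that with the heuristics a PDF-parsing library needs just to recover plain text, let alone structure.</p>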

<p>At this point, you may be thinking:</p>

<blockquote>
  <p>Okay, I got it, machines and humans process things differently and some formats are better suited for one rather than the other but surely, you don’t expect Salesman Smith and Solicitor Jones to manually format their resume this way to aid computers! I mean, they’re meant to be aiding us!</p>
</blockquote>

<p>To which I say:
<strong>NO</strong>, I do not expect any human to manually format their resume as a <code class="language-plaintext highlighter-rouge">JSON</code>. In fact, I don’t want to do it either; while I can do that (and it’s not <strong>that</strong> hard), there’s no reason for anyone (programmer or not) to struggle with finding a missing <code class="language-plaintext highlighter-rouge">,</code>, <code class="language-plaintext highlighter-rouge">:</code>, <code class="language-plaintext highlighter-rouge">[</code> or <code class="language-plaintext highlighter-rouge">]</code> in the above <code class="language-plaintext highlighter-rouge">JSON</code>. Instead, we should:</p>

<ul>
  <li>Agree on a standard (any standard, even a very flawed one, is better than no standard at all and standards evolve to address needs)</li>
  <li>Write software that takes input in the manner humans are used to and <strong>automatically</strong> transforms it into such a <code class="language-plaintext highlighter-rouge">JSON</code> (or <code class="language-plaintext highlighter-rouge">YAML</code>, or whatever, really - something a computer does not have to work hard to parse - and remember, it’s not just about how hard it works, it’s also about how likely it is to make a mistake)</li>
</ul>
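<p>The second bullet is genuinely easy. As a sketch (all field names here are my own invention, not an agreed standard), a few lines of Python can turn the kind of plain answers a web form collects into the machine-friendly <code class="language-plaintext highlighter-rouge">JSON</code> automatically:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import json

def build_resume(name, title, skills_csv):
    """Turn plain form answers into a structured resume document.

    The human types free text; the structure is generated for them -
    nobody has to hunt for a missing comma or bracket.
    """
    return json.dumps({
        "name": name,
        "title": title,
        "skills": [s.strip() for s in skills_csv.split(",")],
    }, indent=2)

print(build_resume("Jane Doe", "Solicitor", "contract law, litigation"))
</code></pre></div></div>

<p>That is the entire “transformation” layer; everything else is just agreeing on which fields the output should contain.</p>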

<p>Trust me, it’s very easy. Recruitment software is largely a proprietary/closed-source industry (which is part of the problem, actually) but I was able to find a project called <a href="https://www.open-resume.com">open-resume</a>. <a href="https://github.com/xitanggg/open-resume">open-resume</a> is FOSS (licensed under AGPLv3, written mostly in TypeScript) and it provides two main functionalities:</p>

<ul>
  <li>Parsing existing PDFs (<a href="https://www.open-resume.com/resume-parser">https://www.open-resume.com/resume-parser</a>)</li>
  <li>Producing PDFs from inputs provided in an HTML form (<a href="https://www.open-resume.com/resume-builder">https://www.open-resume.com/resume-builder</a>)</li>
</ul>

<p>The latter functionality can easily be extended to produce, alongside the PDF, a <code class="language-plaintext highlighter-rouge">JSON</code> similar to the one in my example above.</p>

<p>If we all agreed on a standard, people could easily generate and submit both files when applying for a job, making it much easier for all involved.</p>

<p>The format aside, there’s another important point to consider: if you consult any resume “expert” (and there are plenty of those) or perform an online scan of yours, they will all tell you that a resume must be two pages max (and some will even say it must be under 800 words long). This is very restrictive, especially when it comes to roles where so much emphasis is put on one’s technical experience and level of expertise with given technologies (programming languages, frameworks, platforms, etc) and considering the overwhelming abundance of these. Going back to the quote from <a href="https://www2.seas.gwu.edu/~kaufman1/FortranColoringBook/FORTRAN%20Coloring%20Book%20Full.pdf">The FORTRAN colouring book</a>:</p>

<blockquote>
  <p>we should recognise that computers have many good points as well. True, they are basically stupid - however, they have great memories, they are incredibly fast</p>
</blockquote>

<p>To a human, each additional page translates to several minutes of processing time (which is why this limitation was introduced in the first place) but to a machine, it’s utterly negligible, especially with the resources we have at our disposal today (the computing power in your phone alone is far greater than what the machines that sent people to space in 1969 had). It reminds me of a story I heard about a bureaucratic department where a paper form had to be filled in with <strong>blue ink</strong>. When someone finally questioned it, they found that this regulation was established because the machine used for scanning said form had difficulties with other ink colours. <strong>That machine was made obsolete decades prior and the new processing method did not have these limitations</strong>. Something to think about.</p>

<p>This feels like a good stopping point.</p>

<p>In this instalment, I attempted to explain why the way we represent resume data, as well as the restrictions we enforce on said data, needs to change to better leverage the fact that, for years now, we have had machines that can help us perform the initial resume filtering and analysis. In the next instalment, we’ll cover the methods (and people) we currently use to evaluate resumes and why these must also be changed (drastically). I’ll share my experience querying LLMs about my own resume (as I suspect many recruiters do) and demonstrate how misleading the results can be (hint: very!).</p>

<p>As noted in <a href="https://packman.io/2025/01/25/Recruitement-overhaul-required.html">part I</a>, I welcome comments and discourse and have started formulating a plan to address these issues. I firmly believe it’s more than feasible and, from a technical standpoint (the human aspects are a different story), not overly complex. If you care, please take part and voice your opinions.</p>]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[This is the second instalment in a series. In part I, I covered the importance of providing a proper job description. This part will focus on the current approach towards resumes and why it so painfully fails to serve the end goal.]]></summary></entry><entry><title type="html">Of secrets and histories</title><link href="https://packman.io/2025/02/04/gitleaks.html" rel="alternate" type="text/html" title="Of secrets and histories" /><published>2025-02-04T17:37:37+00:00</published><updated>2025-02-04T17:37:37+00:00</updated><id>https://packman.io/2025/02/04/gitleaks</id><content type="html" xml:base="https://packman.io/2025/02/04/gitleaks.html"><![CDATA[<p>An all-too-common and dangerous mistake I encounter is source repos that either have secrets (sensitive data) committed to them or those where secrets were removed from the files but not purged from the commit history. The latter is even more common but no less serious, because it’s very easy to fish these out and it’s the first thing I’d look for if I were an attacker.</p>

<p>So, how do we avoid this potentially very expensive mistake?</p>

<ul>
  <li>Never commit secrets to any repo. You might think that if the repo is private, this isn’t a problem; if so, you’d be wrong:)
Firstly, it may be private today but tomorrow? Who knows… (as an avid FOSS supporter, I certainly want all code to be open and there are plenty of examples of commercial entities that opted to do just that with their previously closed/proprietary repos).</li>
</ul>

<p>Secondly, even if the repo will forever remain private, that’s not to say the source will not end up in the hands of less-than-benevolent people. So, what to do to protect our secrets?</p>

<ul>
  <li>Consider using a secret management system (there are loads, FOSS and otherwise - run a search, you may already be using a platform that offers this particular service)</li>
  <li>Whichever way you choose to store your secrets, make sure that any files containing them are included in the <code class="language-plaintext highlighter-rouge">.gitignore</code> file (if you use a different source control system, find its counterpart - they all have one)</li>
</ul>
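<p>For illustration, a few typical <code class="language-plaintext highlighter-rouge">.gitignore</code> entries (the file names are just common examples - adjust them to your project):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># never commit these - they hold credentials
.env
*.pem
config/secrets.yml
</code></pre></div></div>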

<p>The above actions, good as they are, require all humans involved to adhere to these policies and, as you surely know, people often don’t. Luckily, you can make them:) <a href="https://github.com/gitleaks/gitleaks">Gitleaks</a> is a project that aims to do just that. In its README, you’ll find a full GH Action example (similar hooks for other platforms are available, though writing your own is also easy enough) that runs this tool, as well as a pre-commit one; and you <em>do</em> want both as it’s far better to catch these things <em>before</em> you commit. Make sure you include a note about how to set up the pre-commit hooks in your README because, once these things get into the commit log, purging them is, well, involved and annoying:)</p>

<p>Hopefully, you are now convinced that you should set these hooks up on any repo that includes (or may in the future include) sensitive data but what about all your existing repos and their commit histories? Well, as a first step, use <code class="language-plaintext highlighter-rouge">gitleaks</code> to detect compromised repos. Next, if you use Git (notice that Git is NOT shorthand for GitHub; in 2025, there’s a high chance you are using Git, no matter what platform you host your code on), take a look at <a href="https://github.com/newren/git-filter-repo">git-filter-repo</a> which is designed to assist with this purging task. You may also want to read <a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository">Removing sensitive data from a repository</a>.
If you’re not using Git, check your source control documentation for its counterpart purging option.</p>
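<p>To sketch the flow (the paths and file names below are hypothetical - do read each tool’s documentation, and work on a fresh clone, before rewriting the history of a repo you care about):</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># scan the full commit history for secrets
gitleaks detect --source /path/to/repo --report-path gitleaks-report.json

# purge an offending file from the entire history with git-filter-repo
git filter-repo --invert-paths --path config/secrets.yml
</code></pre></div></div>

<p>Remember that rewriting history changes commit hashes, so everyone with an existing clone will need to re-clone - and the secrets themselves must still be rotated, since they were exposed.</p>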

<p>Don’t be that person who could have prevented a crucial and embarrassing breach, but didn’t:)</p>]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[A too common and dangerous mistake I encounter is source repos that either have secrets (sensitive data) committed to them or those where secrets were removed from the files but not purged from the commit history. The latter is even more common but it’s no less serious because it’s very easy to fish these out and is the first thing I’d look for if I were an attacker.]]></summary></entry><entry><title type="html">Matrix Reloaded - Reflections</title><link href="https://packman.io/2025/02/04/matrix-reloaded-reflections.html" rel="alternate" type="text/html" title="Matrix Reloaded - Reflections" /><published>2025-02-04T14:53:37+00:00</published><updated>2025-02-04T14:53:37+00:00</updated><id>https://packman.io/2025/02/04/matrix-reloaded-reflections</id><content type="html" xml:base="https://packman.io/2025/02/04/matrix-reloaded-reflections.html"><![CDATA[<p>Amazon Prime currently offers all Matrix films for free.
I watched the trilogy in my late teens and early twenties. What left the biggest impression on me back then wasn’t the films themselves (honestly, other than “they have really cool combat scenes” there’s nothing impressive to say) but rather an observation a mate in our group made. He said: there’s absolutely no emotion here whatsoever. For those who don’t know, I’m autistic and, to be honest, at that point in my life, I didn’t quite understand what he meant by that. I’m still autistic, of course, but it is known that many of us do improve (a controversial word) in the emotional department over time. It’s not that my thought patterns changed, but one of the things people with Asperger’s tend to excel at is pattern-matching, and I now have a far larger sample to analyse and compare against. So, watching it 22 years later, I do see it. It IS a super weird directing choice. It doesn’t make the characters cooler (in the positive sense; “cool” is an ambiguous word which can also be interpreted as “detached”), it makes them unrelatable.</p>

<p>Also, and it may sound odd to some but if you stop to think about it, I reckon you’ll agree that there are way too many combat scenes, each far longer than it needs to be. It takes away from the plot and alters your excitement threshold to a point where you’re thinking: “Yes, I got it. Neo is the one, he’s a demigod and he can fly like Superman (only his outfit makes him look cooler - again that word) and beat everyone’s arse. Can we get on with the story, please?”</p>

<p>Lastly, while watching the <a href="https://www.youtube.com/watch?v=0PxTAn4g20U">Trinity computer break-in scene</a> in part II (Reloaded), I again marvelled at why Hollywood can’t get these scenes right. I mean, seriously, they pour millions of dollars into these productions, how much would it cost to hire a consultant like me to make it believable? I’d have done it for the relatively humble sum of $100 (plus credits).</p>

<p>Unlike most films, it seems that they DID consult someone because the use of actual utilities is depicted in the scene (<code class="language-plaintext highlighter-rouge">nmap</code> and <code class="language-plaintext highlighter-rouge">ssh</code>) and the “CRC32 exploit” reference alludes to a real CVE (see <a href="https://marc.info/?l=bugtraq&amp;m=98168366406903&amp;w=2">2001-0144</a>) and clearly, someone also told them it’s more realistic for the fictional <code class="language-plaintext highlighter-rouge">sshnuke</code> util to reset root’s passwd rather than output the existing one (because passwds are hashed).</p>

<p>To me, it makes the scene more disappointing in a way. Why? Let’s analyse.</p>

<p>Here’s how the scene goes down:</p>

<p>Trinity sits down and gets to typing. She inputs very few characters and then we’re shown:</p>

<ul>
  <li>The truncated output of <code class="language-plaintext highlighter-rouge">nmap</code> in a readable resolution (but not the arguments used). Seconds later, we’re shown the full screen from a farther angle. I couldn’t read from it but I did find <a href="https://nmap.org/movies/#matrix">this article</a> on nmap.org so I know the full command was <code class="language-plaintext highlighter-rouge">nmap -v -sS -O 10.2.2.2</code></li>
  <li>full command (arguments included) for the fictional <code class="language-plaintext highlighter-rouge">sshnuke</code> util</li>
  <li>full <code class="language-plaintext highlighter-rouge">ssh</code> command to get to the mysterious 10.2.2.2</li>
  <li>at this point, she lands in the “RRF-CONTROL” shell (convenient!) and types: <code class="language-plaintext highlighter-rouge">disable grid nodes 21 - 48</code></li>
</ul>

<p>Let’s break this down:</p>

<ul>
  <li>She already has access to a super user shell (presumably, based on the fact that <code class="language-plaintext highlighter-rouge">PS1</code> starts with “#” and the use of <code class="language-plaintext highlighter-rouge">-sS</code>). But you know what? I’ll buy that because the scene opens with two security guards looking at a person passed out over their desk, wondering “what the hell happened here?!”. So, okay, that person had the terminal open and something happened and Trinity walked into that situation, fine.</li>
  <li>10.2.2.2 is an addr in the private class A (10.0.0.0/8) range. Okay, but how did she know that’s the machine to SSH to in order to get to the “RRF-CONTROL” interface? I mean, you want to depict the use of <code class="language-plaintext highlighter-rouge">nmap</code>? How about using <code class="language-plaintext highlighter-rouge">-sn</code> for host discovery and working things out from there?</li>
  <li>She invokes the <code class="language-plaintext highlighter-rouge">nmap</code> with <code class="language-plaintext highlighter-rouge">-v -sS -O</code>. Let’s break these down as well:</li>
  <li><code class="language-plaintext highlighter-rouge">-v</code> stands for verbose (unimportant)</li>
  <li><code class="language-plaintext highlighter-rouge">-sS</code> is SYN stealth scan (half-open scan) - a technique that sends SYN packets without completing the full TCP handshake. That’s the default Nmap scan type when running with super user privileges so she could have saved some typing but still, okay:)</li>
  <li>But now, we get to <code class="language-plaintext highlighter-rouge">-O</code>: Enable OS detection. This one is super important because it makes the scanning significantly slower. 
On my own laptop <code class="language-plaintext highlighter-rouge">nmap -v -sS -O localhost</code> takes roughly 12 seconds, whereas without <code class="language-plaintext highlighter-rouge">-O</code>, it takes a little over 0.900 seconds (try it, you’ll see).</li>
</ul>

<p>And that brings us to the real question: why does she even need <code class="language-plaintext highlighter-rouge">nmap</code> here? CVE 2001-0144 affected many different OSes and SSH servers, not a particular one. Even if she wants to know the version of the SSH server, <code class="language-plaintext highlighter-rouge">telnet 10.2.2.2 22</code> is much faster. TCP 22 is the default port, which, frankly, is the only insight <code class="language-plaintext highlighter-rouge">nmap</code> gives her (notice the “No exact OS match” in the output). <code class="language-plaintext highlighter-rouge">nmap</code>, in this situation, would make sense if trying TCP 22 FAILED and she needed to discover what alt port the service listens on. Moving on…</p>
<ul>
  <li>This <code class="language-plaintext highlighter-rouge">sshnuke</code> util: what the bloody F is it doing on that machine? At least show her mounting a USB stick to run the thing?!</li>
  <li>This “RRF-CONTROL” shell:
    <ul>
      <li>How does she know its sub-commands?</li>
      <li>3 separate args to denote the node range (21,-,48), seriously?!</li>
    </ul>
  </li>
</ul>

<p>Hollywood producers: if you happen to encounter this post, I’m “open for work”. I can help you make these scenes realistic while still keeping them tight (timeline-wise) and fast-paced:)</p>]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[matrix, absolutely no emotion whatsoever, unrealistic hack scene, CRC32 exploit, trinity, ssh, imap, autistic]]></summary></entry><entry><title type="html">Recruitment - overhaul required, urgently</title><link href="https://packman.io/2025/01/25/Recruitement-overhaul-required.html" rel="alternate" type="text/html" title="Recruitment - overhaul required, urgently" /><published>2025-01-25T19:13:06+00:00</published><updated>2025-01-25T19:13:06+00:00</updated><id>https://packman.io/2025/01/25/Recruitement-overhaul-required</id><content type="html" xml:base="https://packman.io/2025/01/25/Recruitement-overhaul-required.html"><![CDATA[<p>The recruitment industry is due for a massive overhaul.</p>

<p>This has been the case for many years, possibly decades; however, recent circumstances have made it far more pressing.
Many people, myself included, have been ranting about it, but I’ve yet to see a concrete plan to address the current state of affairs.</p>

<p>In this proposed series of articles (perhaps the word manifesto is warranted), I will attempt to break the problem into its sub-components and propose detailed, possible solutions. I will focus on my own industry (to wit: software development - “stick with what you know” is always a good rule); however, many of my salient points also apply to other domains and I will leave this extrapolation as an exercise to the reader (for now, at any rate).</p>

<p>At times, for the sake of brevity, rather than detail everything that is being done wrong, I will simply outline how it should be done instead.
Comments are most welcome.</p>

<h2 id="candidate-filtering">Candidate Filtering</h2>

<p>The first step when compiling a pool of quality candidates is to provide a concise, accurate job description. This is the cornerstone of the entire process; failing here will severely hinder everything that follows and weigh heavily on the end result. <strong>If you can’t communicate what you’re looking for, your chances of finding it dramatically decrease</strong>.</p>

<h3 id="job-descriptions">Job Descriptions</h3>

<p>Job descriptions must be written by those who, at least at some point in their career, have done the job or, failing a perfect match (perfection is to be chased, hardly ever to be achieved), a comparable job, with contributions from those who are now managing people who perform this exact role or one most similar to it. As with many things (though not all), the quality of the description is likely to improve by requesting inputs from multiple individuals in different roles.</p>

<p>For example, when looking for a PHP developer to join a team of five, the description should be compiled based on inputs from:</p>
<ul>
  <li>The direct manager of said team</li>
  <li>Two leading team members (establishing who these two are is a challenge in its own right; one that accounts for many of the problems covered in this essay but we’ll get to that)</li>
  <li>One or two representatives from units the team in question works most closely with (QA, Project Management, Account Management, Support, etc)</li>
</ul>

<p>Why is this so important?
At the risk of stating the obvious, the objective of this elaborate process is to locate candidates with the highest chance of succeeding on the job. For this, one must first describe the day-to-day nature of the role:</p>

<ul>
  <li>
    <p>Compensation range (a meaningful one - not 50k-120k) and location (on-site, hybrid [and if so where] or remote).</p>
  </li>
  <li>
    <p>The technical work; which, in the case of software development, is done chiefly by conversing with machines (if that’s not the case for your company, you’re doing it wrong!). This type of communication, however, has many subtleties. While all programming languages (and this applies to frameworks and operating systems as well) share many similarities, and a good programmer can certainly adapt and learn on the go (in fact, if one can’t, one should seriously consider a change in direction), there are also many differences and, if one truly loves this particular vocation, one will develop specific opinions, tastes and preferences and these will have a significant effect on one’s productivity and satisfaction. As I write this, I am so tempted to illustrate this point with detailed examples; I will delay my gratification for the time being and instead, state the bottom line: <strong>the job description must include a high-level description of the technical stack one will be working with</strong>. The stack will change over time (revisit the point about adaptability) but, in most cases, not drastically.</p>
  </li>
  <li>
    <p>The human interfaces; with the former point in mind, ultimately, we do not write software to entertain machines
(that’s the lovely thing about machines - they have no such needs); we write software for (and often with) humans,
with the aid of machines. A programmer is, in essence, an interface between the machine and various human players
(end-users and company stakeholders). A role can be most interesting and satisfying from a technical standpoint and
still be a nightmare if the human elements are misaligned. The job description should strive to depict this
collaboration as accurately as possible; to wit: what sort of people will one collaborate with as part of the role, using what means/protocols/systems?</p>
  </li>
</ul>

<p>This last statement (put in the form of a question) requires clarification, lest it be misunderstood. By “what sort of people”, I do not mean a meaningless list of superlatives (best, talented, kindest, brightest, supporting, open-minded, you get the idea). There’s an astounding amount of that tripe in many job descriptions (and resumes). Not only is this meaningless (self-praise without supporting evidence always is), it is, in fact, <strong>counterproductive</strong>, as it makes the description far longer and is typically placed at the beginning, which means that, by the time you get to the important bits (if you haven’t given up altogether), you suffer from fatigue.</p>

<ul>
  <li>A succinct blurb about the company. With succinct being the operative word (again, not an abundance of self-praise and
lengthy paragraphs about its love for diversity and support of world peace). Just facts: the vertical/industry, clientele,
headcount, location, product line. And, on the subject of diversity and world peace, here’s a universal truth: people who emphasise their support of these things have a problem they are trying to compensate for. Those who truly
support equality (which leads to diversity) do not feel the need to explicitly state it. It’s like recounting sexual
endeavours often and in great detail without the slightest prompting. It’s a tell.</li>
</ul>

<p>To recap, a job description must:</p>

<ul>
  <li>Be written by the people who do the actual work (to put it bluntly - not professional recruiters and certainly NOT HR)</li>
  <li>Be limited to: compensation and location, technical outline and the human characteristics and propensities (not superlatives and meaningless accolades!) of those involved as well as those expected from the successful candidate</li>
</ul>

<p><strong>Important note</strong>: the order in which these elements are written also matters. I chose to list compensation and
location first, not because I believe it’s necessarily the most crucial consideration but because it’s the shortest, while still
being a major factor. To illustrate the importance of accuracy here, lying about this aspect is akin to using a picture
of Brad Pitt in your dating profile (assuming you do <strong>not</strong> look like that, of course, hence the “lying” bit): you’re just wasting everyone’s time and, as with dating, the candidate will not fall in love with you and forget all about this deception because of your winning personality.</p>

<p>As the overall problem is complex, I think it best to outline it in easily digestible instalments and this seems like a
good stopping point.</p>

<p>In the <a href="https://packman.io/2025/02/05/Recruitement-overhaul-required-1.html">next instalment</a>, I will discuss the current approach towards resumes and why it so painfully fails to serve the end goal.
Some salient points for reflection:</p>

<ul>
  <li>While the means of initial filtering drastically changed (from a manual review by a competent human to automatic
parsing by software), the format has not changed at all and, worse still, there’s no universally adhered-to standard to
use for the parsing/data extraction process</li>
  <li>With the advent of a multitude of programming languages, each with loads of popular frameworks and constructs, technology has become both far more complex and complicated and the filtering algorithms are not advancing accordingly</li>
  <li>Emphasis on the wrong, least significant aspects during the initial candidate evaluation (hint: self-praise yet
again, with too little supporting evidence)</li>
</ul>

<p>As noted in my opening statement, I welcome comments and discourse and have started formulating a plan to address this issue. I firmly believe it’s more than feasible and, from a technical standpoint (the human aspects are a different story), not overly complex. If you care, please take part and voice your opinions.</p>]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[The recruitment industry is due for a massive overhaul.]]></summary></entry><entry><title type="html">super-zaje</title><link href="https://packman.io/2023/08/21/test.html" rel="alternate" type="text/html" title="super-zaje" /><published>2023-08-21T22:57:36+00:00</published><updated>2023-08-21T22:57:36+00:00</updated><id>https://packman.io/2023/08/21/test</id><content type="html" xml:base="https://packman.io/2023/08/21/test.html"><![CDATA[<p>I recently shared the release of <a href="https://github.com/jessp01/zaje">zaje</a> - a syntax highlighter that aims to cover all your shell colouring needs.</p>

<p>I’m pleased to announce the release of <code class="language-plaintext highlighter-rouge">super-zaje</code> which can do everything <code class="language-plaintext highlighter-rouge">zaje</code> can but introduces one very useful enhancement: thanks to the magic of <code class="language-plaintext highlighter-rouge">tesseract</code> and <a href="https://github.com/otiai10/gosseract">gosseract</a>, <code class="language-plaintext highlighter-rouge">super-zaje</code> is able to extract text right off an image (both HTTP(s) URLs and local files are supported).</p>

<p>The video below demonstrates its usefulness with a few sample LinkedIn posts that shared code as screenshots (a common case, alas, due to LI’s inexplicable refusal to support <code class="language-plaintext highlighter-rouge">markdown</code>).</p>

<div id="_assets_super-zaje.mp4"></div>
<script src="https://cdn.rawgit.com/CookPete/react-player/master/dist/ReactPlayer.standalone.js"></script>

<script>
  window["_assets_super-zaje.mp4"] = document.getElementById("_assets_super-zaje.mp4")
  url = 'https://packman.io/assets/super-zaje.mp4'
  renderReactPlayer(window["_assets_super-zaje.mp4"], { url, playing: true, controls: true, width: '100%', height: '100%' })
</script>

<p>For installation instructions, see <a href="https://github.com/jessp01/zaje#installing-super-zaje">README</a></p>]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[I recently shared the release of zaje - a syntax highlighter that aims to cover all your shell colouring needs.]]></summary></entry><entry><title type="html">shellcheck and ALE demo</title><link href="https://packman.io/2023/07/09/shellcheck-ale.html" rel="alternate" type="text/html" title="shellcheck and ALE demo" /><published>2023-07-09T22:57:36+00:00</published><updated>2023-07-09T22:57:36+00:00</updated><id>https://packman.io/2023/07/09/shellcheck-ale</id><content type="html" xml:base="https://packman.io/2023/07/09/shellcheck-ale.html"><![CDATA[<p>Use <a href="https://www.shellcheck.net">shellcheck</a> and <a href="https://github.com/dense-analysis/ale">ALE</a> to lint your shell scripts on the fly in VIM.</p>

<p>Thanks to <a href="https://github.com/asciinema">asciinema</a>, you can copy everything displayed in the shell right off the player. Simply pause, mark and copy as you normally would. If you prefer, you can also download the file and play it locally in your own shell with <code class="language-plaintext highlighter-rouge">asciinema</code>.</p>

<p>Enjoy:)</p>

<script src="https://asciinema.org/a/595695.js" id="asciicast-595695" async="async"></script>]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[Use shellcheck and ALE to lint your shell scripts on the fly in VIM.]]></summary></entry><entry><title type="html">With regards to Redhat’s recent decision…</title><link href="https://packman.io/2023/07/01/With-regards-to-Redhats-recent-decision.html" rel="alternate" type="text/html" title="With regards to Redhat’s recent decision…" /><published>2023-07-01T19:03:34+00:00</published><updated>2023-07-01T19:03:34+00:00</updated><id>https://packman.io/2023/07/01/With-regards-to-Redhats-recent-decision</id><content type="html" xml:base="https://packman.io/2023/07/01/With-regards-to-Redhats-recent-decision.html"><![CDATA[<p>In case you’ve not been following, the decision in question is to <a href="https://www.phoronix.com/news/Red-Hat-CentOS-Stream-Sources">limit access to the Red Hat Enterprise Linux source code</a>.</p>

<p>This caused a mini storm; with this article, I aim to:</p>

<ul>
  <li>Succinctly explain this move and its implications (hint — not as dramatic as some people make them seem)</li>
  <li>Offer my opinion as to why RH decided on this approach</li>
</ul>

<h3 id="the-actualmove"><strong>The actual move</strong></h3>

<p>Below is a quote from <a href="https://www.redhat.com/en/blog/furthering-evolution-centos-stream">RH’s official press release</a>:</p>

<blockquote>
  <p><em>We are continuing our investment in and increasing our commitment to CentOS Stream.</em> <strong><em>CentOS Stream will now be the sole repository for public RHEL-related source code releases.</em></strong> <em>For Red Hat customers and partners, source code will remain available via the Red Hat Customer Portal.</em></p>
</blockquote>

<p>Okay, two questions may come to the minds of those reading this:</p>

<ul>
  <li>What exactly is this <strong><em>CentOS Stream?</em></strong></li>
  <li>What’s the source code in question?</li>
</ul>

<p>Let’s start with the first question; from <a href="https://www.centos.org/centos-stream">https://www.centos.org/centos-stream</a>:</p>

<blockquote>
  <p>Continuously delivered distro that tracks just ahead of Red Hat Enterprise Linux (RHEL) development, positioned as a midstream between Fedora Linux and RHEL.</p>
</blockquote>

<p>This may confuse some people because CentOS was, to use a kind term, repurposed a while back and those who have not followed that change will find this definition doesn’t match their understanding of what it is. Basically, to rephrase the above: <strong>CentOS Stream is RHEL’s upstream.</strong></p>

<p>Right. Now what sources are we talking about? After all, RHEL is a Linux distribution (and one of many, at that); it does not own the kernel (which is provided under the terms of the GNU General Public <em>License</em> version 2) so, when the word “source” is used in this context, what does it refer to?</p>

<p>I imagine anyone who is interested in this topic has at least some vague idea as to what a Linux distribution means but I’ll provide my own little blurb here. <strong>Before I do that, however, I should note that there are different kinds of distributions; Redhat is a binary software distribution, whereas Gentoo (to name the leader of its kind) is based on the idea that you compile sources on your own machine using its</strong> <a href="https://wiki.gentoo.org/wiki/Portage">Portage package management system</a>.</p>

<blockquote>
  <p>A “binary” Linux distribution takes the kernel sources, as well as those of many, many other projects/components (GCC, BASH, libpng, Golang, Rust, PHP, X server, GNOME, KDE, Xbill, etc) and packages them in a way that’s easy to deploy and upgrade.</p>
</blockquote>

<p>In the case of RH, the packaging format is RPM (Redhat Package Manager). RPM, while a Redhat invention, is also FOSS and is used and contributed to by many other Linux distributions, as well as independent developers. Another example of a commonly used FOSS project conceived at Redhat is logrotate (there are many others).</p>

<p>Okay, so Redhat (as well as many other distros) builds and packages a multitude of FOSS components (some of which it also developed and maintains) so that we won’t have to do it ourselves. As you can imagine, this takes a lot of work (there are many packages!) and, as they’re doing it, they encounter bugs, to which they apply patches (before compiling). They then contribute these patches upstream (when appropriate — sometimes, the patch is only needed in order to package the source for a Redhat distribution).</p>

<p>So, “source” in this context mainly pertains to the RPM specs used to generate the packages and the patches applied during the build process. RPM Source packages are also referred to as SRPMs.</p>

<p><strong>Now that we understand what CentOS Stream and “sources” mean in this context, who will be affected by this decision?</strong></p>

<p>As emphasised previously, there are many binary Linux distributions. One, very broad, way of segmenting these would be based on the packaging format (and accompanying tool-chains) they use; with the two most common ones being: deb and RPM.</p>

<p><em>Side note: deb is my format of choice, conceived by Debian, my favourite distro. One example of another distro that uses this format is Ubuntu. I could (and perhaps should) write a whole article about the relationship between Debian and Ubuntu and another that compares between these two packaging formats but neither pertain to this RHEL decision so, I’ll leave it at that for now. I have started writing a series of articles about software packaging; if you’re interested in this topic, you can find the first instalment</em> <a href="https://medium.com/@jesse_62134/docker-is-not-a-packaging-tool-e494d9570e01"><em>here</em></a><em>.</em></p>

<p>As noted above, RPM is Redhat’s packaging format of choice and there are many different distros that use it as well. Some of these distros consider being binary compatible with RHEL releases their main selling point. I use the term “selling” quite loosely here, as in many cases, no money is exchanging hands.</p>

<p>To better understand this rather intricate ecosystem, let’s consider how Redhat generates (large portions of) its revenue. There are two main streams:</p>

<ul>
  <li>Professional Services</li>
  <li>Technical Support</li>
</ul>

<p>To demonstrate the value of these two propositions, let’s consider a person I know intimately: myself(!).</p>

<p>As you already know if you got this far, I am a Debian user. I’ve used it exclusively on all my personal machines and when the choice was mine to make, on all servers under my control as well.</p>

<p>Again, I could write volumes about why I love Debian as much as I do but I (and Debian) am not the topic of this particular article so I’ll be brief and say that the main factors are:</p>

<ul>
  <li>It’s a community distro that matches my ideology</li>
  <li>Its release cycle and repo segmentation make sense to me</li>
  <li>I find deb (and the accompanying tool-chain) superior to that of RPM (and its tool-chain)</li>
  <li>The package quality is extraordinary; regardless of whether you compare it with commercial distros or fellow community ones</li>
</ul>

<p>Having said that, would I choose Debian if I were the CTO of a vast financial consortium with thousands of technical staff? No, I’d actually choose Redhat.</p>

<p>Why? Because while I, as a one man band, can easily maintain 100 Debian servers of different configurations and purposes (and even more than that by putting in place automated procedures), this imaginary CTO version of me (to be clear: I’ve no aspiration to become that person) cannot. He also cannot personally interview his 1000+ employees to ensure that only like-minded, capable people are on the payroll. You see, Debian has many hard working, bright volunteers and they produce excellent packages and, if you know how to investigate, solve and report issues, you will also get superb support from the community but, if you don’t, you’re better off paying for a Redhat subscription. In return for your subscription fees, Redhat’s support will squeeze the information out of you/your employees like one squeezes toothpaste out of the tube. They will not send you off with a friendly RTFM (see <a href="https://medium.com/@jesse_62134/from-rtfm-to-participants-awards-f956308ebe97">this article</a> for my views on how important the RTFM notion is) and you could also report back to the board and say: “It’s being looked at by RH” and be sure that no one will fault you for anything. Google “No one ever got fired for purchasing IBM” for more on the latter point.</p>

<p>Right, so, that’s why, in my opinion, people opt for RHEL.</p>

<p>Now, who is affected by this decision? The RHEL clones.</p>

<p>At this point, you may be thinking: “Okay, Jesse, got it but, if the only reason to pay for RHEL is so you could benefit from professional services and tech support (and arse coverage), why would you opt for one of its binary compatible clones instead?”</p>

<p>Once more, <strong>in my view,</strong> two use cases:</p>

<ul>
  <li>As a contractor working with a company that uses RHEL, I want to work on the closest thing to it without paying for a subscription</li>
  <li>As the aforementioned CTO, I want my tech chaps to have the closest thing to it on their dev machines without paying for a subscription (on Prod, I’ll pay, to get the benefits we already covered)</li>
</ul>

<p>It is well worth noting that this move by Redhat is unlikely to eradicate these so-called clones; it will merely make life a bit more difficult for them.</p>

<p>From <a href="https://www.phoronix.com/news/Rocky-Linux-RHEL-Source-Access">https://www.phoronix.com/news/Rocky-Linux-RHEL-Source-Access</a>:</p>

<blockquote>
  <p>“These methods are possible because of the power of GPL. No one can prevent redistribution of GPL software. To reiterate, both of these methods enable us to legitimately obtain RHEL binaries and SRPMs without compromising our commitment to open source software or agreeing to TOS or EULA limitations that impede our rights. Our legal advisors have reassured us that we have the right to obtain the source to any binaries we receive, ensuring that we can continue advancing Rocky Linux in line with our original intentions.”</p>
</blockquote>

<p>Now we arrive at the other question…</p>

<h3 id="why-did-the-redhat-take-thisstep">Why did Redhat take this step?</h3>

<p>Redhat does not care about the community clones. It knows that people using these would not buy a Redhat subscription if the clones were to disappear. They’ll go with Debian (or Ubuntu or one of the numerous RPM based distros available).</p>

<p><strong>Who do they care about?</strong> <a href="https://en.wikipedia.org/wiki/Oracle_Linux"><strong>Oracle Enterprise Linux</strong></a>.</p>

<p>From <a href="https://en.wikipedia.org/wiki/Oracle_Linux">https://en.wikipedia.org/wiki/Oracle_Linux</a>:</p>

<blockquote>
  <p>Oracle Linux (abbreviated OL, formerly known as Oracle Enterprise Linux or OEL) is a Linux distribution packaged and freely distributed by Oracle, available partially under the GNU General Public License since late 2006.[4] It is compiled from Red Hat Enterprise Linux (RHEL) source code, replacing Red Hat branding with Oracle’s.</p>
</blockquote>

<p><strong>Do I support Redhat’s decision? No</strong>, I don’t. When you base your business model on FOSS (see <a href="https://medium.com/@jesse_62134/dont-forget-to-floss-25f3faa3856e">this article</a> for why I think it’s the best choice), you enjoy many benefits (too many to list in this article) but you also need to prepare yourself for things like Oracle Enterprise Linux.</p>

<p>Can I understand how Redhat would be irked by Oracle Enterprise Linux? Absolutely but again, it’s part of the deal.</p>

<p><em>Psst..</em></p>

<p><em>Liked this post and have a role I could be a good fit for? I’m open to suggestions. See</em> <a href="https://packman.io/#contact"><em>https://packman.io/#contact</em></a> <em>for ways to contact me.</em></p>

<p><em>Cheers,</em></p>

]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[In case you’ve not been following, the decision in question is to limit access to the Red Hat Enterprise Linux source code.]]></summary></entry><entry><title type="html">Docker is not a packaging tool — part 2</title><link href="https://packman.io/2023/05/12/Docker-is-not-a-packaging-toolpart-2.html" rel="alternate" type="text/html" title="Docker is not a packaging tool — part 2" /><published>2023-05-12T18:22:07+00:00</published><updated>2023-05-12T18:22:07+00:00</updated><id>https://packman.io/2023/05/12/Docker-is-not-a-packaging-toolpart-2</id><content type="html" xml:base="https://packman.io/2023/05/12/Docker-is-not-a-packaging-toolpart-2.html"><![CDATA[<h3 id="docker-is-not-a-packaging-toolpart2">Docker is not a packaging tool — part 2</h3>

<p>In the <a href="https://packman.io/2023/04/18/Docker-is-not-a-packaging-toolpart-1.html">last segment</a> of this series, we briefly discussed the importance of starting the build process from a clean ENV. We left off promising to say a word about pkg-config and move on to how proper packaging helps matters, how utilising chroots made things far easier way back when, why Docker was the next evolutionary step and, lastly - why, while grand, it is not an all-encompassing, magic holy-grail solution to all your build and deployment headaches. So, without further ado…</p>

<h3 id="pkg-config">pkg-config</h3>

<p>As I often recommend doing, let’s start by looking at the man page:</p>

<pre><code class="language-man">pkg-config(1)               General Commands Manual               pkg-config(1)

NAME
       pkg-config - Return metainformation about installed libraries

DESCRIPTION
       The pkg-config program is used to retrieve information about installed libraries.
       It is typically used to compile and link against one or more libraries.

       Here is a typical usage scenario in a Makefile:

       cc program.c `pkg-config --cflags --libs gnomeui`
</code></pre>

<p>Okay, that’s a clear and accurate description. The key bit here is this:</p>

<blockquote>
  <p><em>It is typically used to compile and link against one or more libraries.</em></p>
</blockquote>
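<p>In practice, querying that metadata looks like this (a sketch: libpng is only an example module; any library that ships a <code class="language-plaintext highlighter-rouge">.pc</code> file behaves the same way, and the block falls through cleanly when it’s absent):</p>

```shell
#!/bin/sh
# Ask pkg-config for an installed library's build metadata.
# libpng is only an example; any module with a .pc file works the same way.
if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists libpng; then
    pkg-config --modversion libpng   # installed version
    pkg-config --cflags libpng       # compiler/include flags
    pkg-config --libs libpng         # linker flags, e.g. -lpng16
    # cc demo.c $(pkg-config --cflags --libs libpng)  # typical compile line (demo.c is hypothetical)
else
    echo "pkg-config or libpng.pc not available here"
fi
```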

<p>So, pkg-config can help us ascertain that we have the needed deps to build our software. <br />
 Let’s look at a package (I chose libgif7 completely at random - I literally typed <code class="language-plaintext highlighter-rouge">dpkg -L libg</code>, hit tab and chose one) on my Debian machine to understand what pkg-config actually does for us.<br />
 Here we can begin to see the advantage of packaging formats like deb and RPM, to wit: they store the installed packages in a local DB and provide tools to look up metadata about them. In the case of APT/deb/dpkg, if we want to see what files a given package includes, we can run:</p>

<blockquote>
  <p><em>dpkg -L &lt;package&gt;</em></p>
</blockquote>

<p>Sample output:</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>dpkg <span class="nt">-L</span> libgif7
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libgif.so.7.2.0
/usr/share
/usr/share/doc
/usr/share/doc/libgif7
/usr/share/doc/libgif7/NEWS.gz
/usr/share/doc/libgif7/TODO
/usr/share/doc/libgif7/changelog.Debian.gz
/usr/share/doc/libgif7/changelog.gz
/usr/share/doc/libgif7/copyright
/usr/lib/x86_64-linux-gnu/libgif.so.7
</code></pre></div></div>

<p>Very helpful, indeed:)</p>

<blockquote>
  <p><em>NOTE: in this post, I’ll be mentioning several deb/dpkg/APT commands (I’m a proud Debian GNU/Linux user). Of course, not all Linux distros and certainly not all UNIX flavours use this toolchain/stack/your term here. Since RPM/YUM/DNF is a very common stack, I’ll make a reasonable effort to provide the counterpart commands for these as well. In this case, the counterpart of</em> <em>dpkg -L is</em> <em>rpm -ql.</em></p>
</blockquote>
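<p>For quick reference, the query counterparts side by side (a sketch; the source-fetching row assumes the <code class="language-plaintext highlighter-rouge">download</code> plugin from dnf-plugins-core on the RPM side):</p>

```
# deb / dpkg / APT                                 RPM / DNF counterpart
# dpkg -L libgif7                                  rpm -ql giflib                (list a package's files)
# dpkg -S /usr/lib/x86_64-linux-gnu/libgif.so.7    rpm -qf <that path>           (which package owns a file)
# apt-get source libgif7                           dnf download --source giflib  (fetch the source package)
```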

<p>You’ll notice that the above output includes no reference to any pkg-config files at all. To understand why, allow me to provide some background as to how Linux distros based on pre-built/compiled packages segment things and why.</p>

<p>Let’s start with the <strong>why</strong>: it is commonly agreed (though not always practised) that a file system should not have files (of any kind: config, binaries, scripts, what have you) that the system does not need in order to function. In the early, more naive days, this was mainly a question of disk space, which was very limited. With today’s resources, this point is somewhat less important (though not always — consider embedded devices) but, with the advancement and general availability of tech and computer resources, another problem has emerged; to wit: SECURITY. Put simply:</p>

<blockquote>
  <p>The more unneeded rubbish you have on your FS, the more vulnerable you are.</p>
</blockquote>

<p>Another difficulty that has become more pronounced is that of managing dependencies and of course, the fewer packages you have installed, the easier it is to manage them.</p>

<p>Now, let’s go into the <strong>how.</strong> Again, I’ll be covering how it’s done in deb based distros, as well as RPM based ones. The principle in both is the same:</p>

<blockquote>
  <p>Packages are built from a spec file (or, in the case of deb — multiple spec files, each serving a different purpose). <br />
The spec defines the package deps (separated into those needed to build the package and those needed to run it), as well as specifies the files to be included in the package (binaries, configuration files, documentation, etc) and their location (this is fixed, you cannot choose where to install files when deploying deb and RPM packages). It also includes some metadata: package name, description, source and so on. Some of the metadata is mandatory to specify (name for instance), other bits are optional (for example, not all packages declare the source/repo the package came from, which is a shame, because it’s useful data).</p>
</blockquote>
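<p>To make the spec structure concrete, here is a minimal, purely illustrative debian/control sketch; every name and value below is invented for illustration and not taken from any real package:</p>

```
Source: examplelib
Section: libs
Priority: optional
Maintainer: Jane Doe <jane@example.com>
Build-Depends: debhelper-compat (= 13), libfoo-dev
Standards-Version: 4.6.2

Package: libexample1
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: example shared library (illustrative)
 Runtime files for the hypothetical example library.
```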

<p>Now, to bring us back to the question of why the libgif7 package includes no pkg-config files: one spec can declare multiple packages and, in the case of libraries like libgif, typically will.</p>

<p>To better explain this, let’s obtain the spec files for this package from my Debian 11 repo. We can do that with:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ apt-get source libgif7
</code></pre></div></div>

<p>If you’ve run the above command, you’ll find that it has placed a directory called <code class="language-plaintext highlighter-rouge">giflib-$VERSION</code> in your CWD.</p>

<p>Inside it, you’ll find many different files, including the source for libgif of the version in question and a directory called debian where the spec files reside. Here’s what’s in there in my case:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt">-rw-r--r--</span> 1 jesse jesse 14506 Dec 20 2020 changelog
<span class="nt">-rw-r--r--</span> 1 jesse jesse 1328 Dec 20 2020 control
<span class="nt">-rw-r--r--</span> 1 jesse jesse 2371 Dec 20 2020 copyright
<span class="nt">-rw-r--r--</span> 1 jesse jesse 10 Dec 20 2020 giflib-dbg.docs
<span class="nt">-rw-r--r--</span> 1 jesse jesse 303 Dec 20 2020 giflib-tools.doc-base
<span class="nt">-rw-r--r--</span> 1 jesse jesse 10 Dec 20 2020 giflib-tools.docs
<span class="nt">-rw-r--r--</span> 1 jesse jesse 9 Dec 20 2020 giflib-tools.install
<span class="nt">-rw-r--r--</span> 1 jesse jesse 24 Dec 20 2020 giflib-tools.manpages
<span class="nt">-rw-r--r--</span> 1 jesse jesse 10 Dec 20 2020 libgif7.docs
<span class="nt">-rw-r--r--</span> 1 jesse jesse 18 Dec 20 2020 libgif7.install
<span class="nt">-rw-r--r--</span> 1 jesse jesse 1701 Dec 20 2020 libgif7.symbols
<span class="nt">-rw-r--r--</span> 1 jesse jesse 10 Dec 20 2020 libgif-dev.docs
<span class="nt">-rw-r--r--</span> 1 jesse jesse 46 Dec 20 2020 libgif-dev.install
drwxr-xr-x 2 jesse jesse 4096 Dec 20 2020 patches
<span class="nt">-rwxr-xr-x</span> 1 jesse jesse 887 Dec 20 2020 rules
drwxr-xr-x 2 jesse jesse 4096 Dec 20 2020 <span class="nb">source
</span>drwxr-xr-x 2 jesse jesse 4096 Dec 20 2020 upstream
<span class="nt">-rw-r--r--</span> 1 jesse jesse 211 Dec 20 2020 watch
</code></pre></div></div>

<p>Let’s focus our attention on some of these:</p>

<ul>
  <li>control: defines the metadata for the packages to be produced (name, description, section and, very importantly: the build and runtime dependencies)</li>
  <li>rules: contains the build instructions (this will typically be processed by <code class="language-plaintext highlighter-rouge">make</code> but it’s not a requirement — you can use any tool you want, just remember to declare it as a build dep)</li>
  <li>patches: in some cases, the package maintainer will apply patches to the upstream source (what is often referred to as pristine sources in RPM terminology). These will be placed in this directory and processed when rules is executed</li>
  <li>changelog: this is a very important file; amongst other things, it allows us to easily tell what an upgrade includes (security fixes, new features, bug fixes and also breaking changes)</li>
</ul>
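To make the control file’s role more concrete, here is a heavily trimmed, illustrative sketch of a control file declaring a library package and its dev counterpart. Note that the maintainer and the exact field values here are made up for illustration, not copied from the real giflib package:

```
Source: giflib
Section: libs
Priority: optional
Maintainer: Jane Doe <jane@example.org>
Build-Depends: debhelper-compat (= 13)

Package: libgif7
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: library for GIF images (library)

Package: libgif-dev
Architecture: any
Depends: libgif7 (= ${binary:Version}), ${misc:Depends}
Description: library for GIF images (development)
```

Each Package stanza yields a separate .deb, and the Depends line of libgif-dev is what ties the dev package to the exact version of the library package it complements.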

<p>Right, this is all very interesting but again: why doesn’t the libgif7 package include any pkg-config files? <strong>Because they are part of the <code class="language-plaintext highlighter-rouge">libgif-dev</code> package.</strong></p>

<blockquote>
  <p><em>NOTE: in RPM/YUM/DNF based systems, the package spec consists of a single file (package-name.spec) rather than multiple files as described above. The general concepts are very similar, however, and the spec file is divided into sections, each serving as a counterpart to the above debian spec files. Patches will typically reside under ~/rpmbuild/SOURCES. The naming convention for development packages is package-name-devel.</em></p>
</blockquote>
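For comparison, here is a minimal, illustrative RPM spec skeleton showing the single-file layout and a devel subpackage. This is not the actual Fedora/RHEL giflib spec; the Source0 URL and field values are placeholders:

```
Name:           giflib
Version:        5.2.1
Release:        1%{?dist}
Summary:        Library for GIF images
License:        MIT
Source0:        https://example.org/giflib-%{version}.tar.gz
BuildRequires:  gcc, make

%description
Library for reading and writing GIF images.

%package devel
Summary:        Development files for giflib
Requires:       %{name}%{?_isa} = %{version}-%{release}

%description devel
Headers, static archives and pkg-config files for giflib.

%prep
%autosetup

%build
%make_build

%install
%make_install

%files
%{_libdir}/libgif.so.*

%files devel
%{_includedir}/gif_lib.h
%{_libdir}/libgif.so
%{_libdir}/pkgconfig/*.pc
```

The %package and %files devel sections are the RPM counterpart to deb’s separate libgif-dev stanza and its .install file.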

<p>As we said before, one spec can declare several different packages and this is the case for many packages, especially those providing libraries. Let us take a closer look at the control file; for brevity, we’ll use grep to extract the different packages this file declares:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ grep "Package:\|Description" debian/control
Package: giflib-tools
Description: library for GIF images (utilities)
Package: libgif7
Description: library for GIF images (library)
Package: libgif-dev
Description: library for GIF images (development)
</code></pre></div></div>

<p>Okay, so we can see that three packages are declared in this spec. We’ve already seen what files libgif7 contains; let’s now look at libgif-dev:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dpkg -L libgif-dev
/.
/usr
/usr/include
/usr/include/gif_lib.h
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libgif.a
/usr/lib/x86_64-linux-gnu/pkgconfig
/usr/lib/x86_64-linux-gnu/pkgconfig/libgif7.pc
/usr/share
/usr/share/doc
/usr/share/doc/libgif-dev
/usr/share/doc/libgif-dev/NEWS.gz
/usr/share/doc/libgif-dev/TODO
/usr/share/doc/libgif-dev/changelog.Debian.gz
/usr/share/doc/libgif-dev/changelog.gz
/usr/share/doc/libgif-dev/copyright
/usr/lib/x86_64-linux-gnu/libgif.so
/usr/lib/x86_64-linux-gnu/pkgconfig/libgif.pc
</code></pre></div></div>

<p>As you can see, the dev package includes a directory called pkgconfig, which holds a file called libgif7.pc, as well as libgif.pc, a symlink pointing to the former.</p>

<blockquote>
  <p>To reiterate: the purpose of this package segmentation convention is to avoid the unnecessary deployment of files on our filesystem. dev packages include files that are only needed for developing with (or building against) a given package (headers, archive files, pkg-config files, etc).</p>
</blockquote>

<p>Let’s have a look at the contents of <code class="language-plaintext highlighter-rouge">pkgconfig/libgif7.pc</code>:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>prefix=/usr
exec_prefix=${prefix}
libdir=${prefix}/lib/x86_64-linux-gnu
includedir=${prefix}/include

Name: libgif
Description: Loads and saves GIF files
Version: 5.2.1
Cflags: -I${includedir}
Libs: -L${libdir} -lgif
</code></pre></div></div>

<p>As you can see, it provides information we’ll need when building against this library:</p>

<ul>
  <li>Its prefix</li>
  <li>Where the library files (shared objects, archive files) reside</li>
  <li>Its version (remember our description of the dependency hell?)</li>
  <li>Cflags: in this case it only sets the include path (-I) but, potentially, it could specify other flags to be passed to the compiler</li>
  <li>Libs: points the linker to our $libdir (-L) and specifies that we should link against libgif (-lgif)</li>
</ul>
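To see how little magic is involved, here is a small, self-contained sketch (POSIX sh and sed only; the .pc content is the libgif7.pc shown above, written to a temp file) that emulates the variable expansion pkg-config performs when asked for Cflags and Libs:

```shell
#!/bin/sh
# Sketch: emulate `pkg-config --cflags` / `--libs` for a single .pc file.
set -eu
tmp=$(mktemp -d)
cat > "$tmp/libgif7.pc" <<'EOF'
prefix=/usr
exec_prefix=${prefix}
libdir=${prefix}/lib/x86_64-linux-gnu
includedir=${prefix}/include

Name: libgif
Description: Loads and saves GIF files
Version: 5.2.1
Cflags: -I${includedir}
Libs: -L${libdir} -lgif
EOF
pcfile="$tmp/libgif7.pc"

# Expand ${name} references using the name=value definitions in the file.
expand() {
  line=$1
  while printf '%s' "$line" | grep -q '\${'; do
    name=$(printf '%s' "$line" | sed 's/.*\${\([A-Za-z_][A-Za-z0-9_]*\)}.*/\1/')
    value=$(sed -n "s/^$name=//p" "$pcfile" | head -n1)
    line=$(printf '%s' "$line" | sed "s|\${$name}|$value|")
  done
  printf '%s\n' "$line"
}

cflags=$(expand "$(sed -n 's/^Cflags: //p' "$pcfile")")
libs=$(expand "$(sed -n 's/^Libs: //p' "$pcfile")")
echo "Cflags: $cflags"
echo "Libs:   $libs"
rm -rf "$tmp"
```

The real pkg-config does more, of course (Requires resolution, version checks, PKG_CONFIG_PATH lookup), but the heart of --cflags/--libs is exactly this kind of variable substitution.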

<p>Let us look at a more elaborate example: the pkg-config for gtk4:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cat /usr/lib/x86_64-linux-gnu/pkgconfig/gtk4.pc
prefix=/usr
includedir=${prefix}/include
libdir=${prefix}/lib/x86_64-linux-gnu

targets=broadway wayland x11
gtk_binary_version=4.0.0
gtk_host=x86_64-linux

Name: GTK
Description: GTK Graphical UI Library
Version: 4.8.2
Requires: pango &gt;= 1.50.0, pangocairo &gt;= 1.50.0, gdk-pixbuf-2.0 &gt;= 2.30.0, cairo &gt;= 1.14.0, cairo-gobject &gt;= 1.14.0, graphene-gobject-1.0 &gt;= 1.9.1, gio-2.0 &gt;= 2.66.0
Libs: -L${libdir} -lgtk-4
Cflags: -I${includedir}/gtk-4.0
</code></pre></div></div>

<p>This one includes another field called Requires which specifies additional deps (similar to what the debian/control file does).</p>

<p>So how is this metadata used? Generally speaking, the steps when building C/C++ code are:</p>

<ul>
  <li>Generate and run the configure script to specify the features you want to build the project with (e.g. libgif support) and ensure all project deps can be found</li>
  <li>Build the code using a compiler (this can be done by invoking <code class="language-plaintext highlighter-rouge">make</code> with a Makefile, which will, in turn, invoke other tools, including the compiler; many other build frameworks/tools may be used as well, for example CMake)</li>
  <li>Link against needed deps using ld</li>
  <li>Optionally, install the resulting files (binaries, configs, metadata, man pages, etc) onto the target paths</li>
</ul>
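Tying these steps together, a minimal, hypothetical Makefile for a one-file project (a demo.c that links against libgif is assumed here) could delegate flag discovery to pkg-config rather than hard-code include and library paths:

```make
# Hypothetical sketch: query pkg-config at build time instead of
# hard-coding paths. Assumes a single source file, demo.c, linking libgif.
CFLAGS  += $(shell pkg-config --cflags libgif)
LDLIBS  += $(shell pkg-config --libs libgif)
PREFIX  ?= /usr/local

demo: demo.c
	$(CC) $(CFLAGS) -o $@ $< $(LDLIBS)

install: demo
	install -D -m 0755 demo $(DESTDIR)$(PREFIX)/bin/demo
```

If libgif’s .pc file ever moves the headers or library elsewhere, this Makefile keeps working untouched.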

<p>pkg-config will typically be involved in the configuration stage. For example, a configure script for a project that requires GTK4 may include this command:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pkg-config --cflags --libs gtk4
</code></pre></div></div>

<p>This will return output similar to the following:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-mfpmath=sse -msse -msse2 -pthread -I/usr/local/include/freetype2 -I/usr/include/gtk-4.0 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/fribidi -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/x86_64-linux-gnu -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/harfbuzz -I/usr/include/libpng16 -I/usr/include/graphene-1.0 -I/usr/lib/x86_64-linux-gnu/graphene-1.0/include -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -lgtk-4 -lpangocairo-1.0 -lpango-1.0 -lharfbuzz -lgdk_pixbuf-2.0 -lcairo-gobject -lcairo -lgraphene-1.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
</code></pre></div></div>

<p>At this point, you may be wondering why we need pkg-config at all if, as we’ve seen, deb/RPM based distros have tools that can provide the same information. The answer is rather simple but instead of phrasing my own version of it, let’s have a look at what Wikipedia has to say; from <a href="https://en.wikipedia.org/wiki/Pkg-config">https://en.wikipedia.org/wiki/Pkg-config</a>:</p>

<blockquote>
  <p><strong>pkg-config</strong> defines and supports a unified interface for querying installed <a href="https://en.wikipedia.org/wiki/Library_(computer_science)">libraries</a> for the purpose of <a href="https://en.wikipedia.org/wiki/Compiler">compiling</a> software that depends on them. It allows programmers and installation scripts to work without explicit knowledge of detailed library path information. pkg-config was originally designed for <a href="https://en.wikipedia.org/wiki/Linux">Linux</a>, but it is now also available for <a href="https://en.wikipedia.org/wiki/Berkeley_Software_Distribution">BSD</a>, <a href="https://en.wikipedia.org/wiki/Microsoft_Windows">Microsoft Windows</a>, <a href="https://en.wikipedia.org/wiki/MacOS">macOS</a>, and <a href="https://en.wikipedia.org/wiki/Solaris_(operating_system)">Solaris</a>.</p>
</blockquote>

<p>The operative words here are: <strong>unified interface.</strong></p>

<p>There are many Linux packaging formats and, while you can install the deb toolchain on an RPM-based distro and vice versa (as well as on some other Unices), doing so can cause confusion (was this package installed via RPM, dpkg or something else? Where does its metadata reside, and which tool should I use to fetch it?). Moreover, when writing your packaging/deployment scripts, you cannot really rely on either of these toolchains being present.</p>

<p>You could, of course, cover both cases with conditional statements but that would make for very long, error-prone and hard-to-maintain code. Further, these tools will only work with packages deployed through them. In other words, dpkg will not return data for files/packages that were installed by manually invoking <code class="language-plaintext highlighter-rouge">make install</code>, <code class="language-plaintext highlighter-rouge">cmake --install</code> or <code class="language-plaintext highlighter-rouge">rpm -i</code>. <br />
Moreover, while distros like Debian and RedHat (to name only two) do go to great lengths to package a multitude of popular (and some less popular) packages, no single distro can cover everything under the sun and chances are you’ll still have to build SOME of your needed dependencies yourself rather than rely on pre-built packages from the official distro repos.</p>

<p>pkg-config is useful precisely because it is a unified interface. Declare that your build depends on it and you’re covered:) It only addresses the build side, however; it does nothing about deploying pre-built binaries, which is why packaging formats like deb and RPM are so important.</p>

<p>At this point, it is worth mentioning that there are alternatives to pkg-config. One such prominent alternative is <a href="https://en.wikipedia.org/wiki/GNU_Libtool">GNU Libtool</a>.</p>

<p>Once more, rather than needlessly work on my own explanation, allow me to refer you to Wikipedia for a quick comparison: <a href="https://en.wikipedia.org/wiki/Pkg-config#Comparison_with_libtool">https://en.wikipedia.org/wiki/Pkg-config#Comparison_with_libtool</a></p>

<p>Right, let’s recap:</p>

<p>In this chapter, we’ve discussed the purpose of pkg-config and demonstrated some obvious advantages of the packaging approach deb and RPM share, as well as explained the reasoning behind package segmentation.</p>

<p>Join me for the next instalment of this series where I’ll return to the use of chroots, the added value provided by Docker and how we can combine proper packaging and Docker in our pursuit of the perfect build, packaging and deployment process:)</p>

<p>May the source be with you,</p>

]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[Docker is not a packaging tool — part 2]]></summary></entry><entry><title type="html">Annoying things LI can easily fix (and should!)</title><link href="https://packman.io/2023/05/07/Annoying-things-LI-can-easily-fix-and-should.html" rel="alternate" type="text/html" title="Annoying things LI can easily fix (and should!)" /><published>2023-05-07T00:01:03+00:00</published><updated>2023-05-07T00:01:03+00:00</updated><id>https://packman.io/2023/05/07/Annoying-things-LI-can-easily-fix-and-should</id><content type="html" xml:base="https://packman.io/2023/05/07/Annoying-things-LI-can-easily-fix-and-should.html"><![CDATA[<ul>
  <li>No Markdown support. Even if you somehow fail to consider the large number of LI users who are in the tech business (developers, DevOps, QA, what have you), MD can be very useful to non-tech people as well. The nice thing about MD is you can successfully use it without even knowing it. For example, if you prefix your sentence with ‘-’ or ‘*’ (which people do naturally), you get a bullet. That’s a lot better than learning how to insert Unicode chars (most people don’t even know what Unicode is). For tech people (whom you SHOULD care about), MD is extremely important because it provides an easy way to highlight syntax (in combination with simple CSS) and, as importantly — to format the code (and no, I’m not going to use spaces in the HTML textbox like a crazy person to format my copied code). <br />
I cringe whenever I am forced to share a screenshot of my code open in VIM for the sole purpose of making it more readable (due to proper formatting and syntax highlighting). I recently shared a bunch of such screenshots in a back and forth commenting session with someone. In total, the size of these images was 300K; had I simply posted text, it would have been under 1K. You may think 300K is marginal (you should actually double that, since he also shared his) but think about how many people do the same on a daily basis — it accumulates! Furthermore, while this makes code easier to read, it also means it cannot be copied. Moronic. Support Markdown, seriously, everyone else does.</li>
  <li>Char limitation when commenting. You’re not saving space, MS. You’re actually wasting more DB space with this because what people do when confronted with this limitation is concatenate their reply across multiple comments. This is not only a massive headache for your users (readers and commenters alike), it makes your DB fatter and slower because you end up with more records in the table that stores these comments. I’m not saying it should be unlimited, but seriously, DOUBLE it. It will cover most cases and save you some space and us some inconvenience.</li>
  <li>Different behaviour between the Android app and website when looking at post reactions. When I view reactions to my posts in the app, I have a little icon next to each user that indicates whether or not I can send them a connection request or only follow them or message them. This saves a lot of time because it means I don’t have to visit their profile to find out. Why does your website not expose the same option? In the app, I can also sort based on the comment time (newest first, oldest first) or, the mysterious criterion: “most relevant”. The latter is useless (though I’d like to know the metrics used to score this) but the other two are sometimes useful.</li>
  <li>The next one isn’t a bug on your end but it’s annoying and you can easily solve it. Here’s the scenario: I scroll through my feed and encounter a post that includes a link to an external resource (in other words, a URL). I am interested, so I click, and Android opens my default browser with said URL. I read as I please but then, when I close the browser, I am returned to my feed and… it gets updated. I’ve now lost the LI post that led me to that link. Too bad. I liked what I read and I wanted to show my appreciation to the poster; only I can’t find it.<br />
How to solve this? Store a list of N last viewed posts and expose these under “Activity”. N does not have to be a huge number. For the scenario I’m describing, 1 would actually do but there are other scenarios where this can be of value so, shall we say 5 last viewed posts?<br />
And yes, I know I can save a post for later but, at the point of being navigated away from the LI app and onto the browser, I don’t know whether it’s valuable or not, I’m trying to find out.</li>
</ul>

<p>There are other things I could point out but, while extremely annoying, they are either too much of an edge case to interest you or things I know you will not change because you think you can wear users down. An example of the latter is your constant badgering about a free month of “Premium”. Just so you know, I will never pay for a premium subscription and I will also not opt in for a free month of it because you want my credit card in advance and, well, we all know how this works out… I should also note that when I did click on this call to action (just to see whether I’d have to provide a payment method in advance), I was confronted with a bloody questionnaire about my goals. Seriously? Are you trying to get my money or give me a quarterly review?!</p>

]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[No Markdown support. Even if you somehow fail to consider the large number of LI users who are in the tech business (developers, devOps, QA, what have you), MD can be very useful to non-tech people as well. The nice thing about MD is you can successfully use it without even knowing it. For example, if you prefix your sentence with ‘-’ or ‘*’ (which people do naturally), you get a bullet. That’s a lot better than learning how to insert Unicode chars (mostly people don’t even know what Unicode is). For tech people (which you SHOULD care about), MD is extremely important because it provides an easy way to highlight syntax (in combination with simple CSS) and, as importantly — to format the code (and no, I’m not going to use spaces in the HTML textbox like a crazy person to format my copied code). I cringe whenever I am forced to share a screenshot of my code open in VIM for the sole purpose of making it more readable (due to proper formatting and syntax highlighting). I recently shared a bunch of such screenshots in a back and forth commenting session with someone. In total, the size of these images was 300K, if I simply posted text, it would have been under 1K. You may think 300K is marginal (you should actually double that as he’s also shared his) but, think about how many people do the same on daily basis — it accumulates! Furthermore, while this makes code easier to read, it also means it cannot be copied. Moronic. Support Markdown, seriously, everyone else does. Char limitation when commenting. You’re not saving space, MS. You’re actually wasting more DB space with this because what people do when confronted with this limitation is concatenate their reply across multiple comments. 
This is not only a massive headache for your users (reads and commenters, alike), it makes your DB fatter and slower because you end up with more records in the table that stores these comments. I’m not saying it should be unlimited, but seriously, DOUBLE it. It will cover most cases and save you some space and us some inconvenience. Different behaviour between the Android app and website when looking at post reactions. When I view reactions to my posts in the app, I have a little icon next to each user that indicates whether or not I can send them a connection request or only follow them or message them. This saves a lot of time because it means I don’t have to visit their profile to find out. Why does your website not expose the same option? In the app, I can also sort based on the comment time (newest first, oldest first) or, the mysterious criterion: “most relevant”. The latter is useless (though I’d like to know the metrics used to score this) but the other two are sometimes useful. The next one isn’t a bug on your end but it’s annoying and you can easily solve it. Here’s the scenario: I scroll through my feed and encounter a post that includes a link to an external resource (in other words, a URL). I am interested so, I click and Android opens my default browser with said URL. I read as I please but then, when I close the browser, I am returned to my feed and… it gets updated. I’ve now lost the LI post that led me to that link. Too bad. I liked what I read and I wanted to show my appreciation to the poster; only I can’t find it. How to solve this? Store a list of N last viewed posts and expose these under “Activity”. N does not have to be a huge number. For the scenario I’m describing, 1 would actually do but there are other scenarios where this can be of value so, shall we say 5 last viewed posts? 
And yes, I know I can save a post for later but, at the point of being navigated away from the LI app and onto the browser, I don’t know whether it’s valuable or not, I’m trying to find out.]]></summary></entry><entry><title type="html">From RTFM to participant awards</title><link href="https://packman.io/2023/05/06/From-RTFM-to-participant-awards.html" rel="alternate" type="text/html" title="From RTFM to participant awards" /><published>2023-05-06T18:11:35+00:00</published><updated>2023-05-06T18:11:35+00:00</updated><id>https://packman.io/2023/05/06/From-RTFM-to-participant-awards</id><content type="html" xml:base="https://packman.io/2023/05/06/From-RTFM-to-participant-awards.html"><![CDATA[<p>This is a story of the decline of our standards. I expect the opinions I express here may be controversial but honestly? I don’t mind it. I’m all for discussion and while I’d rather only receive supportive comments (anyone that says they want to be disagreed with is lying), I’m open to opposing opinions as well, so long as we keep it polite and on point.</p>

<p>To those who don’t know the acronym, RTFM stands for Read The Fucking Manual. This used to be a perfectly reasonable response when someone asked a stupid or super trivial question that is covered in the documentation.</p>

<p>I am not a huge fan of expletives and use them very rarely but since RTM is not as effective in driving the point home, I suppose, had I coined this acronym, I’d probably have used “bloody” (Americans often think “bloody” is just the UK version of “fucking” but it isn’t so). But I digress…</p>

<p>At a certain point in time, it somehow became impolite to use RTFM and now, it appears to be perceived as plain rude. This is not because of the “fucking” bit at all. If anything, it seems that cursing is becoming more and more acceptable, not less. <br />
So then, what is the reason for this change in norms, and why is it so important?</p>

<p>Well, let’s consider the notion this acronym conveys so succinctly; You’ve asked a stupid/trivial question. This implies that:</p>

<blockquote>
  <p>You have not read the manual <strong>which you are expected to do</strong> , and now, because you’re too lazy to read what I have read to be in a position to answer you (or even written myself for others to read, as the case may be), you’re needlessly wasting my valuable time</p>
</blockquote>

<p>or:</p>

<blockquote>
  <p>You have read the manual and still managed not to know the answer to this basic question so… come on, don’t make me say it.</p>
</blockquote>

<p>At this point, you may be thinking: “<em>Jesse, that’s unfair! Maybe the questioner DID read the manual and it’s simply badly written?</em>”</p>

<p>To this I say: indeed, manuals are written by people and people make mistakes. Thinking that something will be obvious to others because it is to YOU is a very common and human mistake to make. HOWEVER, when that’s the case, I’d expect the question to be prefixed with:</p>

<blockquote>
  <p>“I read the manual and in particular THIS section and couldn’t find the answer. I then proceeded to Google my question and looked at multiple results but still couldn’t nail it down. Could you please help me?”</p>
</blockquote>

<p>If you prefixed your question in that manner and still got an RTFM reply then, you’re right, it’s unfair but, that’s hardly ever the case.</p>

<p>Another case I’m fine with and will never respond to with RTFM is when someone who is a friend or at least, a very close peer, prefixes the question with:</p>

<blockquote>
  <p>“I know it’s probably in the manual but, you’re quicker to ask and I’m very stressed at the moment…”</p>
</blockquote>

<p>I’ll absolutely accept that. As humans, our behaviour is subjective and friends are more important than manuals.</p>

<p>These exceptions aside, basically, replying with RTFM berates the questioning party for failing to adhere to fundamental expectations: read manuals to learn what to do and, more implicitly — do not disturb others needlessly.</p>

<p>The reason RTFM is now considered rude is because we no longer expect that of people, which is extremely sad and yes, <strong>important</strong>. The rest of this post is an account of my opinion as to why and how we got to a point where we no longer have basic standards and expectations where it comes to learning.</p>

<h3 id="it-all-starts-with-the-bloody-participant-awards">It all starts with the bloody participant awards</h3>

<p>I remember the first time I heard about participant awards. It was in an article written about a book discussing education and its results. Unfortunately, it was a rather long time ago (over 10 years) and I cannot find it, which is a shame. I have not actually read the book (or I’d certainly remember its title and author name), only the article about it. I tried Googling key phrases from it in an attempt to locate it but couldn’t. If anyone knows the book I’m describing, please comment.</p>

<p>At any rate, for those of you that don’t know, it has become common practice to give children an award, not for being the best at something but merely for SHOWING UP. Yes, that’s right. A competitive event is held, one child performs better than all the others, some perform well, others still perform poorly and EVERYONE gets an award.</p>

<p>When I was growing up, we did not get awards for showing up and learning that this is now done drove me absolutely bonkers. At that time, what worried me (and the author of this book I cannot find) was that this sends the wrong message to children and will produce adults that cannot handle the real world. What I failed to predict was that, instead, the world would be changed by these very children.</p>

<p>Let’s take a step back here and talk about the generation of my parents (just to frame this — I am 41). My grandfather was my sole male role model growing up. I idolised him, I still do and he absolutely deserves it. As a child, he taught me how to do pretty much everything but computer programming, which I taught myself (I don’t believe he ever touched a computer, frankly, but I think, had he been born a bit later, he’d have loved it). He was always kind, patient and accepting. I owe him everything. He never hit me or any of my siblings and I can only remember ONE occasion in which he yelled at us and even then, it was only because he was concerned that we’re disturbing my nan.</p>

<p>Why do I bring this up in this context? Because my mum once told me that, instructed by nan, he used to hit her and her siblings. My first reaction was complete shock and I was even inclined to believe that she was lying (we’ve a very complicated relationship and I don’t trust her) but upon reflection, I suppose it’s probably true. I’m POSITIVE that he didn’t want to do that but yes, it was a different time and hitting children was considered an acceptable and even educational form of punishment. That proved to be bad practice so we stopped doing it, which is a good thing!</p>

<p>Every generation thinks they are the best and that the preceding and subsequent ones are rubbish. I am aware of this and am not an exception to the rule. Further, the blessed influence of my granddad aside, I had a most rotten childhood so, I will not bring my memories of how my parents handled me as an example to support any of the observations I make in the following few paragraphs (I do lean into my personal experience towards the end of this post though — it’s unavoidable).</p>

<p>Having said that, one’s parents are not the sole influence on one’s young life and, reflecting on the general messages I absorbed growing up, it was obvious that physical violence towards children (or anyone for that matter) was discouraged. It was equally clear that in order to get praise and do well, one had to work hard. I can’t recall an incident where the underlying message was that everyone is amazing and it’s enough to just SHOW UP.</p>

<p>I don’t mean to imply, by any means, that the educational methods applied to me and my peers were perfect. Far from it. And I have mates that suffered unjustly because of various syndromes I believe they now call learning disabilities (forgive me if that’s not the correct PC term at the time of writing this, it’s hard to keep track).</p>

<p>So, that was my world growing up. Imperfect and often unfair towards those who are substantially different (substantially because everyone is different in one way or the other and feels it).</p>

<p>In some ways, one could say, perhaps rightly, that things are better now; at least in terms of accepting diversity of thought and abilities. <br />
For example, I have Asperger’s. I went through my entire childhood and much of my adult life without being diagnosed. I was finally diagnosed at the age of 30, by a psychiatrist whom I came to consult about a, then new to me, syndrome called OCD. Would I have been better off growing up now? In that particular sense, perhaps. It’s possible that a young Jesse growing up today would have been diagnosed as autistic, rather than just labelled as weird, and it would probably have been a very good thing. At least in my case. I do know that this diagnosis, late as it was, has helped me: I can now refer people to a Wikipedia page and get more favourable treatment when I behave “weirdly”.</p>

<p>But I think we’re now taking it to the other extreme. It seems to me that everyone (parents, teachers, the whole world and his sister) is afraid to give any criticism lest they hurt the child’s fragile soul. And this is what the participant award is all about. We’re so fearful that, if we were to say John or Jill did best at a competition, the other children would get upset and be scarred for life, so instead, we give EVERYONE an award. THIS IS WRONG.</p>

<p>It’s unfair to John and Jill because we’re not acknowledging their hard work and talent, and it’s unfair to all the other children because we’re not encouraging them to find something they excel at. Everyone cannot be special at everything. If everyone is special, then NO ONE IS.</p>

<p>People of all ages want to be acknowledged for their special abilities and efforts and they want to be guided as to how to find and capitalise on these.</p>

<p>If you’re told that everyone is equally amazing and special, you’re essentially being told that there’s no reason to work hard. Which brings us back to the RTFM point. People don’t read the manual because, as children, they learned, from these very incidents and messages, that they don’t have to work hard to be successful or appreciated and that they, like everyone else, are amazing.</p>

<p>Let’s focus on this one word: amazing. It seems to be used so… liberally these days. So much so that, one day, I actually looked its definition up, just to establish that I hadn’t gotten it wrong all these years! Here it is:</p>

<blockquote>
  <p>Causing great surprise or wonder; astonishing.<br />
Informal: very impressive; excellent.</p>
</blockquote>

<p>Right. So, as I suspected, I did not get it wrong. When I use the word amazing, which I am incredibly discerning about, I mean exactly that: very impressive, excellent. I use it sparingly, because, let’s face it, most things are NOT very impressive or excellent. We have many other words to describe things that are not THAT; for example: ordinary, regular, normal, usual (and dozens if not hundreds of other words, in EVERY language).</p>

<p>To this day, and even after gaining all the above insights, I am still extremely disappointed, borderline crestfallen even, when the word “amazing”, having been used in reference to something I worked hard on and that required skill, is then used to describe a most minimal effort.</p>

<p>Giving undeserved accolades and awards is hurting everyone, including the very people you’re trying to make feel good by doing so. Let me give you an example:</p>

<p>I like music. Not as much as I like programming, but I like it. I started coding when I was 9. Not only did I receive no encouragement from my parents, I was actually discouraged by my mother (who thought, and probably still does, that computers are an utter waste of time) and my efforts were further hindered by the fact that my sperm donor (I know him, I just don’t call him father) occupied our only computer through large portions of the day, playing video games.</p>

<p>If you’re reading this and thinking that it’s very sad, I agree with you, but here’s my actual point: when I was 10, I took some guitar lessons. This, I WAS encouraged to do, because my mother liked music (presumably; I don’t recall her ever actually listening to or commenting on music, but I also don’t recall her saying it was a waste of time) and my sperm donor was allegedly a rather gifted guitar player (I say allegedly because, as you’ll soon come to realise, I am in no position to judge such abilities). My mum paid for these lessons and I don’t recall having to beg, or even vigorously ask, for them (programming I learned by myself from reading the programs that came with QBASIC and, when I was 14, after having programmed a bit in QBASIC, I had to BEG her quite a bit before she agreed to pay for a book about C).</p>

<p>Anyway, I was no good. I don’t recall anyone actively telling me that I was rubbish at it (maybe they did and it just didn’t matter enough to register) but I definitely knew I was not very good, and I felt no inner compulsion to work hard at it either, so, soon enough, I stopped the lessons.</p>

<p>While it would have been nice to be able to play well, imagine a world where, although I was no good, I’d have been told that I was amazing at it. Well, for one thing, whether you’re 10 years old or 41, we all like praise. I’d probably have kept playing (badly, while wasting my mum’s most limited funds) and maybe I’d even have developed a ridiculous dream of becoming a rock star (ridiculous in light of my lack of talent; I’m not saying NO ONE should cultivate that dream — there are, after all, successful, excellent musicians who made an impact on many people, myself included). Why is that bad? Because:</p>

<blockquote>
  <p>a. I am not talented at playing the guitar and could never, no matter how hard I worked, become amazing enough to succeed professionally.</p>

  <p>b. Because of my desire to excel, I’d be immensely frustrated by a.</p>

  <p>c. I would have given less attention to something that I AM good at, love, and make a decent living from, to wit: programming.</p>
</blockquote>

<p>Now, is it unfortunate that my mother did not encourage my natural tendencies? Very. Would I have become a happier adult if she had? ABSOLUTELY. But would taking the opposite approach, encouraging me to stick with something I am clearly NOT good at, have resulted in a better outcome? ABSOLUTELY NOT.</p>

<p>And the same is true of how we engage with adults in all capacities. We should never tell someone they’re amazing at something when they’re not, just to be “nice” or to avoid insulting them. Misleading someone isn’t nice. It’s counterproductive for their well-being, for yours, for the organisation and for society at large.</p>

<p>We SHOULD expect people to read manuals and learn on their own. We should not spoon-feed them and, if we’ve given them ample opportunity to perform and they haven’t, we should tell them so. I’m not saying we should be brutal and make them cry, of course. Giving constructive criticism is a very complex art, indeed, one that managers, teachers and parents should invest time in perfecting, but avoiding it because it doesn’t feel “nice” is not a solution.</p>

<p>Here’s to applying yourself,</p>

]]></content><author><name>Jesse Portnoy</name></author><summary type="html"><![CDATA[This is a story of the decline of our standards. I expect the opinions I express here may be controversial but honestly? I don’t mind it. I’m all for discussion and while I’d rather only receive supportive comments (anyone that says they want to be disagreed with is lying), I’m open to opposing opinions as well, so long as we keep it polite and on point.]]></summary></entry></feed>