[{"categories":["Dev Tooling"],"collections":null,"content":"If you’ve been around a project for any length of time, you’re familiar with the need for automation – specifically, something to keep track of all the little tasks that need to be done over and over again. These tasks are typically at least somewhat deterministic, don’t vary from run to run, and are run by more than one person. The classic solution here is to use make and a Makefile. For older codebases where you’re only worried about building C or C++ code, this is perfectly acceptable – it’s what make was designed for, after all. make is nearly universally available on Unix-like systems, and it’s relatively simple to use for basic tasks. The challenge comes when you need to execute non-deterministic tasks: things like running tests, linting code, bringing test environments up and down, and so on. make can handle these tasks, but it’s not always the best tool for the job. 1 ","date":"2025-12-28","objectID":"/2025/12/taking-make-to-task/:0:0","tags":["task","make","taskfile","task runners"],"title":"Task and Taskfiles","uri":"/2025/12/taking-make-to-task/#"},{"categories":["Dev Tooling"],"collections":null,"content":"task, a modern job-runner Note This is likely to be the first in an irregular series on task. Roughly 8 months ago, a colleague and friend introduced me to the task program and the Taskfiles it uses. task is a general-purpose task runner: a system designed to trivially enable writing, documenting, and running tasks. More YAML, yes, but that’s hardly unusual these days. More importantly: no arcane incantations, no mystical recipes. Just a well-designed YAML schema and straightforward embedded bash. 
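As a taste of that schema, here is a minimal Taskfile sketch; the task names and commands are illustrative, not from any particular project:

```yaml
# Taskfile.yml -- a minimal sketch; task names and commands are illustrative
version: "3"

tasks:
  default:
    desc: List the available tasks
    cmds:
      - task --list

  test:
    desc: Run the test suite
    cmds:
      - go test ./...
```

Running task --list prints each task alongside its desc, so the Taskfile doubles as lightweight documentation of the project's chores.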
","date":"2025-12-28","objectID":"/2025/12/taking-make-to-task/:1:0","tags":["task","make","taskfile","task runners"],"title":"Task and Taskfiles","uri":"/2025/12/taking-make-to-task/#task-a-modern-job-runner"},{"categories":["Dev Tooling"],"collections":null,"content":"jobs, targets and dependencies Both make and task define each job as a “target” (though task prefers to call them… “tasks”). Both allow for the chaining of tasks, either implicitly or explicitly. I’m going to set aside how make dependencies work for the moment – there being more books, papers, and articles on that topic than I’m willing to even hazard a guess at enumerating – and take a look at how task handles it. Generally speaking, task dependencies can be broken out into: Does this task need to be run? Did the source change; or Does this generic test fail? This other task needs to be run before/after/during. The former can be thought of as “self dependency”, while the latter is more typically what we think of as a dependency. I’m not going to regurgitate the documentation here – if you’re reading this, you’re not likely to appreciate that very much – but I do want to give a couple of examples that illustrate how powerful this approach is. Both of these forms may be combined or used in dependent tasks to support useful workflows. For example, let’s say you have a golang project, one that requires certain services to be spun up before you can attempt to run/test the built binary. Note that we have a couple different dependency types here: does the binary need to be (re)built? Is the test service up? --- version: \"3\" tasks: # executes \"cmds\" only if task determines that \"thingie\" needs to be # rebuilt from the sources listed build: desc: 'Build it!' sources: - '**/*.go' - go.mod - go.sum generates: - thingie cmds: - cmd: go build -o thingie service-up: desc: Bring up a service we need for... 
reasons status: - systemctl is-active ollama.service cmds: - cmd: systemctl start ollama.service - task: service-pull service-pull: desc: Something we need to do and may want to do independently deps: - service-up vars: MODEL: 'hf.co/ibm-granite/granite-4.0-h-micro-gguf:latest' status: - 'ollama list | grep -q {{.MODEL}}' cmds: - cmd: 'ollama pull {{.MODEL}}' run: desc: Run the test/app/whatever deps: - build - service-up cmds: - cmd: ./thingie ... In the above, note how: build will only execute if any of the sources change (much as make does). service-up only attempts to start the service if it isn’t already started. (Agreed, with systemd this isn’t a huge savings, but imagine if the service is running inside a container launched by podman, etc.) service-up declares a “run-time” dependency on service-pull. service-pull only attempts to pull the model if it is not already present on the system. The net result of this is a set of tasks flexible enough that we’re able to run each step individually (if we so choose), but with expressive enough dependency information that we’re not going to have tasks failing due to a dependency not being built / started / fetched / etc. Enjoy! For an excellent example of both the power and the pitfalls of using make for non-build tasks, see the Makefile generated by the kubebuilder project. ↩︎ ","date":"2025-12-28","objectID":"/2025/12/taking-make-to-task/:2:0","tags":["task","make","taskfile","task runners"],"title":"Task and Taskfiles","uri":"/2025/12/taking-make-to-task/#jobs-targets-and-dependencies"},{"categories":["tradecraft"],"collections":null,"content":"Linting is one of those things that can be weirdly controversial. I say weirdly as I cannot quite understand the objections to it – it’s not like we’re all perfect typists, after all. 
Generally the objections range from “My editor does it for me” to “I forget to run them before pushing!” to my favorite, “They always fail in CI!” None of those are legitimate reasons to avoid linting. We call it “linting” because it’s like the lint on your clothes: small, annoying, seems to spontaneously self-incarnate, and is trivial to remove. Linting is one of the things computers excel at. ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:0:0","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#"},{"categories":["tradecraft"],"collections":null,"content":"What is Linting? Simply put: Linting is any automated process that enforces a deterministic, agreed-on set of coding, format, style, or other technical standards. Tip The content you’re linting can have a great impact on the tools you use to lint, and what you can lint. For example, a statically typed language like Go can be linted in ways that a dynamically typed language like Perl cannot. Linting is: Spell-checking Validating data file formats Validating end-of-file and other platform issues Static code checkers / analysis Code style checks Ensuring license headers are applied everywhere Other fully automated checks for formatting, style, etc, that can be run without human intervention Linting is not: Code reviews Manual testing User acceptance testing Anything requiring human intervention Linting may be: Running unit tests Building binaries (to validate it actually builds, not deployment) …that’s really up to you. Linters may also suggest fixes that can then be evaluated and applied, if appropriate. This is not necessary, but is very convenient. ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:1:0","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#what-is-linting"},{"categories":["tradecraft"],"collections":null,"content":"How do we lint? 
All effective linting implementations I’ve seen have the following characteristics: They use a linting framework They can be run locally They are run and enforced in CI/CD ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:2:0","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#how-do-we-lint"},{"categories":["tradecraft"],"collections":null,"content":"Linting Frameworks It’s certainly possible to “roll your own” approach here, but why? While there aren’t a huge number of linting frameworks, all you really need is one good, general-purpose one. (Which is why there aren’t a huge number of them.) Pre-commit is an excellent example of a linting framework. It is Open Source, is widely used, has a large number of pre-built hooks, and is trivially extensible. Additionally, it is well-documented, well-supported, and actively maintained. Its name implies a certain workflow: that it must be run before each commit. However, this is not the case. It is a general-purpose linting framework with built-in support for Git hooks, but it can be run at any time. pre-commit/pre-commit: A framework for managing and maintaining multi-language pre-commit hooks. pre-commit/pre-commit-hooks: Some out-of-the-box hooks for pre-commit. Pre-commit linters (“hooks”) Note Pre-commit “hooks” are the individual linters that are run by the pre-commit framework; they should not be confused with Git hooks. I’ve included the base, general pre-commit-hooks repository above. Note that this is not the only repository of hooks available – there are many others, including some that are specific to a particular language or tool. 
Here are a few examples of pre-commit hooks that I’ve found useful: TekWizely/pre-commit-golang: Pre-commit hooks for Golang with support for monorepos, the ability to pass arguments and environment variables to all hooks, and the ability to invoke custom go tools. adrienverge/yamllint: A linter for YAML files. syntaqx/git-hooks: A collection of git hooks for use with pre-commit. hadolint/hadolint: A Dockerfile linter that validates inline bash, written in Haskell. python-jsonschema/check-jsonschema: A CLI and set of pre-commit hooks for jsonschema validation with built-in support for GitHub Workflows, Renovate, Azure Pipelines, and more! IBM/detect-secrets (a fork of Yelp/detect-secrets): An enterprise-friendly way of detecting and preventing secrets in code. Lucas-C/pre-commit-hooks: git pre-commit hooks. terraform-docs/terraform-docs: Generate documentation from Terraform modules in various output formats. Pre-commit hooks are also surprisingly easy to create. ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:2:1","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#linting-frameworks"},{"categories":["tradecraft"],"collections":null,"content":"When do we lint? Tip Generally speaking, at a minimum one should lint before submitting a changeset for review (that is, creating a pull request). This is a minimum level of courtesy to your peers. I always lint before pushing – setting up a pre-push hook tends to make this impossible to forget. Others have other preferences, and that’s fine: running the linters in CI/CD ensures that they cannot be forgotten. (…and that the reviewer does not need to enforce them.) A successful linting workflow tends to run like this: Local Lint at some point before pushing CI Lint as the first pipeline job on every push Fail the entire pipeline early if linting fails ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:3:0","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#when-do-we-lint"},{"categories":["tradecraft"],"collections":null,"content":"Linting Locally Running locally is both critical and not essential. It’s critical in that if you push a change that fails linting, your changeset will fail CI. At the same time, it is not essential, as you can always push a change, wait for CI to fail, and then fix it. Your call. As Pre-commit’s name implies, it was originally designed to be run before you make a commit – e.g. as a pre-commit hook. However, with modern Git workflows this can be quite time-consuming. (Imagine having to wait for your linters to run every time you create a fixup! commit – it gets old fast.) A happy balance between “every time” and “never” is to run it before you push, typically as a pre-push hook. 
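For pre-commit, the hooks to run are declared in a .pre-commit-config.yaml at the repository root; here is a minimal sketch (the hook selection and the rev pin are illustrative, not a recommendation):

```yaml
# .pre-commit-config.yaml -- a minimal sketch; pin rev to a real release tag
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0  # illustrative version
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: check-yaml
      - id: check-json
```

With this in place, a single pre-commit run --all-files exercises every declared hook, locally or in CI.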
pre-commit install --install-hooks --hook-type pre-push Regardless of the tool used to lint, leveraging a pre-push hook helps ensure that you catch any linting errors before anyone else sees them. ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:3:1","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#linting-locally"},{"categories":["tradecraft"],"collections":null,"content":"Lint in CI Warning I cannot emphasize this enough: Enforce your lint rules in CI. People didn’t stop littering because it was the right thing to do. They stopped because they were fined for it. It is imperative that you lint in CI. Not linting in CI defeats the whole purpose of linting. We lint to catch the (typically) trivial mistakes we may make, and we run CI to automate testing for mistakes. If we believed every changeset to be perfect, we wouldn’t have CI in the first place. If you find your CI/CD pipelines failing frequently due to linting errors, you may want to examine some of your own practices. (Like, say, setting up a pre-push hook?) If you do not enforce your linting rules in CI, you’re effectively forcing other people to do it for you. This is not only rude, but it’s also a waste of time and resources. ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:3:2","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#lint-in-ci"},{"categories":["tradecraft"],"collections":null,"content":"What do we lint? Tip Linting is not just for code. It can be used to validate data files, check your spelling, and even ensure that your documentation is up-to-date. Simple answer? We lint everything we can. If you can lint it, you should. Linting is much like brushing your teeth or washing your hands – it’s a simple, effective way to prevent problems. Also, it’s really gross, obvious to even the most casual of observers, and just downright lazy 1 if you don’t. 
If you’re using pre-commit, there are a huge number of pre-built hooks you can use as well as a large number of third-party hooks. It is also easy to write your own, should the need arise. Important things to lint include: platform-specific line and file endings are consistent data files (e.g. JSON, YAML, TOML) are not malformed code is legal and styled consistently the project is buildable (e.g. it is “complete” and coherent) secrets have not been committed ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:4:0","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#what-do-we-lint"},{"categories":["tradecraft"],"collections":null,"content":"You have to start somewhere Tip Take the Boy Scout approach to linting: Every time you find lint, leave the project a little cleaner (and the linter a little smarter) than you found it. It’s not often possible to know all the things you need to lint from the start. Much as with testing, you’ll find additional things to lint for as your project moves forward – especially as additional people, perhaps using different operating systems, editors, or tooling, begin to contribute. I’ve found it useful to treat linting like other testing processes: Lint (test) all the obvious things right off the bat; and Add additional linting as you find issues. For example, it’s fairly obvious in a Go project that running gofmt is a good idea. It is perhaps less obvious that you should lint your Markdown README.md to ensure it renders correctly – until someone pushes one that does not. Similarly, it’s fairly obvious that one should lint YAML to ensure it’s valid, but less obvious that you should enforce a consistent style of indenting lists. 
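yamllint can enforce exactly that sort of style rule; here is a sketch of a .yamllint configuration (the rule values are illustrative, chosen only to show the shape):

```yaml
# .yamllint -- extend the defaults and pin down list indentation
extends: default

rules:
  indentation:
    spaces: 2
    # require sequence items to be indented under their parent key
    indent-sequences: true
```

Dropping a file like this at the repository root means every contributor's YAML is held to the same indentation style, regardless of editor.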
","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:4:1","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#you-have-to-start-somewhere"},{"categories":["tradecraft"],"collections":null,"content":"Finally… Linting is a simple, effective way to ensure that your project is clean and consistent. We created computers to help us with automated, repetitive, deterministic tasks like this – tasks that humans have a tendency to forget or overlook. Automated, enforced linting is a simple and effective way to instantly raise and ensure the overall hygiene and quality of any project. Just do it! Linting is only ever a big deal when it is not done. I mean, come on. This isn’t the good type of lazy – this is the bad type that imperils your Hubris. ↩︎ ","date":"2024-10-29","objectID":"/2024/10/linting-thoughts/:5:0","tags":["pre-commit","hygene","ci-cd","development"],"title":"Linting Thoughts","uri":"/2024/10/linting-thoughts/#finally"},{"categories":["GitLab","Infrastructure as Code"],"collections":null,"content":"I’m a huge fan of terraform, so when I needed to build out cloud infrastructure for GitLab CI/CD it was the first thing I reached for. The native terraform-provider-gitlab was very useful, but left out one critical detail: it was not possible to register a runner. Ouch. 💢 This left a rather annoyingly awkward gap in my terraform configurations, as I’d need to provision a runner token outside of terraform. I messed around with this, coming up with a couple… interesting approaches, but ultimately I realized that the only proper solution to this (read: that wasn’t just a giant hack) would require writing a provider. 
","date":"2022-01-14","objectID":"/2022/01/terraform-provider-gitlabci/:0:0","tags":["terraform providers","gitlab runners","terraform-provider-gitlabci","GitLab","terraform","IaC"],"title":"terraform-provider-gitlabci: Register GitLab CI Runners","uri":"/2022/01/terraform-provider-gitlabci/#"},{"categories":["GitLab","Infrastructure as Code"],"collections":null,"content":"terraform-provider-gitlabci Recently I’ve had some free time, so I cleaned it up a bit and published it. terraform registry (docs, etc) source hosted at GitLab ","date":"2022-01-14","objectID":"/2022/01/terraform-provider-gitlabci/:1:0","tags":["terraform providers","gitlab runners","terraform-provider-gitlabci","GitLab","terraform","IaC"],"title":"terraform-provider-gitlabci: Register GitLab CI Runners","uri":"/2022/01/terraform-provider-gitlabci/#terraform-provider-gitlabci"},{"categories":["GitLab","Infrastructure as Code"],"collections":null,"content":"A quick example Documentation and the like can be found over at the terraform registry, but here’s a quick example with only a minimum of hand-waving: terraform { required_providers { gitlabci = { source = \"registry.terraform.io/rsrchboy/gitlabci\" } gitlab = { source = \"registry.terraform.io/gitlabhq/gitlab\" } } } provider \"gitlabci\" { } provider \"gitlab\" { } data \"gitlab_project\" \"this\" { id = \"rsrchboy/terraform-provider-gitlabci\" } resource \"gitlabci_runner_token\" \"this\" { registration_token = data.gitlab_project.this.runners_token locked = true tags = [ \"jinx\", \"powder\", \"cupcake\", ] } output \"token\" { sensitive = true value = gitlabci_runner_token.this.token } Note how, using both the gitlab and gitlabci providers, we can now register GitLab runners. The example shows us using a registration token obtained from a project data source, but terraform-provider-gitlabci doesn’t care if it’s a project, group, or even instance registration token. 
Additionally, while the gitlab provider does require API access, the gitlabci provider only requires a valid registration token. Enjoy! ","date":"2022-01-14","objectID":"/2022/01/terraform-provider-gitlabci/:2:0","tags":["terraform providers","gitlab runners","terraform-provider-gitlabci","GitLab","terraform","IaC"],"title":"terraform-provider-gitlabci: Register GitLab CI Runners","uri":"/2022/01/terraform-provider-gitlabci/#a-quick-example"},{"categories":["Random tidbits"],"collections":null,"content":"Lately I’ve found myself routinely attaching Docker-based containers to multiple networks. This has led to a couple… interesting surprises. It doesn’t seem to be well documented (AFAIK), so here’s what I’ve learned. This isn’t a big serious how-to post, just something that irked me. 😄 ","date":"2022-01-08","objectID":"/2022/01/docker-container-network-interface-ordering/:0:0","tags":["docker","docker-compose","container networking","container routing"],"title":"Docker container network interface ordering","uri":"/2022/01/docker-container-network-interface-ordering/#"},{"categories":["Random tidbits"],"collections":null,"content":"Tooling (a little background) Observed behavior As this is observed behavior rather than documented (AFAICT, at any rate), it may change without warning. Oh well. For the purposes of this article, let’s say we’re using the following tools: docker docker-compose ","date":"2022-01-08","objectID":"/2022/01/docker-container-network-interface-ordering/:1:0","tags":["docker","docker-compose","container networking","container routing"],"title":"Docker container network interface ordering","uri":"/2022/01/docker-container-network-interface-ordering/#tooling-a-little-background"},{"categories":["Random tidbits"],"collections":null,"content":"Network name determines ordering What’s the actual network name? When using Compose, there are really two network “names” for each network. 
The name specified in the Compose configuration; and The name Compose uses when creating the Docker network. In a compose configuration, we always use #1 to refer to a network; however, behind the scenes this will be different from the name of the network as known to Docker. Networks appear to be attached in alphabetical order of their Docker-level names, further segregated by internal status: non-internal networks are attached to the container first, then internal networks, each group still in alphabetical order. Let’s say we have a container with four networks attached: version: \"3.9\" services: container1: image: docker.io/containous/whoami networks: backend: {} frontend: {} apples: {} oranges: {} networks: backend: {} frontend: {} apples: internal: true oranges: name: lemons Let’s say this has a project name of services. When container1 is launched, it will see a number of interfaces: Interface Network (Compose) Network (Docker) eth0 oranges lemons eth1 backend services_backend eth2 frontend services_frontend eth3 apples services_apples Why? apples is internal, so it ends up at the end of the list. oranges has an actual name of lemons, so it ends up before the other networks with a services_ prefix. ","date":"2022-01-08","objectID":"/2022/01/docker-container-network-interface-ordering/:2:0","tags":["docker","docker-compose","container networking","container routing"],"title":"Docker container network interface ordering","uri":"/2022/01/docker-container-network-interface-ordering/#network-name-determines-ordering"},{"categories":["Random tidbits"],"collections":null,"content":"Default route is always through eth0 If you want a specific network to be the default route, you’re going to need to make sure it’s first – alphabetically. 
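One way to exploit this behavior is to give the network you want as the default route an explicit Docker-level name that sorts before everything else. The following compose fragment is a sketch under that assumption; the service and network names are illustrative:

```yaml
# docker-compose.yml fragment: make "wan" carry the default route by
# giving it a Docker-level name that sorts before every other network
services:
  app:
    image: docker.io/containous/whoami
    networks:
      backend: {}
      wan: {}

networks:
  backend: {}
  wan:
    name: 00-wan  # sorts before "<project>_backend", so it becomes eth0
```

Since the observed ordering uses the Docker-level name (not the Compose-level one), pinning an early-sorting name: is the lever you control.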
","date":"2022-01-08","objectID":"/2022/01/docker-container-network-interface-ordering/:3:0","tags":["docker","docker-compose","container networking","container routing"],"title":"Docker container network interface ordering","uri":"/2022/01/docker-container-network-interface-ordering/#default-route-is-always-through-eth0"},{"categories":["Random tidbits"],"collections":null,"content":"Routing and macvlan interfaces This is more of a corner case. If you’re attaching containers directly to a vlan/network using, say, macvlan, and that network ends up being your default route (app_vlan, anyone?), then all traffic for networks you’re not directly connected to will be routed over this vlan. This may or may not matter, but if you have the (reasonable) expectation that non-local traffic will route through and be masqueraded by the host, this can be surprising. ","date":"2022-01-08","objectID":"/2022/01/docker-container-network-interface-ordering/:3:1","tags":["docker","docker-compose","container networking","container routing"],"title":"Docker container network interface ordering","uri":"/2022/01/docker-container-network-interface-ordering/#routing-and-macvlan-interfaces"},{"categories":["GitLab"],"collections":null,"content":"The gitlab-runner agent is very flexible, with multiple executors to handle most situations. Similarly, AWS IAM allows one to use “instance profiles” with EC2 instances, obviating the need for static, long-lived credentials. In the situation where one is running gitlab-runner on an EC2 instance, this presents us with a couple interesting challenges – and opportunities. How does one prevent CI jobs from being able to obtain credentials against the instance’s profile role? How does one allow certain CI jobs to assume credentials through the metadata service without allowing all CI jobs to assume those credentials? 
","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:0:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#"},{"categories":["GitLab"],"collections":null,"content":"Criteria Relevant Configuration This isn’t going to be a comprehensive, step-by-step guide that can be followed without any external knowledge or resources. Rather, we’re going to focus on what one needs to know in order to implement this solution, however you’re currently provisioning CI agents. For our purposes, we want: The gitlab-runner agent to run on an EC2 instance, with one or more runners configured.1 All configured runners should be using the Docker executor. Jobs to run, by default, without access to the EC2 instance’s profile credentials. Certain jobs to assume a specific role transparently through the EC2 metadata service by virtue of what runner picks them up. Reasonable security: Jobs can’t just specify an arbitrary role to assume No hardcoded, static, or long-lived credentials ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:1:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#criteria"},{"categories":["GitLab"],"collections":null,"content":"Only short-term, transient credentials It’s worth emphasizing this: no hardcoded, static, or long-lived credentials. Sure, it’s easy to generate an IAM user and plunk its keys in (hopefully) protected environment variables, but then you have to worry about key rotation, audits, etc, in the way one doesn’t with transient credentials. 
","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:1:1","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#only-short-term-transient-credentials"},{"categories":["GitLab"],"collections":null,"content":"Executor implies methodology For our purposes, we’re going to solve this using the agent’s docker executor. Other executors will have different solutions (e.g. kubernetes has tools like kiam). However, for fun let’s cheat a bit and do a quick-and-fuzzy run-through of a couple of the other executors. docker+machine executor This is largely like the plain docker executor, except that as EC2 instances will be spun up to handle jobs, you can take a detour around anything complex by simply telling the agent to associate specific instance profiles with those new instances, e.g.: [[runners]] [runners.machine] MachineOptions = [ \"amazonec2-iam-instance-profile=everything-except-the-thing\", ..., ] The instance running the gitlab-runner agent does not need to be associated with the same profile – but the agent does need to be able to ec2:AssociateIamInstanceProfile and iam:PassRole the relevant resources. The downside is that you’ll have to have multiple runners configured if you want to be able to allow different jobs to assume different roles. kubernetes executor The kubernetes executor is going to be a bit trickier, and, as ever, TMTOWTDI. Depending on what you’re doing, any of the following might work for you: Launch nodes with the different profiles and use constraints to pick and choose which job pods end up running on them. Use a solution like kiam. 
… ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:2:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#executor-implies-methodology"},{"categories":["GitLab"],"collections":null,"content":"Brute force Ever a popular option, you can just brute-force block container (job) access to the EC2 metadata service by firewalling it off (note the filter table here; REJECT is not a valid target in the nat table), e.g.: iptables -I FORWARD \\ --destination 169.254.169.254 --protocol tcp --dport 80 \\ -i docker+ -j REJECT If you just want to block all access from jobs, this is a good way to do it. This approach is contraindicated if you want to be able to allow some containers to access the metadata service, or to allow them to retrieve credentials of some (semi) arbitrary role. ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:3:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#brute-force"},{"categories":["GitLab"],"collections":null,"content":"EC2 metadata proxy A more flexible solution can be found by using a metadata proxy. This sort of service should be a benevolent man-in-the-middle: able to access the actual EC2 metadata service for its own credentials, able to inspect containers making requests to determine what role (if any) they should be assuming, and able to assume those roles and pass tokens back to jobs without those jobs being any the wiser about it. For our purposes, we will use go-metadataproxy 2, which will handle: EC2 metadata requests made by processes in containers (e.g. 
CI jobs); Sourcing its own credentials from the actual EC2 metadata service; Inspecting containers for the IAM role that should be assumed (via the IAM_ROLE environment variable); Blocking direct access to the EC2 metadata service; and Assuming the correct role and providing STS tokens transparently to the contained process. The authentication flow will look something like this: sequenceDiagram autonumber participant mdp as metadataproxy participant docker participant job as CI job job-\u003e\u003emdp: client attempts to request credentials from EC2 mdp--\u003e\u003edocker: inspect job container docker--\u003e\u003emdp: \"IAM_ROLE\" is \"foobar\" mdp--\u003e\u003emdp: STS tokens for role \"foobar\" mdp-\u003e\u003ejob: STS tokens for assumed role \"foobar\" returned This also means that the instance 
profile role must be able to assume the individual roles we want to allow jobs to assume, and the trust policy of the individual roles must allow the instance profile role to assume them. In short: The instance profile’s IAM role policy should only permit certain roles to be assumed, either by ARN or some sensible condition (tagged in a certain way, etc). Roles in the account, in general, should not blindly trust any principal in the account to assume them.3 ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:4:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#ec2-metadata-proxy"},{"categories":["GitLab"],"collections":null,"content":"Configuring the CI agent correctly Take care when registering the runner We’re not going to cover it here, but take care when registering the runner. Under this approach, judiciously restricting access to the runner is a critical part of controlling what jobs may run with elevated IAM authority. Keep a couple things in mind: Registering runners is cheap; better to have more runners for more granular security than allow projects / pipelines with no need for access to use them. Runners can be registered at the project, group, or (unless you’re on gitlab.com) the instance level; register them as precisely as your requirements allow. Runner access can be further restricted and combined with project/group access by allowing them to run against protected refs only, and then restricting who can push/merge to protected branches (including protected tags) to trusted individuals. Always set IAM_ROLE in the runner configuration Anything that allows a pipeline author to control what role the proxy assumes is a security… concern. 
In this context, IAM_ROLE can be set on the container in one of several ways (in order of precedence): Through the runner configuration; By the pipeline author; or By the creator of the image. Unless you intend to allow the pipeline author to specify the role to assume, it is recommended that IAM_ROLE always be set in the runner configuration file, config.toml. If you don’t want any role to be assumed, great, set the variable to a blank value. go-metadataproxy discovers the role to assume by interrogating the docker daemon, inspecting the container of the process seeking credentials from the EC2 metadata service. It does this by looking for the value of the IAM_ROLE environment variable set on the container. IAM_ROLE must be set on the container itself. While whitelisting allowed images isn’t a terrible idea, the safest and most reliable way of controlling this as the administrator of the runner is to simply set the environment variable as part of the runner configuration. [[runners]] environment = [ \"IAM_ROLE=some-role-name-or-arn\", ..., ] This also means that we’re going to want a runner configuration per IAM role. (Not terribly surprising, I would hope.) ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:5:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#configuring-the-ci-agent-correctly"},{"categories":["GitLab"],"collections":null,"content":"Running the metadata proxy This is reasonably straight-forward, in two parts. There are a number of ways to run it, but as we’re doing this in a docker environment anyways, why not let it handle all the messy bits for us? $ git clone https://github.com/jippi/go-metadataproxy.git $ cd go-metadataproxy $ docker build -t local/go-metadataproxy:latest . 
$ docker run \\ --detach \\ --restart=always \\ --net=host \\ --name=metadataproxy \\ -v /var/run/docker.sock:/var/run/docker.sock \\ -e AWS_REGION=us-west-2 \\ -e ENABLE_PROMETHEUS=1 \\ local/go-metadataproxy:latest ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:6:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#running-the-metadata-proxy"},{"categories":["GitLab"],"collections":null,"content":"Using the metadata proxy To use the proxy, the containers must be able to reach it in the same way they would reach the actual EC2 metadata endpoint. We need to prevent requests to the metadata endpoint from reaching the actual endpoint, and instead have them transparently redirected to the proxy. (That is, we’re going to play Faythe4 here.) To “hijack” container requests to the EC2 metadata service, a little iptables magic is in order. This is well described in the project’s README. I’m including it here as well for completeness’ sake, and with one small change: instead of redirecting connections off of docker0, we redirect anything coming in off of docker+. (If you’re using the runner’s network per build functionality, you may need to tweak this.) As we’re exposing the metadataproxy on port 8000, you’ll want to make sure that port is firewalled off from the outside; either via iptables or a security group. # this makes an excellent addition to /etc/rc.local LOCAL_IPV4=$(curl http://169.254.169.254/latest/meta-data/local-ipv4) /sbin/iptables \\ --append PREROUTING \\ --destination 169.254.169.254 \\ --protocol tcp \\ --dport 80 \\ --in-interface docker+ \\ --jump DNAT \\ --table nat \\ --to-destination $LOCAL_IPV4:8000 \\ --wait /sbin/iptables \\ --wait \\ --insert INPUT 1 \\ --protocol tcp \\ --dport 80 \\ \\! 
\\ --in-interface docker0 \\ --jump DROP ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:0:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#using-the-metadata-proxy"},{"categories":["GitLab"],"collections":null,"content":"IAM role requirements ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:1:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#iam-role-requirements"},{"categories":["GitLab"],"collections":null,"content":"EC2 Instance Profile The role belonging to the instance profile associated with the instance our agent lives on should be able to assume the roles we want to allow CI jobs to assume. Specifically, its role policy must permit iam:GetRole and sts:AssumeRole on these roles. If you’re using S3 for shared runner caches, you may wish to permit this access through the instance profile role as well. (Implemented properly, the proxy will not permit CI jobs to use this role directly.) 
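The two-sided relationship described in the last few sections (the instance profile’s role may assume a constrained set of roles, and each of those roles trusts it in return) can be sketched as a pair of IAM policy documents. This is a sketch only: the account ID, role names, and wildcard are hypothetical placeholders, and the two documents are wrapped in one JSON object here purely for compactness; in practice they are separate.

```json
{
  "instance_profile_role_policy_sketch": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowAssumingOnlyCiRoles",
        "Effect": "Allow",
        "Action": ["iam:GetRole", "sts:AssumeRole"],
        "Resource": "arn:aws:iam::111122223333:role/ci-*"
      }
    ]
  },
  "job_role_trust_policy_sketch": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::111122223333:role/runner-instance-role" },
        "Action": "sts:AssumeRole"
      }
    ]
  }
}
```

Scoping the `Resource` to a naming convention (or a tag-based condition) keeps the instance role from assuming arbitrary roles in the account.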
","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:1:1","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#ec2-instance-profile"},{"categories":["GitLab"],"collections":null,"content":"Container / Job IAM roles for assumption As before, only containers with IAM_ROLE set at the container level will have tokens returned to them by the metadata proxy5, and then only if the proxy can successfully assume and convince STS to issue tokens for them. For this to happen, the container/job role’s trust policy must alllows the role of the instance profile associated with the EC2 instance to assume them. Specifically, the trust policy must permit iam:GetRole and sts:AssumeRole. ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:1:2","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#container--job-iam-roles-for-assumption"},{"categories":["GitLab"],"collections":null,"content":"Profit! Alright! You should now have a good idea as to how create and run CI jobs that: CANNOT request tokens directly from the EC2 metadata service CANNOT implicitly assume the EC2 instance profile’s role CANNOT leak static or long-lived credentials CAN transparently assume certain specific roles Enjoy :) The nomenclature gets a bit tricky here. gitlab-runner The agent responsible for running one or more runner configurations. A “runner” A single runner configuration being handled by the gitlab-runner agent. An entity that can run CI jobs, from the perspective of the CI server (e.g. gitlab.com proper).  
↩︎ Lyft also has an excellent tool at https://github.com/lyft/metadataproxy. I’ve used it with success, but go-metadataproxy provides at least rudimentary metrics for scraping. ↩︎ Not that anyone would ever create a trust policy like that, or that it would be one of the defaults offered by the AWS web console. Nope. That would never happen. ↩︎ https://en.wikipedia.org/wiki/Alice_and_Bob ↩︎ Unless, of course, the metadata proxy is configured with a default role – but we’re not going to do that here. ↩︎ ","date":"2020-10-05","objectID":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/:0:0","tags":["aws","iam","ec2","instance profiles","security","ci/cd","metadata proxy","go-metadataproxy","gitlab"],"title":"Using a Metadata Proxy to Limit AWS/IAM Access with GitLab CI","uri":"/2020/10/using-a-metadata-proxy-for-secure-iam-in-gitlab-ci/#profit"},{"categories":["GitLab"],"collections":null,"content":"GitLab makes a great deal of information available through Prometheus metrics. But not everything. sql_exporter Since writing this article, it has come to my attention that there are two more generic “sql_exporter” options. This is less important with GitLab, as you’re basically going to be running Pg, but these are generic SQL exporters, akin to what we’re turning postgres_exporter into below. https://github.com/free/sql_exporter https://github.com/justwatchcom/sql_exporter The other day I was looking for how many CI jobs and pipelines had been created, total. I figured that would be somewhere in the collection of existing metrics, but the closest I could find was a metric that gave the totals relative to the last time the exporter was restarted. I use a couple GitLab-specific exporters in this environment already, and thought about creating another one to handle this. As it turns out, this information isn’t exposed through the API, either. It looked like the only way to get this information was to query the database directly. 
","date":"2020-08-27","objectID":"/2020/08/additional-gitlab-metrics-with-pg-exporter/:0:0","tags":["gitlab","prometheus","pg_exporter","metrics"],"title":"Additional GitLab Metrics using `pg_exporter`","uri":"/2020/08/additional-gitlab-metrics-with-pg-exporter/#"},{"categories":["GitLab"],"collections":null,"content":"--extend.query-path While poking around, I noticed that postgres_exporter has an interesting flag, --extend.query-path. –extend.query-path Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format. I did as suggested and checked out the queries.yml. Turns out it’s surprisingly easy to create new metrics out of database queries, e.g.: ci_builds: query: \"SELECT MAX(id) as total from ci_builds\" metrics: - total: usage: \"COUNTER\" description: \"Total builds created\" ci_pipelines: query: \"SELECT MAX(id) as total from ci_pipelines\" metrics: - total: usage: \"COUNTER\" description: \"Total pipelines created\" The above causes two additional metrics to be generated by the exporter: ci_builds_total and ci_pipelines_total. Neat. To get the information I want, all I need to do is ask postgres_exporter nicely for it. ","date":"2020-08-27","objectID":"/2020/08/additional-gitlab-metrics-with-pg-exporter/:1:0","tags":["gitlab","prometheus","pg_exporter","metrics"],"title":"Additional GitLab Metrics using `pg_exporter`","uri":"/2020/08/additional-gitlab-metrics-with-pg-exporter/#--extendquery-path"},{"categories":["GitLab"],"collections":null,"content":"Configuring the exporter The GitLab Omnibus package sets up a number of exporters, including postgres_exporter with --extend.query-path already set. However, messing around with a configuration file the omnibus package is responsible for did not sound like fun, and neither did I want to cause the same Pg metrics to be exported twice. Examining the exporter’s flags again, I see two that may help. 
postgres_exporter documentation disable-default-metrics Use only metrics supplied from queries.yaml via --extend.query-path. disable-settings-metrics Use the flag if you don’t want to scrape pg_settings. Looking at those two flags, it appears that I should be able to disable the “standard” metrics and only run the ones I provide in the query file. ","date":"2020-08-27","objectID":"/2020/08/additional-gitlab-metrics-with-pg-exporter/:2:0","tags":["gitlab","prometheus","pg_exporter","metrics"],"title":"Additional GitLab Metrics using `pg_exporter`","uri":"/2020/08/additional-gitlab-metrics-with-pg-exporter/#configuring-the-exporter"},{"categories":["GitLab"],"collections":null,"content":"Running the exporter To me, it seems easiest to run our custom postgres_exporter in parallel with the GitLab supplied one. Running it in a container also allows us to ensure it keeps running (--restart) and runs as the correct user/group for access. You get to keep both parts This involves configuring a tool for direct access to your GitLab instance’s database. While the postgres_exporter is a widely-used and reliable tool, if you break your server you get to keep both parts. A small script to start the exporter, disable standard metrics, and run ours is below. Note that it also ensures it is run as the correct user/group for database access, and the Pg socket + configuration is bind-mounted inside the container for access. 
PG_USER=\"${PG_USER:-gitlab-psql}\" PG_UID=\"$(id -u $PG_USER)\" PG_GID=\"$(id -g $PG_USER)\" docker run -d \\ --name gitlab-custom-metrics \\ --restart unless-stopped \\ --user $PG_UID:$PG_GID \\ --publish 19187:9187 \\ -v /var/opt/gitlab/postgresql:/var/opt/gitlab/postgresql \\ -v `pwd`/queries.yml:/queries.yml:ro \\ -e DATA_SOURCE_NAME=\"user=$PG_USER host=/var/opt/gitlab/postgresql database=gitlabhq_production\" \\ wrouesnel/postgres_exporter \\ --disable-default-metrics \\ --disable-settings-metrics \\ --extend.query-path /queries.yml With that, the metrics exporter is exposed and ready to be scraped at localhost:19187/metrics. # HELP ci_builds_total Total builds created # TYPE ci_builds_total counter ci_builds_total{server=\"/var/opt/gitlab/postgresql:5432\"} 437695 # HELP ci_pipelines_total Total pipelines created # TYPE ci_pipelines_total counter ci_pipelines_total{server=\"/var/opt/gitlab/postgresql:5432\"} 67665 ","date":"2020-08-27","objectID":"/2020/08/additional-gitlab-metrics-with-pg-exporter/:3:0","tags":["gitlab","prometheus","pg_exporter","metrics"],"title":"Additional GitLab Metrics using `pg_exporter`","uri":"/2020/08/additional-gitlab-metrics-with-pg-exporter/#running-the-exporter"},{"categories":["GitLab"],"collections":null,"content":"Conclusion With this in place, we can collect and display or alert on these custom metrics. And, of course, everyone loves a good dashboard graph: This might seem like a lot for two small metrics, but compared to writing a custom exporter it’s nothing. If you’re like me, you’ll also discover your queries.yml will quickly grow with additional metrics definitions. 
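The scrape output above is standard Prometheus text exposition format. As a quick illustration (a hand-rolled sketch, not the official client library's parser), here is roughly how a sample line such as the ci_builds_total one breaks down into name, labels, and value:

```python
# Minimal sketch: split a Prometheus exposition-format sample line into
# (metric name, labels, value). The sample line is taken from the scrape
# output shown above. Splitting label pairs on ',' is a simplification
# that breaks on label values containing commas.
import re

SAMPLE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'  # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'           # optional {label="value",...}
    r'\s+(?P<value>\S+)$'                   # sample value
)

def parse_sample(line):
    """Parse one non-comment exposition line into (name, labels, value)."""
    m = SAMPLE_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a sample line: {line!r}")
    labels = {}
    if m.group('labels'):
        for pair in m.group('labels').split(','):
            key, val = pair.split('=', 1)
            labels[key] = val.strip('"')
    return m.group('name'), labels, float(m.group('value'))

name, labels, value = parse_sample(
    'ci_builds_total{server="/var/opt/gitlab/postgresql:5432"} 437695'
)
```

For real consumption you would point Prometheus (or a client library) at the endpoint instead; the sketch is only to show what the scraper sees.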
","date":"2020-08-27","objectID":"/2020/08/additional-gitlab-metrics-with-pg-exporter/:4:0","tags":["gitlab","prometheus","pg_exporter","metrics"],"title":"Additional GitLab Metrics using `pg_exporter`","uri":"/2020/08/additional-gitlab-metrics-with-pg-exporter/#conclusion"},{"categories":["AWS"],"collections":null,"content":"At $work, I’ve been using KMS to encrypt s3 bucket contents for some time now. It works rather well, but one thing that had been bugging me is that our IAM policies granted both read permissions on bucket objects and encrypt/decrypt on the relevant KMS key. That is, principals with the policies attached can use the key to encrypt/decrypt anything they otherwise have permission to access, not just objects in the bucket. It didn’t appear that there was a reasonable way to tighten this until I ran across references to the IAM kms:EncryptionContext: condition. Using kms:EncryptionContext: it is possible to conditionally restrict a policy based on the ARN of the resource being acted upon. That is to say, one can use this condition to only allow a KMS key to be used to decrypt objects in a certain s3 bucket. It took me a bit to figure this out as the docs didn’t quite spell out how to use an ARN as an encryption context (and, you know, encryption), so here’s a policy that shows it in action. The policy allows actions a certain KMS key to be used only in the context of the given bucket: These actions are only granted on objects in the specific s3 bucket and not denied explicitly to anything else. This is important, as we don’t want our efforts here to block a legitimate grant somewhere else. This can be extended to apply to multiple buckets by changing the condition test to ForAnyValue:StringLike. 
","date":"2020-08-23","objectID":"/2020/08/kms-key-context-and-s3/:0:0","tags":["kms","iam","s3","aws","encryption context"],"title":"KMS key context, IAM conditions, and s3","uri":"/2020/08/kms-key-context-and-s3/#"},{"categories":["Dev Tooling"],"collections":null,"content":"git has always(?) allowed for additional configuration files to be unconditionally included: [include] path = path/to/gitconfig Each individual git repo has always had the ability to maintain its own configuration at .git/config. However, sometimes on our systems we also have certain locations where we store multiple git projects, which may need different configuration from the global, but still common across that location. Since … well, for the last year or two at least, git has allowed for the conditional inclusion of configuration files. For example, I contribute to F/OSS projects using one email address, which lives in my global git config . However, for work projects, I want to use my work email everywhere — and accidentally pushing w/my personal email address is just embarrassing. All of my work projects live under a certain directory, so I can tell git that if a given repository’s gitdir lives under ~/work, it should also load an additional configuration file: [includeIf \"gitdir:~/work/\"] path = ~/work/gitconfig …and in there, I can set ; this is ~/work/gitconfig [user] email = cweyl@work.com In this way, I do not need to remember to change the email address of any repos I clone under ~/work to my work address. This is especially useful as I not infrequently find myself forking and submitting bugfix PR/MR’s to upstream, and if I do that for $work then I want to be using my work email address. See also the “Includes” and “Conditional Includes” sections of the git-config manpage. 
","date":"2019-03-21","objectID":"/2019/03/conditional-git-includes/:0:0","tags":["git","work","sanity"],"title":"Conditional git Configuration","uri":"/2019/03/conditional-git-includes/#"},{"categories":null,"collections":null,"content":"…um, kinda. I’m switching over from wordpress to Statocles, and porting my older posts over. Still, who can resist “first post!” ;) ","date":"2018-02-28","objectID":"/2018/02/first-post/:0:0","tags":["yak-shaving","statocles"],"title":"First Post","uri":"/2018/02/first-post/#"},{"categories":["Dev Tooling"],"collections":null,"content":"fzf is a fantastic utility, written by an author with a history of writing useful things. He’s also a vim user, and in addition to his other vim plugins he has created an “enhancement” plugin called fzf.vim. One of the neat things fzf.vim does is make it easy to create new commands for fuzzy searches. If you’re like me, you probably have some absurd number of project repositories you keep around and jump to, as necessary. Not everything is in the same directory (e.g. ~/work/), naturally, and with a laptop, desktop, and a couple other machines the less-frequently used repos may be where one least expects them to be — or not present at all. It’s not hugely annoying, just a sort of mild pain to have to spend several extra seconds doing a fuzzy search manually, rather than having fzf do it. But we do have fzf, and it’s not difficult at all to build out a new search, so there’s really no reason to keep on inflicting that pain. ","date":"2018-02-24","objectID":"/2018/02/fast-project-finding-with-fzf/:0:0","tags":["vim","fzf"],"title":"Fast Project Finding With fzf","uri":"/2018/02/fast-project-finding-with-fzf/#"},{"categories":["Dev Tooling"],"collections":null,"content":"Create a :Projects command Let’s create a new command in my vimrc, :Projects, that invokes fzf to search through all the different work directories I have. command! 
-nargs=0 Projects \\ call fzf#run(fzf#wrap('projects', { \\ 'source': 'find ~/work ~/.vim/plugged -name .git -maxdepth 3 -printf ''%h\\n''', \\ 'sink': function('rsrchboy#fzf#FindOrOpenTab'), \\ 'options': '-m --prompt \"Projects\u003e \"', \\}, \u003cbang\u003e0)) What does this do? Defines a new vim command, :Projects No surprises here. Invokes fzf#run() to run a fzf search fzf#run() handles the actual execution and presentation of fzf, as well as dispatching the results back to the sink. fzf#wrap() is neat. It allows a command to take advantage of fzf.vim’s option handling – or not, by simply omitting it. Uses find to look for repositories We know roughly where to look (~/work/, ~/.vim/plugged) and how deep to look. Just about everything I do is backed by git, so we can look for repositories and return the parent of the found .git back to fzf. Note that the find invocation deliberately omits a -type d argument. I do use git worktrees, meaning .git may well be a file (a “gitlink”). Calls out to rsrchboy#fzf#FindOrOpenTab() with the project selected The sink option tells fzf#run() what to do with the results. In our case we have provided fzf#run() with a callback function, but you can also use built-ins as sinks. ","date":"2018-02-24","objectID":"/2018/02/fast-project-finding-with-fzf/:1:0","tags":["vim","fzf"],"title":"Fast Project Finding With fzf","uri":"/2018/02/fast-project-finding-with-fzf/#create-a-projects-command"},{"categories":["Dev Tooling"],"collections":null,"content":"The callback “sink” function fun! rsrchboy#fzf#FindOrOpenTab(work_dir) abort \" loop over our tabs, looking for one with a t:git_workdir matching our \" a:workdir; if found, change tab; if not fire up fzf again to find a file \" to open in the new tab for l:tab in (gettabinfo()) if get(l:tab.variables, 'git_workdir', '') ==# a:work_dir exe 'tabn ' . 
l:tab.tabnr return endif endfor call fzf#run(fzf#wrap('other-repo-git-ls', { \\ 'source': 'git ls-files', \\ 'dir': a:work_dir, \\ 'options': '--prompt \"GitFiles in ' . a:work_dir . '\u003e \"', \\ 'sink': 'tabe ', \\}, 0)) return endfun In general, I use one tab per project (repository) in vim. For me, this is a nice balance of utility and sanity. It also allows me to do things like set t:git_dir and t:git_workdir to the git and workdir, respectively, of the repository associated with the tab. Our callback function first attempts to find an open tab with the workdir requested; if found, it just switches to it and returns. (It should probably admonish me to read the tab line before invoking :Projects.) If not found, the callback function invokes fzf#run() again. This time we use git ls-files to generate the source list for fzf, allowing us to pick a file to be opened by the given sink: tabe. ","date":"2018-02-24","objectID":"/2018/02/fast-project-finding-with-fzf/:2:0","tags":["vim","fzf"],"title":"Fast Project Finding With fzf","uri":"/2018/02/fast-project-finding-with-fzf/#the-callback-sink-function"},{"categories":["Dev Tooling"],"collections":null,"content":"Hey, that wasn’t too hard! Easier than writing this post, I’d say ;) Happy hacking! ","date":"2018-02-24","objectID":"/2018/02/fast-project-finding-with-fzf/:2:1","tags":["vim","fzf"],"title":"Fast Project Finding With fzf","uri":"/2018/02/fast-project-finding-with-fzf/#hey-that-wasnt-too-hard"},{"categories":["Making Things Work"],"collections":null,"content":"Google DNS is being hardcoded into a significant number of devices now. Which is nice, because it pretty much always works. …except when you’re trying to use Netflix and you have a tunnelbroker IPv6 tunnel. Ugh. So, this is a brief followup to Stupid OpenWRT tricks. Or maybe “Getting Netflix to work when your ISP doesn’t support IPv6 yet” is a better way to put it… Anyways. 
In the previous post I talked about how to use a local instance of bind to strip IPv6 addresses (AAAA records) from responses. (Again, I can’t take credit for that, though I like the way the person who came up with the idea thinks!) That solution works fabulously. …unless your device decides it’s going to ignore your DNS servers, and go hit up 8.8.8.8 or 8.8.4.4 (or 2001:4860:4860::8888 or 2001:4860:4860::8844) directly. That’s going to fail. Ugh. DNAT to the rescue! (Some NAT, like some cholesterol, moderate alcohol intake, and not staying up all night too often, is actually anywhere from incredibly useful to downright fun. Particularly when staying up all night and the moderate alcohol intake are combined with writing ip6tables DNAT rules.) The problem is that we have clients bypassing our DNS in favor of servers out on the public Internet. Our solution? Find anything that’s headed in through our LAN interface (typically br-lan) and is headed to 53/UDP, and DNAT it so that it’s headed to our router’s LAN IP address. We don’t need to try to capture or reroute DNS traffic to 8.8.8.8 etc, because we don’t really want any of our clients doing direct DNS queries. (At least, I can’t think of a good reason.) OpenWRT makes this pretty easy. While the Network-\u003eFirewall-\u003eTraffic Rules page doesn’t support DNAT, it’s easy enough to craft a custom rule and plug it in on the not very deceptively named “Custom Rules” page. OpenWRT also has a rather nice setup of iptables chains, including ones for user-defined rules, so you can add rules without their being trashed every time the firewall is reloaded. For our purposes, this will do the trick: iptables -t nat -A prerouting_lan_rule \\ -p udp --dport 53 -j DNAT --to 192.168.1.1 \\ -m comment --comment 'dns capture and redirect DNAT' Note we’re using the user rule prerouting_lan_rule; this rule already only sees packets coming in on br-lan, so we can omit the -i br-lan we’d otherwise need from our rule. 
Once you’ve saved this, you either need to reboot or just ssh into your router and run the command directly, and you should be able to watch Netflix again. You can run host netflix.com 8.8.8.8 from a client box to see that no AAAA records are returned. While we’re here, we should probably do this for IPv6 as well, just in case. First you’re going to need to install a couple additional packages: kmod-ipt-nat6, and if your LAN interface is a bridge you’ll also need kmod-ebtables-ipv6. Then this rule should do it: ip6tables -t nat -A PREROUTING \\ -i br-lan -p udp --dport 53 -j DNAT \\ --to 2001:470:XXXX:XXXX::1 \\ -m comment --comment 'dns capture and redirect DNAT' Note OpenWRT does not set up any chains in the IPv6 NAT table, because you should never use NAT in IPv6. Um, aside from this, naturally. Enjoy! ","date":"2017-02-13","objectID":"/2017/02/no-use-my-dns-really/:0:0","tags":["dnat","dns","iptables","ipv4","ipv6","nat","netflix","openwrt","tunnelbroker"],"title":"No, use *my* DNS.  (aka Netflix vs tunnelbroker.net)","uri":"/2017/02/no-use-my-dns-really/#"},{"categories":null,"collections":null,"content":"So, if you’re like me you find yourself wondering why your broadband provider has a /32 IPv6 prefix assigned, and yet chooses not to use it, forcing one to either be IPv4-only (how 20’th century) or use an IPv6-over-IPv4 tunnel solution. Fortunately there is a simple and free solution out there, courtesy of Hurricane Electric’s rather fabulous tunnelbroker service. Obtaining an IPv6 prefix and setting up the tunnel is covered, extensively, so I won’t go into it. It’s also rather easy to set the tunnel up on an OpenWRT based router, like mine. The default setup is rather nice, but there are some changes you can make to your router configuration that will make it even nicer. 
","date":"2017-01-18","objectID":"/2017/01/stupid-openwrt-ipv6-tricks/:0:0","tags":["bind","dns","dnsmasq","ipv6","netflix","openwrt","stupid-tricks","synthetic-hostnames"],"title":"Stupid OpenWRT ipv6 tricks","uri":"/2017/01/stupid-openwrt-ipv6-tricks/#"},{"categories":null,"collections":null,"content":"Remove the “ULA Prefix” OpenWRT creates, by default, a ULA prefix – a deprecated “site-local” prefix. While these are perfectly valid, I’ve found that non-globally routable IPv6 addresses tends to confuse the heck out of Android-based phones, resulting in certain operations taking forever while various network operations time out, and are then retried with globally routable addresses. They’re also pointless, as we don’t do IPv6 NAT (don’t even think it), so just remove it. Your phone will thank you. ","date":"2017-01-18","objectID":"/2017/01/stupid-openwrt-ipv6-tricks/:1:0","tags":["bind","dns","dnsmasq","ipv6","netflix","openwrt","stupid-tricks","synthetic-hostnames"],"title":"Stupid OpenWRT ipv6 tricks","uri":"/2017/01/stupid-openwrt-ipv6-tricks/#remove-the-ula-prefix"},{"categories":null,"collections":null,"content":"A note about firewalls It’s worth repeating: we don’t do IPv6 NAT. Assuming you’ve removed the ULA prefix, every non-link-local IPv6 address assigned will be globally routable, meaning, among other things, that you can’t just rely on NAT to be your firewall, you’ll actually have to use your router as a firewall as well. This is also well documented, and left as an exercise for the reader. …one I rather suspect you’ve already completed, as, well, you’re using OpenWRT, aren’t you? ","date":"2017-01-18","objectID":"/2017/01/stupid-openwrt-ipv6-tricks/:1:1","tags":["bind","dns","dnsmasq","ipv6","netflix","openwrt","stupid-tricks","synthetic-hostnames"],"title":"Stupid OpenWRT ipv6 tricks","uri":"/2017/01/stupid-openwrt-ipv6-tricks/#a-note-about-firewalls"},{"categories":null,"collections":null,"content":"More than one network? Get a /48! 
By default, HE will give you a /64 routed prefix: this is the pool of addresses your LAN-connected devices will draw from. If you ask – that is, hit the “assign /48” button on your tunnel’s configuration page – HE will also give you a /48. Why would you do this? Well, while you can subdivide your /64 and route it however you want, most IPv6 tech presumes the smallest network it will ever encounter is a /64. If you choose to, say, make your wired and wireless networks distinct and route rather than bridge between the two, the canonical approach is to use one /64 for the wired network, and a second, different /64 for your wireless. (The same logic applies if you wish to also delegate prefixes to hosts on your network – say a /64 to some box you have running a bunch of VMs or Docker containers on.) But how to set this up easily? Remember that “ULA prefix” option, above? Just put the /48 prefix HE assigned you in there, and everything will Just Work. Delegating specific /64s to interfaces can be done with “hints” in the interface configuration, and each internal interface will receive a /64 from your /48 automatically. Yes, this means at least one of your internal networks will have two /64 prefixes from which addresses can be assigned/chosen. Don’t sweat it: your device should pick up an address from each /64, and things will Just Work. ","date":"2017-01-18","objectID":"/2017/01/stupid-openwrt-ipv6-tricks/:2:0","tags":["bind","dns","dnsmasq","ipv6","netflix","openwrt","stupid-tricks","synthetic-hostnames"],"title":"Stupid OpenWRT ipv6 tricks","uri":"/2017/01/stupid-openwrt-ipv6-tricks/#more-than-one-network-get-a-48"},{"categories":null,"collections":null,"content":"Hostnames! OpenWRT uses dnsmasq to provide DNS, and because of this we can do some neat things. 
If you edit your /etc/dnsmasq.conf appropriately, you can get: ","date":"2017-01-18","objectID":"/2017/01/stupid-openwrt-ipv6-tricks/:3:0","tags":["bind","dns","dnsmasq","ipv6","netflix","openwrt","stupid-tricks","synthetic-hostnames"],"title":"Stupid OpenWRT ipv6 tricks","uri":"/2017/01/stupid-openwrt-ipv6-tricks/#hostnames"},{"categories":null,"collections":null,"content":"Hostnames for the ips assigned to our interfaces, automatically # hostnames for our interface ips! interface-name=wan.router,eth0 interface-name=wan.router,6in4-henet interface-name=lan.router,br-lan …yielding: $ host lan.router lan.router has address 192.168.1.1 lan.router has IPv6 address 2001:470:XXXX:1::1 lan.router has IPv6 address 2001:470:1f11:XXXX::1 ","date":"2017-01-18","objectID":"/2017/01/stupid-openwrt-ipv6-tricks/:3:1","tags":["bind","dns","dnsmasq","ipv6","netflix","openwrt","stupid-tricks","synthetic-hostnames"],"title":"Stupid OpenWRT ipv6 tricks","uri":"/2017/01/stupid-openwrt-ipv6-tricks/#hostnames-for-the-ips-assigned-to-our-interfaces-automatically"},{"categories":null,"collections":null,"content":"“Synthetic” hostnames That is, a deterministic hostname for every ip on a given subnet that dnsmasq doesn’t already know a hostname for. synth-domain=ip.lan,192.168.1.0/24 synth-domain=ip.lan,2001:470:1f11:XXXX::/64 # Apparently /48 breaks dnsmasq more than a bit #synth-domain=ip.lan,2001:470:XXXX::/48 synth-domain=ip.lan,2001:470:XXXX:1::/64 …yielding: $ host 2001:470:1f11:XXX::2 2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.X.X.X.X.1.1.f.1.0.7.4.0.1.0.0.2.ip6.arpa domain name pointer 2001-470-1f11-XXXX--2.ip.lan. Note this “synthetic hostname” will only be returned if dnsmasq lacks a better name, e.g.: $ host 2001:470:1f11:XXX::1 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.X.X.X.X.1.1.f.1.0.7.4.0.1.0.0.2.ip6.arpa domain name pointer lan.router. 
","date":"2017-01-18","objectID":"/2017/01/stupid-openwrt-ipv6-tricks/:3:2","tags":["bind","dns","dnsmasq","ipv6","netflix","openwrt","stupid-tricks","synthetic-hostnames"],"title":"Stupid OpenWRT ipv6 tricks","uri":"/2017/01/stupid-openwrt-ipv6-tricks/#synthetic-hostnames"},{"categories":null,"collections":null,"content":"AAAA records While OpenWRT does not use dnsmasq for router advertisements, we can still use its rather nifty “match info from DHCPv4 requests against the DID/MAC the device would use for SLAAC” functionality to enable it to return both A (IPv4) and AAAA (IPv6) records when asked for an internal hostname: $ host mfc.lan mfc.lan has address 192.168.1.78 mfc.lan has IPv6 address 2001:470:... $ ping6 mfc.lan PING mfc.lan(mfc.lan) 56 data bytes 64 bytes from mfc.lan: icmp_seq=1 ttl=64 time=0.490 ms 64 bytes from mfc.lan: icmp_seq=2 ttl=64 time=46.5 ms ... Enable with the somewhat cryptic: # serve AAAA records based off DID/MAC and DHCPv4 requests dhcp-range=::,constructor:br-lan,ra-names …or some permutation thereof, if you’ve altered the topology of your internal network. 
Basically, the solution is to build and install a bind package that supports the strip-aaaa option, then have dnsmasq delegate any lookups for *.netflix.com to the bind server. Clean, simple, and easily extensible to any other service that may choose to do the same thing in the future. With this hack in place, without impacting any other domain your netflix.com lookups will go from this: $ host netflix.com 8.8.8.8 Using domain server: Name: 8.8.8.8 Address: 8.8.8.8#53 Aliases: netflix.com has address 52.45.218.113 netflix.com has address 52.23.189.13 netflix.com has address 52.207.111.144 netflix.com has address 52.206.68.176 netflix.com has address 52.205.89.26 netflix.com has address 52.54.2.184 netflix.com has address 52.7.207.34 netflix.com has address 52.71.64.222 netflix.com has IPv6 address 2406:da00:ff00::34cf:6f90 netflix.com has IPv6 address 2406:da00:ff00::34ca:21dd netflix.com has IPv6 address 2406:da00:ff00::34c8:ef2b netflix.com has IPv6 address 2406:da00:ff00::36a4:da76 netflix.com has IPv6 address 2406:da00:ff00::34ce:44b0 netflix.com has IPv6 address 2406:da00:ff00::3655:55f6 netflix.com has IPv6 address 2406:da00:ff00::34cb:53f8 netflix.com has IPv6 address 2406:da00:ff00::34cd:591a …to this: $ host netflix.com netflix.com has address 52.54.2.184 netflix.com has address 52.71.64.222 netflix.com has address 52.205.89.26 netflix.com has address 52.207.111.144 netflix.com has address 52.206.68.176 netflix.com has address 52.7.207.34 netflix.com has address 52.45.218.113 netflix.com has address 52.23.189.13 These are little things, yes (hence the “stupid … tricks” appellation), but every little bit helps when trying to figure out a problem. …or, you know, appease Netflix :) Enjoy! 
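For the dnsmasq half of this, delegating a domain to another resolver is a single directive; a sketch, assuming (my assumption, not the post’s) that the patched bind listens on 127.0.0.1 port 5353:

```
# /etc/dnsmasq.conf: send netflix.com (and all subdomains) to the local
# AAAA-stripping bind instance rather than the normal upstream resolvers
server=/netflix.com/127.0.0.1#5353
```

Any other geo-fussy service can be handled the same way with an additional server=/domain/ line.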
","date":"2017-01-18","objectID":"/2017/01/stupid-openwrt-ipv6-tricks/:4:0","tags":["bind","dns","dnsmasq","ipv6","netflix","openwrt","stupid-tricks","synthetic-hostnames"],"title":"Stupid OpenWRT ipv6 tricks","uri":"/2017/01/stupid-openwrt-ipv6-tricks/#strip-aaaa-records-for-netflix"},{"categories":["Perl"],"collections":null,"content":"One of the most dangerous books I’ve ever even partially read is MJD’s Higher Order Perl. In particular, its description of subroutine currying – that is, building more specific functions out of more general-purpose ones – is a pattern I find incredibly useful. The other day I found myself writing a number of routines that were surprisingly similar… kinda. They all implemented a common pattern, but across routines that were rather… different. I found myself wistfully longing for the familiar pattern of currying, and then realized – I’m working in PERL, DAMNIT. sub validate_thing { _validate_subtest_wrapper(\\\u0026_validate_thing_guts, @_) } sub validate_class { _validate_subtest_wrapper(\\\u0026_validate_class_guts, @_) } sub validate_role { _validate_subtest_wrapper(\\\u0026_validate_role_guts, @_) } sub _validate_subtest_wrapper { my ($func, $thing, %args) = @_; # note incrementing by 2 because of our upper curried function local $Test::Builder::Level = $Test::Builder::Level + 2; # run tests w/o a subtest wrapper... return $func-\u003e($thing =\u003e %args) unless $args{-subtest}; # ...or with one. my $tb = Test::Builder-\u003enew; return $tb-\u003esubtest(delete $args{-subtest} =\u003e sub { $func-\u003e($thing =\u003e %args) }); } This is part of recent work of mine, extending Test::Moose::More to use subtests where they make sense. Here I was able to curry one function – _validate_subtest_wrapper() – by passing it a reference to another function, which it then invokes. Excellent. Life is easier, as it should be. 
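As an aside, the currying idea itself needs nothing beyond Perl’s closures; a minimal, self-contained illustration (mine, not from Test::Moose::More – all names are made up):

```
use strict;
use warnings;

# a generic two-argument operation...
sub add { my ($x, $y) = @_; return $x + $y }

# ...curried into a more specific one-argument function
sub curry_first {
    my ($func, $first) = @_;
    return sub { return $func-\u003e($first, @_) };
}

my $add5 = curry_first(\\\u0026add, 5);
print $add5-\u003e(10), \"\\n\";    # prints 15
```

The wrapper in the post is the same trick, just with the curried coderef doing test work instead of arithmetic.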
","date":"2015-07-29","objectID":"/2015/07/currying-patterns/:0:0","tags":["perl","currying","higher-order-perl"],"title":"Currying Patterns","uri":"/2015/07/currying-patterns/#"},{"categories":["Perl"],"collections":null,"content":"I just released MooseX::AttributeShortcuts 0.028; it incorporates Moo-style type constraints. …largely because I needed to relax, and wrote MooseX::Meta::TypeConstraint::Mooish :) That means you can now pass a coderef to has() in isa that, like with Moo, dies on validation failure and lives on validation success: # easiest is via AttributeShortcuts use MooseX::AttributeShortcuts 0.028; has foo =\u003e ( is =\u003e 'rw', # $_[0] == the value to be validated isa =\u003e sub { die unless $_[0] == 5 }, ); ","date":"2015-04-08","objectID":"/2015/04/mxas-and-moo-constraints/:0:0","tags":["perl","moo","moose","mxas"],"title":"MX::AttributeShortcuts -- now with Moo-style type constraints","uri":"/2015/04/mxas-and-moo-constraints/#"},{"categories":null,"collections":null,"content":"This requires a little magic, unfortunately; either the driver, system, hardware itself, or some combination thereof do not operate well with autosuspend enabled. Disabling autosuspend for this device does appear to resolve dropped / corrupted / weird bluetooth issues. Based on my googling, I do not believe this to be Thinkpad-specific, but rather something the Intel 7260AC firmware isn’t handling properly at the moment. FWIW, I’m running Ubuntu 13.10 (saucy) on the thinkpad in question, and 12.04LTS (precise) on my desktop, with the same card sold by Intel in a PCI-e mount. Based on one posting in particular, the following solution presents itself: This isn’t ideal – as it should Just Work – but it works, and is certainly less drastic than turning off USB autosuspend globally. In retrospect, having the bluetooth device drop out shortly after boot/resume, but always be available after resuming, was a big clue. 
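The post’s actual snippet isn’t preserved in this extract, but the usual shape of such a fix is a udev rule pinning the device’s power/control attribute to “on” for just this device; a sketch (filename hypothetical):

```
# /etc/udev/rules.d/50-intel-7260-bt.rules (hypothetical name)
# disable USB autosuspend for the 8087:07dc bluetooth device only
ACTION==\"add\", SUBSYSTEM==\"usb\", ATTR{idVendor}==\"8087\", ATTR{idProduct}==\"07dc\", ATTR{power/control}=\"on\"
```

Scoping the rule by vendor/product id is what keeps this less drastic than disabling autosuspend globally.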
","date":"2014-03-24","objectID":"/2014/03/intel-7260ac-bluetooth-808707dc-ubuntu-and-the-thinkpad-t440p/:0:0","tags":["thinkpad","T440p","bluetooth","precise","saucy","8087:07dc","hardware","Intel 7260AC"],"title":"Intel 7260AC Bluetooth [8087:07dc], Ubuntu, and the Thinkpad T440p","uri":"/2014/03/intel-7260ac-bluetooth-808707dc-ubuntu-and-the-thinkpad-t440p/#"},{"categories":null,"collections":null,"content":"Never, ever update on a Friday. GitHub: Automattic/jetpack #284: use content, attribute 0 -or- ‘id’ le sigh ","date":"2014-02-28","objectID":"/2014/02/never-upgrade-on-a-friday/:0:0","tags":["things-never-to-do"],"title":"Never, ever update on a Friday.","uri":"/2014/02/never-upgrade-on-a-friday/#"},{"categories":null,"collections":null,"content":"Just a little snippet. This should be pretty obvious to those familiar with how sudo functions, but it’s easy to configure sudo so that running docker commands with it doesn’t prompt for your password. Note that the normal warnings and red flags apply here. If you install the above as /etc/sudoers.d/docker, then the user rsrchboy (line 1) and any user in the docker group (line 2) will not be asked for a password when running “sudo docker …”. Again, the normal warnings and red flags apply here. ","date":"2014-02-20","objectID":"/2014/02/docker-without-password-prompting/:0:0","tags":["docker","sudo"],"title":"docker without password prompting","uri":"/2014/02/docker-without-password-prompting/#"},{"categories":["Dev Tooling"],"collections":null,"content":"Sometimes it’s necessary – for one’s sanity, if nothing else – to establish a set of generally non-controversial, sane, system-wide git configuration defaults. This is largely helpful when multiple people are using the same system who may not have a standard ~/.gitconfig they carry around with them. To do this, we can leverage the little-used system-wide git config file at /etc/gitconfig. 
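The config file itself isn’t reproduced in this extract; as a purely illustrative sketch of the sort of settings meant (these are common, documented git options – not necessarily the author’s exact file):

```
# /etc/gitconfig – illustrative sketch, not the original file
[push]
        default = simple        # push only the current branch, to its upstream
[merge]
        conflictstyle = diff3   # include the common ancestor in conflict markers
[format]
        pretty = fuller         # show committer as well as author information
[include]
        path = /etc/gitconfig.local   # hypothetical per-system override file
```

Anything a user dislikes here can simply be overridden in their own ~/.gitconfig.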
Remember that by default git looks at three files to determine its configuration (in ascending order of priority): /etc/gitconfig (system), ~/.gitconfig (user aka global), and .git/config (configuration for the current repository). This allows us to set defaults in the system configuration file without interfering with people who prefer different settings: their global config at ~/.gitconfig will win. This config sets a couple safer defaults for pushing, makes git merge/diff/rebase a little more DWIM, causes the committer information, as well as the author’s, to be displayed by default, and allows for an easy way to override the system config on a per-system basis. (In case, say, you’re using puppet or the like to distribute this configuration across multiple hosts.) And… As with all things “generally non-controversial”, remember that these are the sorts of things likely to touch off religious wars. The goal here is for a sane set of defaults for all users, not The One True Way To Do GIT. That’s what user global configs are for :) ","date":"2014-02-20","objectID":"/2014/02/useful-git-defaults-systemwide/:0:0","tags":["git","sanity"],"title":"Useful git defaults -- systemwide","uri":"/2014/02/useful-git-defaults-systemwide/#"},{"categories":["Dev Tooling"],"collections":null,"content":"I use screen with vim. One of the things I like about vim is that, much like unix itself, I’m always discovering useful new features, even after years of use. Recently, I’ve been using tabs in vim to complement window regions. I’ve found it pretty useful, as there are times I’d want to keep certain tasks on one tab but not another. e.g. different source files open in windows on one tab; a test file + vim-pipe buffer showing the rest. While I’m not using screen to change between multiple vim sessions in the same project anymore, I still use it pretty much everywhere: it’s there, and sometimes a wireless network isn’t. 
(Or you’re working in one place and need to pack up and move to another.) screen preserves your working sessions, so you don’t have to get everything “just right” again. Unfortunately, screen seems to mangle the C-PgUp and C-PgDn commands vim gives as default shortcuts to switch between tabs. Setting aside that these key sequences are also used at the windowing level to switch tabs, it turns out that screen was mangling them on the way through to vim, so vim didn’t see C-PgUp, for instance; it saw some other sequence. Adding this to your .vimrc will cause vim to recognize the sequence it sees when running under screen: ","date":"2012-11-18","objectID":"/2012/11/screen-vim-tabs-and-c-pgupc-pgdn-mappings/:0:0","tags":["screen","vim"],"title":"screen, vim, tabs, and C-PgUp/C-PgDn mappings","uri":"/2012/11/screen-vim-tabs-and-c-pgupc-pgdn-mappings/#"},{"categories":["Perl"],"collections":null,"content":"I’m in lovely Madison, WI right now, and will be headed over to my first YAPC::NA tomorrow. The first couple days are the hackathon, at which I think I’m going to work on a Dist::Zilla::Role::Stash to hold repository-related information. There are a bunch of Git-related plugins for Dist::Zilla, a couple that I maintain, and a lot of code is duplicated between them; a stash should resolve that. I hope to meet you all there! :) ","date":"2012-06-10","objectID":"/2012/06/my-first-yapcna/:0:0","tags":["yapc","perl"],"title":"My first YAPC::NA!","uri":"/2012/06/my-first-yapcna/#"},{"categories":null,"collections":null,"content":"I’ve seen a couple references lately to using lazy attributes as a form of caching. This is a great approach to thinking about lazy attributes, as they share a number of characteristics with traditional caching: you only have to build a (potentially) expensive value once, and then only when you actually need it. But what about when that lazily generated value is too old to trust? 
A lazy attribute isn’t going to help you much then, as your instance is quite happy to keep on returning the same value forever once it has been built, unless you clear or change it manually. This is no good when, say, you’ve run a database query and you can really only expect your painfully contorted query to get the twitter ids of all the left-handed Justin Bieber fans north of the Mason-Dixon line who own hypoallergenic cats to be valid for, oh, say 55 minutes or so. You could add an attribute to store the age of the value generated for the lazy attribute and check it either manually (boring!), or by wrapping the reader method (less boring, but still, unsightly). Ok, method modifiers can be fun, but still… that’s a lot of annoying little code – and isn’t Moose there to help reduce that sort of code in our lives? What we’re running into here is that while lazy attributes implement one part of a cache (generate once, return many), they don’t have any internal logic to determine when a value is no longer good. They don’t even have any concept of that, just “someone needed my value, so we’re going to get it and hang on to it until told otherwise”. This is just the sort of behaviour an attribute trait can alter. The MooseX::AutoDestruct Moose attribute trait allows us to specify an expiration date for our stored values. We can specify a time-to-live option at attribute creation, and then every time a value is set, the set time is stored. Every time the value is accessed, the attribute checks to make sure the value isn’t older than the set time to live, and if it is, clears the value. This allows the lazy value generation to kick in once more, without requiring any extra effort on the part of the user – just as one would expect. 
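Going by the module’s documented interface, usage looks roughly like this (attribute and builder names invented for illustration):

```
use Moose;
use MooseX::AutoDestruct;

has twitter_ids =\u003e (
    traits  =\u003e ['AutoDestruct'],
    is      =\u003e 'ro',
    lazy    =\u003e 1,
    builder =\u003e '_build_twitter_ids',   # the painfully contorted query lives here
    ttl     =\u003e 55 * 60,                # seconds; value rebuilt on first access after expiry
);
```

The builder fires on first read, and again on the first read after each expiry – the caller never sees a stale value and never has to clear anything.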
","date":"2012-05-25","objectID":"/2012/05/cheap-caching-with-autodestruct/:0:0","tags":["moose","perl","things-that-go-boom","caching"],"title":"Cheap Caching with AutoDestruct","uri":"/2012/05/cheap-caching-with-autodestruct/#"},{"categories":null,"collections":null,"content":"One thing that had been particularly annoying me lately was the ridiculously long package names called for in a certain project of mine. The package names themselves weren’t the problem; it was writing them out. With filenames like lib/App/MediaTracker/TemplateFor/Browser/PrivatePath/libraries/things/document.pm, the corresponding package names become very long and very painful very quickly. Fortunately, I use vim as my editor of choice. Along with the fantastic snipmate vim plugin, it is possible to create a snippet that runs a little vim code as part of it: The snippet should be stored in ~/.vim/snippets/perl.snippets, unless you have things arranged otherwise. Now, assuming that my filename and package name are the typical parallels, simply typing pkg\u003cTab\u003e will create a proper package line for me, with an absolute minimum of pain. :) ","date":"2012-05-20","objectID":"/2012/05/vim-snippet-to-generate-package-name-from-the-filename/:0:0","tags":["vim","snippets","perl"],"title":"Vim Snippet to Generate Package Name From The Filename","uri":"/2012/05/vim-snippet-to-generate-package-name-from-the-filename/#"},{"categories":null,"collections":null,"content":"Lazy attributes are wonderful. They allow us to postpone generating attribute values for any number of reasons: it’s expensive and we don’t want to do it unless we need it, it should be initialized after instantiation because it depends on other attributes, etc. And it does this without our having to worry about the value being around: if we need it, it’ll be generated on the fly without any extra effort on our part. As an example, let’s say we have a simple config file that defines key/value pairs. 
We need to find out the author’s name, which has the key ‘author’ in the config file. We could create a lazy attribute as such: Simple, yes? Now, whenever you need the author’s name, you have it. So, let’s now say that a couple days later, you realize that you also need to get the author’s email from the config file (key ’email’): Voila! Except… Hm. We’re now loading and parsing the config file twice. Though it’s likely to be very low cost to do that (assuming a local, simple config file on the filesystem), it still feels wrong. Besides, what happens when you run into a situation like this and the base set of data (e.g. what load_config() is returning) is expensive to generate? There are a couple things we could do here: we could create a config attribute, make it lazy and load the config, then change our attributes to pull their value out of the config attribute; we could create a new class to handle the config, and set up a config attribute that delegates to it; etc, etc. That’s a lot of work, however, and work that doesn’t need to be done if we leverage other parts of Moose correctly. One of the easy, often overlooked ways to do this is to use the tools Moose itself gives us: native attribute traits and accessor currying. In the above, we see one attribute being created. Note the is =\u003e 'bare'; this keeps the attribute from generating the reader, writer or accessor methods. We’re applying the “Hash” native trait, and using the delegation it provides to create custom accessors that pull from the hash without needing the end user to provide keys to a generic lookup. Note that either of these approaches gives us the same interface to someone using our class: This isn’t always appropriate, but if you ever find yourself with multiple attributes whose values can all be generated through one builder, then this may be a good starting approach. 
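A sketch of that shape – one bare, lazy “Hash”-trait attribute whose curried get delegations become the public accessors (names taken from the running example; this is not the post’s verbatim code):

```
has _config =\u003e (
    traits  =\u003e ['Hash'],
    is      =\u003e 'bare',              # no reader/writer/accessor generated
    isa     =\u003e 'HashRef[Str]',
    lazy    =\u003e 1,
    builder =\u003e '_build_config',     # loads and parses the config file once
    handles =\u003e {
        # curried 'get' delegations: the key is baked in
        author       =\u003e [ get =\u003e 'author' ],
        author_email =\u003e [ get =\u003e 'email'  ],
    },
);
```

Callers just see author() and author_email(); the config is loaded at most once, on first use.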
It’s certainly the laziest one :) ","date":"2012-05-08","objectID":"/2012/05/simulating-multiple-lazy-attributes/:0:0","tags":["moose","attributes","mxas","lazy","traits"],"title":"Simulating multiple, lazy attributes","uri":"/2012/05/simulating-multiple-lazy-attributes/#"},{"categories":["Dev Tooling"],"collections":null,"content":"A while back, I wrote about “useful git defaults”. This is a tricky subject, as a sufficiently aged ~/.gitconfig is much like a vimrc or Chief O’Brien’s rank: a very religious topic. Nonetheless, it’s one of those things where a few small adjustments to the system-wide git configuration (a la /etc/gitconfig) can make things much, much easier — particularly in the case where there are multiple systems to manage, and multiple people using them. I’m pretty happy with those defaults, but a lot has changed since 2014. ","date":"0001-01-01","objectID":"/1/01/useful-git-defaults-revisited/:0:0","tags":["git","sanity"],"title":"Useful systemwide git defaults -- revisited","uri":"/1/01/useful-git-defaults-revisited/#"},{"categories":["Dev Tooling"],"collections":null,"content":"git config file locations The configuration paths available have also changed, but we can still leverage the little-used system-wide git config file at /etc/gitconfig. Remember that by default git looks at four files to determine its configuration (in ascending order of priority): /etc/gitconfig (system), ~/.config/git/config, ~/.gitconfig (user aka global), and .git/config (configuration for the current repository). (Technically, #2 is $XDG_CONFIG_HOME/git/config.) This allows us to set defaults in the system configuration file without interfering with people who prefer different settings: their global config at ~/.gitconfig will win. 
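When you need to know which of those files a given setting actually came from, git can annotate each value with its origin:

```shell
# list every effective git setting along with the file it was read from
git config --list --show-origin
```

(--show-origin requires git 2.8 or later.)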
","date":"0001-01-01","objectID":"/1/01/useful-git-defaults-revisited/:0:0","tags":["git","sanity"],"title":"Useful systemwide git defaults -- revisited","uri":"/1/01/useful-git-defaults-revisited/#git-config-file-locations"},{"categories":["Dev Tooling"],"collections":null,"content":"/etc/gitconfig For our purposes, we’re talking about settings in /etc/gitconfig, though they can certainly be used in other places as well. This config sets a couple safer defaults for pushing, makes git merge/diff/rebase a little more DWIM, causes the committer information, as well as the author’s, to be displayed by default, and allows for an easy way to override the system config on a per-system basis. (In case, say, you’re using puppet or the like to distribute this configuration across multiple hosts.) Note that we do not do some things that individuals may wish to do, as we’re aiming for “unobtrusive, reasonable universal defaults”, e.g. rebase.autosquash is not set to true. (Though the author highly recommends this setting.) ","date":"0001-01-01","objectID":"/1/01/useful-git-defaults-revisited/:0:0","tags":["git","sanity"],"title":"Useful systemwide git defaults -- revisited","uri":"/1/01/useful-git-defaults-revisited/#etcgitconfig"}]