Remaining more present by controlling information overload

A while back, I was made well aware of my, shall we say, un-presence. I’d be physically present with my family, but distracted to the point of absence. Usually, it was due to a device stealing my time from them. The phone in my pocket was becoming a time-and-attention thief.

A Facebook message. A Twitter reply. An email.

I am still not great at this — my kids would probably say I’m terrible — but I’ve come a long way and thought I’d share some of my tactics for helping reduce social media and email overload.

The point of this is to show how I use friction and delay to help keep myself from being pulled out of the present moment when it comes to notifications that would otherwise constantly distract me.

In short:

  1. Using web apps, not native apps
  2. Turning off most email notifications
  3. Rolling up the email notifications that remain

Using web apps, not native apps

Heresy: when I can, I use web apps instead of native mobile apps. This is first and foremost a matter of principle: I believe in the browser as a platform and the ability to produce rich, accessible applications in standard technologies such as HTML, CSS, and JavaScript. So, in support of this principle, I will opt to use a web application over a native device application unless the native application offers compelling features that I want.

And here’s a feature I don’t want: device notifications. If I get a Facebook comment or Twitter mention, I don’t care to know about it right away.

By using web apps, I am forced to go there rather than having that information pushed to me. In other words, I use that little bit of friction to my advantage.

Disabling notifications

For nearly all social media — Twitter, Facebook, etc. — I disable most email notifications. Facebook PMs and likes and comments… nope. Twitter follows… nope. I force myself to go to that information; I don’t want it pushed to me.

The combination of using web apps and disabling notifications means that I am often hours, if not days, behind on interactions on these platforms, by choice.

Rolling up the email notifications that remain

I still get a fair amount of non-personal (i.e. not person-to-person) email — newsletters, LinkedIn, GitHub, meetups, etc.

To stop those from distracting me as soon as I receive them, I use a free service called Unroll.me to manage that email. This service watches my inbox, and when it detects something that looks automated, it asks whether I want to unsubscribe, roll it up into a daily email, or keep it in the inbox.

After almost a year of using this service, I’ve unsubscribed from a lot of things I don’t care about, and rolled up many disparate communications into a daily email. I cannot recommend this service highly enough. (Caveat: it has access to all your email.)

Conclusion

How we interact with the digital world is a highly personal affair. The choices I’ve described above — ultimately to favor the present moment, and to interact with people digitally only when I explicitly choose to rather than at all hours of the day and night — clearly come with tradeoffs. I am slower to respond to people. I am probably not servicing certain relationships as well as I should. When I do interact on those services, it’s slightly less convenient.

But the real-time web — powerful and important as it may be — can also be a bug and not a feature, especially when it comes to interactions that, perhaps, aren’t urgent. In these cases, I give myself permission not to participate in real time.

The way I think about it is that these choices help me to favor the people I am with, right here, right now. And I cannot do that effectively if there’s a constant trickle of requests for attention coming from across the ether.


Jenkins-as-code: creating jobs from the command line during development

In the previous post in this series, I covered how to make a seed job for seed jobs via registration. In this post, I’ll cover my favorite development-time helper: running job scripts from the command line.

The problem

As our team was working on adopting this jenkins-as-code solution, I confess that local development was suboptimal. Because seed jobs pull from source control and then process job dsls, we’d have to build the dsl scripts, push to git, run the seed job, and run the built jobs, repeating as necessary till the job was working as desired. The workflow was slow enough to be annoying. Because, let’s face it, how often do you really get your automations right on the first try?

The solution

The solution ended up serendipitously coming from something Irina had built for the registration job… a way to “recover” that job on a Jenkins should something unfortunate happen to it, or to easily put that job onto a new Jenkins without manually configuring it. The idea: given a dsl script called recover_mother_job.groovy, whose purpose was to rebuild the mother-seed-job, you could do the following from the command line:

./gradlew rest -Dpattern=jobs/recover_mother_job.groovy -DbaseUrl=http://our-jenkins/ -Dusername=foo -Dpassword=bar

She found it in this bit of gold from the job-dsl folks.

Broader applicability

Eventually, it hit us: if we could use this for the mother seed job, why not for any job?

The workflow would look like this:

  1. You’ve got a target Jenkins server, either local or remote
  2. You’re editing your dsl script locally
  3. You want to quickly build that on the target Jenkins server, without pushing to SCM yet
  4. You run a single command in your terminal, then go check out the job in your target Jenkins
  5. Once you’re satisfied, you push to SCM

So we copied-and-slightly-modified Matt Sheehan’s rest runner into our jenkins-automation project, and then added the rest task to our starter project, which we use as the basis for all our Jenkins job repositories internally.

Try it out

Here’s what I want you to do right now:

  1. Identify an existing local or remote Jenkins to run this against
  2. Clone https://github.com/cfpb/jenkins-as-code-starter-project.git locally
  3. cd jenkins-as-code-starter-project/
  4. Create a new file in the “jobs” directory called “simple.groovy” and give it these contents:
job('hello-world-dsl') {
   steps {
     shell("echo 'Hello World!'")
   }
}

Now, from that same directory, run:

./gradlew rest -DbaseUrl=http://localhost:8080 -Dpattern=jobs/simple.groovy

If you need to pass credentials, also pass

-Dusername=<username> -Dpassword=<password>

You should see output like this:

:compileJava UP-TO-DATE
:compileGroovy
:processResources UP-TO-DATE
:classes
:rest

processing file: jenkins-as-code-starter-project/jobs/simple.groovy
Processing provided DSL script
hello-world-dsl - created

Go to your Jenkins, and behold your glorious new ‘hello-world-dsl’ job.

Now, keep playing with that job in your editor. Or add more jobs to that file (or other files). Run that command over and over, and see your job updated in Jenkins. This is about as close to an “F5 Refresh” Jenkins job development workflow as I’ve found.

Warnings

A few caveats:

  1. Don’t get in the bad habit of only running jobs from the terminal. Commit them to SCM and let seed jobs do the real work
  2. If you’re running against a Jenkins with default CSRF crumb protection, this won’t work
  3. The latest version of job-dsl-plugin has an “auto-generated DSL” feature that replaces the need to manually add new DSL APIs for plugins; it will flat-out break this method of job creation. For that reason, we have not adopted any of the auto-generated bits into our code and are sticking, for now, with either APIs that already exist or with configure blocks. That’s how important this tool has become (to me, anyway)

Next up: what about Workflow/Pipeline and Jenkinsfile?

The Workflow plugin, aka Pipelines, is standard as of Jenkins 2.0. It, too, is a Groovy DSL approach to configuring automations.

In the next post, I’ll cover the differences between job-dsl and Pipelines, and how I currently see the two living together in the Jenkins ecosystem.

Jenkins-as-code: registering jobs for automatic seed job creation

In the previous post in this series, I covered custom builders we’ve added on top of job-dsl-plugin. In this post, I’ll cover an innovative solution that’s made working with all these Jenkins-as-code repositories much easier. It centers around a seed job for seed jobs.

What are seed jobs again?

The job-dsl-plugin tutorial covers them thoroughly. In short, a seed job processes your job-dsl scripts and thus creates the Jenkins jobs from those scripts using the “Process Job DSLs” build step. You generally configure your seed jobs to listen for changes to the source code repo where you keep your jobs, such that your jobs are automatically rebuilt whenever changes occur.

The problem

I’ve mentioned “internal Jenkins-as-code repositories” several times throughout past posts in this series. Rather than have all our Jenkins jobs in a single monolithic repository, we believe that projects / project teams should be able to keep their jobs in repo locations that make the most sense to them, maybe even sitting right inside their main project repo (think: .travis.yml or Jenkinsfile, but on steroids). That does impose several burdens, however:

  1. finding those Jenkins job repositories
  2. manually configuring their seed job in all the Jenkinses in which those projects’ jobs should run

In an organization of any size, discovery is a problem, and we anticipate dozens (hundreds?) of Jenkins job repositories scattered across the organization. And since the whole point of jenkins-as-code is to get out of the pointy-clicky job creation business, requiring manual seed job configuration would violate the principle of the initiative.

Just for now, imagine that you have 20 projects. They all have their own jobs. You’ve tried like hell to de-snowflake them so that they’re practically automatable or templatable, but you’re still manually pointing-and-clicking all those jobs. Updates are a hassle (even with scriptler). So you go the job-dsl route as described in previous posts, you script your jobs, but now you still have to create the seed jobs for all those projects. Or you have a gigantic single seed job that listens to all those repos (a totally legit approach). Either way, you don’t want to do much pointing and clicking to create seed jobs.

We solve that via a Registration pattern.

Registering job repositories

We haven’t yet open sourced this code (we will), so I’ll provide all the meaningful bits in this post for now.

The idea is simple: a single configuration file that teams can add their Jobs repository to via pull request. Once merged, a “Mother Seed Job” runs and then creates a seed job for their jobs. It’s a seed job for seed jobs.

Even after reading the problem statement above, you might still be thinking: do you really need that? Like, can’t you just manually configure some seed jobs and move on? I thought the same thing. And in your org, maybe you don’t! It has turned out to really benefit us, though. More on that doubt, and trust, at the end of this post, because it’s a good story.

Configuration file

Let’s start with the configuration file we use. It’s kept in an internal git repo, and additions are made via pull request. It looks something like this:
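Since the real file is internal, here’s a hypothetical sketch, assuming a YAML format; the hosts, environments, and repo URLs are all made up:

aws:
  dev:
    - https://github.example.com/team-widgets/widgets-jobs.git
    - https://github.example.com/team-data/etl-jobs.git
  test:
    - https://github.example.com/team-widgets/widgets-jobs.git
heroku:
  all:
    - https://github.example.com/team-web/site-jobs.git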

We have multiple hosts (the above are fake), and we split those into traditional dev/test/prod/etc environments. Some repos will only ever have automations that live in one host/environment combination, and some will span multiple, so we account for both possibilities in this configuration.

The idea is simple: you put your repo with job-dsl jobs in here, based on what Jenkinses you want your jobs to be created in.

“all” is a special case that means what you’d expect: create this repo’s jobs in all the Jenkinses for this host.

Mother seed job

We then have the “Mother Seed Job” configured in all our Jenkinses. Here’s where it gets trippy: the mother seed job is itself a straightforward Freestyle job that runs a job-dsl script. The mother job pulls from two git repos: the jenkins-automation repo I talked about last time, and an internal git repo with configuration like the above (and, currently, the seed job script below, with some other stuff). All the interesting bits are in that job-dsl script.

The full script is a bit long to reproduce verbatim, so I’ll sketch its heart instead.
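Here’s a condensed, illustrative sketch. It assumes a YAML config named repos.yml shaped like the sketch above, and that JAC_HOST and JAC_ENVIRONMENT (explained below) are exposed to the script; it is not our exact code:

import org.yaml.snakeyaml.Yaml

// JAC_HOST and JAC_ENVIRONMENT are global variables configured on each
// Jenkins (explained below), exposed to this dsl script.
def config = new Yaml().load(readFileFromWorkspace('repos.yml'))
def hostConfig = config[JAC_HOST] ?: [:]
def repoUrls = (hostConfig[JAC_ENVIRONMENT] ?: []) + (hostConfig['all'] ?: [])

repoUrls.unique().each { repoUrl ->
    def repoName = repoUrl.tokenize('/').last() - '.git'
    job("seed-${repoName}") {
        scm {
            git(repoUrl)
        }
        triggers {
            scm('H/5 * * * *')   // rebuild this repo's jobs when it changes
        }
        steps {
            dsl {
                external('jobs/**/*.groovy')
                removeAction('DELETE')   // drop jobs deleted from the repo
            }
        }
    }
}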

This then creates seed jobs for each repo configured in the config file above, in the appropriate Jenkinses.

What’s the global variables stuff?

If you actually read that closely, you also noticed some global variables business going on. In short, we also use this seed job to generate a “globals.properties” file that we put on each Jenkins, so that we don’t have to manually manage global vars in Jenkins. That’s just another config file, which looks something like this:
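Another hypothetical sketch, following the same host/environment shape as the repos config, with made-up variable names and values:

aws:
  all:
    SONAR_URL: https://sonar.example.com
  dev:
    DEPLOY_HOST: dev-app.example.com
  prod:
    DEPLOY_HOST: app.example.com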


How does a Jenkins identify itself?

We have these config files with a host/environment matrix, and a mother seed job that creates seed jobs based on that configuration. So how does each Jenkins know which jobs are its own?

Two global variables that we manage outside of jenkins-as-code (via configuration management or manual global var config in Jenkins):

  • JAC_HOST
  • JAC_ENVIRONMENT

Those two global variables exist in all of our Jenkinses, and together they uniquely identify each one.

For example, for our Jenkinses in AWS, their JAC_HOST value will be “aws”. Then, because we only have one Jenkins master per environment, the JAC_ENVIRONMENT will be “dev”, “test”, “stage”, or “prod”.

The JAC_HOST and JAC_ENVIRONMENT values correlate exactly to the keys (aws, heroku, azure… dev, test, prod) in those config files.

There’s nothing special about these host or environment names; all that matters is that those two vars configured in Jenkins align with their corresponding names in the config files.

(Disclaimer: I’m using aws, heroku, and azure as examples; I cannot publicly speak to our infrastructure.)

Short story about trust

Irina M is ultimately responsible for leading design and implementation of the solution I’ve been writing about in this series. At one point, when work was winding down, she said she was working on a “mother seed job” (her words) to make it easier for teams to get their job repos in our Jenkinses. I told her then, more than once, that I didn’t really understand the problem.

She patiently explained it, but I still didn’t grok it as an actual-actual problem. But here’s the thing: as a team lead, people join your team who are smarter than you, and you have to trust them. Sure, you can be skeptical, but you have to be confident enough to let go of control. For a lot of people, trust and letting go are not easy. They’re not for me.

This is one in a countless number of reminders of how often that trust pays off. Had I said “No, it’s not worth it, please spend your time on other things” we would not have had this solution, Irina would’ve rightfully been disengaged, and on and on.

The enlightened among you are thinking, “Duh, dumbass”. If you’re not thinking that, I implore you: put your ego down and trust your team.

Next up: my favorite development helper

As our team was working on adopting this jenkins-as-code solution, I confess that local development was kind of a pain in the ass. Because seed jobs pull from source control and then process job dsls, we’d have to build the dsl scripts, push to git, run the seed job, and run the built jobs, repeating as necessary till the job was working as desired. The workflow was juuust slow enough to be annoying.

We solve that by adapting a command line utility, a tiny nugget of gold, unassumingly linked in the job-dsl-plugin wiki. And now, local development of job-dsl scripts is, for me, a pleasure. More on that in the next post.

Jenkins-as-code: creating reusable builders

In the first post in this series, I covered the problems we were having with job creation and maintenance and a very high level look at our solution. In the second post, I covered the job-dsl-plugin which underpins our solution.

In this post, I’ll dig into the Builders we’ve added on top of job-dsl-plugin, which do the following for us:

  • add sensible defaults for all jobs
  • make it easy to do the right thing for potentially complicated jobs or configurations
  • provide utilities for common decision-making and code blocks
  • establish a Builders pattern for use internally even if a job type isn’t all that reusable across our projects

Why am I writing about these?

While I think it’d be great if others were to take advantage of our jenkins-automation repo, that’s not my motivation for writing about our customizations. Rather, I want to use them as an example of what I consider to be sensible extensions on top of job-dsl-plugin that we’ve found well worth the investment, for the reasons listed above.

Reusable builders

job-dsl-plugin is Groovy-based, and the domain specific language (DSL) used to create jobs via text is Groovy. Thus it’s natural to follow additional Groovy idioms when building on top of job-dsl-plugin. Builders are one of those idioms.

Essentially, these builders are just wrappers around job-dsl. At their most basic, they provide sensible defaults.

Builders for sensible defaults

For example, we want all of our jobs to have:

  • Build discard configured (aka “log rotation”)
  • Emails for failure and fixed
  • Broken build claiming
  • Colorized log output

To achieve that in the Jenkins UI, for all of our jobs, would require much discipline, oversight, or both. So in the spirit of making it easy to do the right thing, that’s part of our BaseJobBuilder. In one of our jobs.groovy files — which, remember from last time, gets turned into Jenkins jobs via a seed job — we’d have something like this:
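(A sketch; the constructor parameters here are illustrative, and the real API lives in the jenkins-automation repo:)

// Parameter names are illustrative; see the jenkins-automation repo for the real API.
def myJob = new BaseJobBuilder(
    name: 'widgets-build',
    description: 'Builds the widgets project on every push'
).build(this)

// The builder returns a plain job-dsl job object, so regular job-dsl
// works for anything job-specific:
myJob.with {
    scm {
        git('https://github.com/example/widgets.git')
    }
    steps {
        shell('./gradlew clean build')
    }
}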

Here, BaseJobBuilder starts creating DSL for a new job, adds some goodies to it, and then returns the job-dsl job object, which we can further customize with plain old job-dsl.

Builders for complicated jobs / configurations

Another use case for Builders is to make it easy to do the right thing for potentially complicated jobs, especially if we want to provide the one right way to do something.

For example, we use a security tool called bdd-security for running penetration tests against our software, and we integrate that into our Jenkins pipelines. The Jenkins job for that can seem a little gnarly at first, and so we wrap it up in a BddSecurityJobBuilder, and then use it like so:
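(Again a sketch; the parameter names are illustrative, not the builder’s actual signature:)

// Parameter names are illustrative, not the builder's actual signature.
new BddSecurityJobBuilder(
    name: 'widgets-bdd-security',
    description: 'Runs bdd-security penetration tests against the dev site',
    targetUrl: 'https://widgets-dev.example.com'
).build(this)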

The “guts” of all our bdd-security jobs will then be exactly as we want them; should updates be required, we simply update the builder, and all jobs built from it get those updates within minutes. Obviously, a builder like this should still provide all the sensible defaults described above.

Common decision-making and code blocks

As we surveyed our Jenkins jobs, we observed some configuration patterns we realized we could simplify. First, as I mentioned in the first post, some jobs run in some Jenkinses, some run in all, some run in one. We needed a way to make that decision-making consistent and simple, without littering our Jenkins-as-code repositories with similar-but-different conditionals that mostly do the same thing. Thus, we created an Environment utility, which depends on a few global variables we ensure exist in all our Jenkinses. It looks like this in practice:
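(A sketch; the constructor and method names here are illustrative, not the utility’s exact API:)

// Illustrative API: decide whether to create a job based on which
// Jenkins this seed job is running in.
def environment = new Environment(binding.variables)

if (environment.isProd()) {
    // this job should only exist on the prod Jenkins
    job('widgets-prod-backup') {
        triggers {
            cron('@daily')
        }
        steps {
            shell('./backup.sh')
        }
    }
}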

We also use these utilities — a better word would probably be “blocks” — for adding custom DSL blocks, especially for gnarly configs that would require a configure block since a nice DSL API doesn’t exist.

Those utils are here.

Builders pattern for use internally

Finally, the fourth motivation for Builders was to establish a pattern for use internally in our Jenkins-as-code repositories, particularly for jobs or job types that don’t rise to a level of reusability that would make them useful in our public git repo, or that would be unduly burdened by premature genericizing (or for which we simply can’t justify the time to invest in genericizing).

For example, one of our projects has a common pattern of “Task” style jobs. These aren’t build pipelines or related to deployments; rather, they’re just… for running stuff. They almost all use a python virtual env, often run on some type of schedule, and, well, other stuff.

Here’s a stripped down version, and this is kept in an internal (non-github.com) repo:
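Since ours is internal, this is a hypothetical stand-in that captures the shape; it assumes jenkins-automation’s BaseJobBuilder is on the classpath, and everything else is illustrative:

// Hypothetical internal "task" builder; details are illustrative.
class TaskJobBuilder {

    String name
    String description
    String command        // the stuff to run
    String cronSchedule   // optional schedule

    def build(def dslFactory) {
        // start from BaseJobBuilder so task jobs get all the sensible defaults
        def job = new BaseJobBuilder(name: name, description: description).build(dslFactory)
        job.with {
            if (cronSchedule) {
                triggers {
                    cron(cronSchedule)
                }
            }
            steps {
                // run the command inside a Python virtualenv
                shell("virtualenv .venv && . .venv/bin/activate && ${command}")
            }
        }
        job
    }
}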

And then jobs in that repo can use it just like any other Builder:
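(Again illustrative:)

new TaskJobBuilder(
    name: 'nightly-data-refresh',
    description: 'Refreshes the reporting data every night',
    command: 'python refresh_data.py',
    cronSchedule: '@midnight'
).build(this)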

It may come to pass that we recognize patterns in these custom internal builders, and at that point put in the time to make them generic and pull them into our public repo.

Getting started with the custom builders

The jenkins-as-code-starter-project repository makes it easy to get going with these customizations. We use it as the starting point for all our internal Jenkins-as-code repositories.

Next up: orchestrating project job repositories

I’ve mentioned “internal Jenkins-as-code repositories” several times throughout this post. Rather than have all our Jenkins jobs in a single monolithic repository, we believe that projects / project teams should be able to keep their jobs in repo locations that make the most sense to them. That does impose several burdens, however:

  1. finding those repositories
  2. configuring their seed job in all the Jenkinses in which those projects’ jobs should run

We solve that via a Registration pattern, which I cover in the next post.

Jenkins-as-code: job-dsl-plugin

In the first post in this series, I covered the problems we were having with job creation and maintenance and a very high level look at our solution.

In this post, I’ll go more in depth with the solution, covering the job-dsl-plugin. I’m not going to recreate others’ documentation here, so for actual instructions, I’ll simply link to existing docs.

job-dsl-plugin

I’m assuming if you’re reading this that you’re quite familiar with creating Jenkins jobs, point-and-click style, via the Jenkins GUI. job-dsl-plugin is a Jenkins plugin that enables you to define Jenkins jobs in plain text, using a really nice Groovy-based domain specific language (DSL).

Basic example

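To give you a taste, here’s a minimal script defining two jobs (the names, URL, and commands are illustrative):

job('widgets-build') {
    scm {
        git('https://github.com/example/widgets.git')
    }
    triggers {
        scm('H/5 * * * *')
    }
    steps {
        shell('./gradlew clean build')
    }
}

job('widgets-nightly-tests') {
    triggers {
        cron('@midnight')
    }
    steps {
        shell('./gradlew integrationTest')
    }
}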
There’s no limit to the number of jobs you can keep in a single script.

There’s no limit to the number of scripts you can have.

At my organization, we’ve observed that some teams like to keep everything in a single “jobs.groovy”; some teams like a single job per script; some split jobs across functional lines, such as “Build Flows” and “Management Tasks”. job-dsl-plugin imposes no limits on how you structure your jobs on the file system. The only requirements are that the script files have valid Groovy script names and that their contents be valid Groovy.

I’ll go more in depth momentarily, but I imagine the burning question right now is: how do I turn this DSL into actual Jenkins jobs?

Seed jobs

Your job scripts live in source control, but how do you turn them into Jenkins jobs? The answer: seed jobs.

A seed job is a job you create in Jenkins, pulling in the repo where you keep your job scripts, with a Build Step of “Process Job DSLs”. You tell that build step where your job scripts live in the workspace.

Then, you run the job. If all is well, it’ll create the jobs as you’ve configured them in your groovy script(s).

It’s a good idea to set up your seed job to run when your jobs SCM repo is updated, obviously, so add that SCM trigger.
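And because a seed job is just another Jenkins job, nothing stops you from expressing seed jobs themselves in job-dsl once you’re bootstrapped. A sketch, with a made-up repo URL:

job('widgets-seed') {
    scm {
        git('https://github.com/example/widgets-jobs.git')
    }
    triggers {
        scm('H/5 * * * *')   // re-run whenever the jobs repo changes
    }
    steps {
        dsl {
            external('jobs/**/*.groovy')
        }
    }
}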

For a fuller treatment of seed jobs, follow the tutorial.

How does this whole thing work, anyway?

When you create a job in the Jenkins GUI, that job is stored in config.xml.

The job-dsl-plugin is simply another way of creating config.xml; in this case, it’s by processing a DSL, not by pointing and clicking.
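To make that concrete, a trivial bit of DSL like this (illustrative):

job('hello') {
    steps {
        shell('echo hi')
    }
}

produces a config.xml whose builders section looks roughly like this:

<builders>
  <hudson.tasks.Shell>
    <command>echo hi</command>
  </hudson.tasks.Shell>
</builders>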

DSL support for plugins is currently added in 3 ways:

  1. by contributing to the job-dsl-plugin itself. Many, many Jenkins plugins have already been added, and the plugin is currently very actively maintained and updated frequently.
  2. by plugins implementing the job-dsl Extension points
  3. the newest method, the auto-generated DSL

This last method does have some potentially serious limitations, namely that it only runs in Jenkins and not via the command line (an important one for us; more on that in a later post).

But what if a plugin hasn’t been added to the DSL API? Enter configure blocks.

Custom configure blocks

When we were investigating text-based job solutions, one requirement was that any solution we chose must not limit us in any way. We needed assurance that anything we could do in the Jenkins GUI, we could do in text.

job-dsl-plugin satisfies this requirement. While all core Jenkins job behavior and many plugins are available directly in the DSL — for example, “triggers” and “cron” and “steps” — anything not currently supported in the DSL is fairly easily added to a job via a configure block.

These blocks — just like all DSL configurations — ultimately result in modifications to the job’s config.xml file. The configure block is a direct mapping of keywords to XML elements and attributes:
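For instance, sketched with placeholder values:

job('widgets-security-scan') {
    configure { project ->
        project / 'builders' << 'com.checkmarx.jenkins.CxScanBuilder' {
            serverUrl('https://checkmarx.example.com')
            username('jenkins-svc')
            password('s3cret')
            // ... and so forth
        }
    }
}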

A block like this will result in the job’s config.xml having a new element, ‘com.checkmarx.jenkins.CxScanBuilder’, added under the project’s builders, with sub-elements for serverUrl, username, password, and so forth. The doc, linked above, clearly illustrates how to use custom configure blocks.

There is an up-front cost to using configure blocks: first, you have to find or create a job and configure it in the Jenkins UI, then inspect the generated XML to see what elements and attributes you need to use in the configure block. For us, in the handful of cases we’ve needed to use custom configure blocks, it’s well worth the few minutes of manual configuration to learn what we need to learn. Plus, configure blocks are easily reusable.

Learning more

The job-dsl API viewer is pure gold.

As is the job-dsl wiki.

Next up: our custom builders

Now that I’ve covered the job-dsl foundation, in the next post I’ll describe our approach to creating reusable job builders for the patterns we saw emerge across projects.