Goin’ back to Windows: Docker

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

Note: Running Docker on Windows requires Windows 10 Pro. The necessary virtualization features are not available on Windows 10 Home.

Table of Contents

This post is juuuust long enough that it probably helps to know what I’ll cover:

  1. My hopes and expectations for working with Docker on Windows from within WSL
  2. Docker on Windows in Action (i.e., screenshots of it all working)
  3. Enabling virtualization
  4. Installing Docker for Windows
  5. Options for a Docker client in WSL (by far the longest part of this post)
  6. docker-compose in WSL
  7. VSCode Integration

Let’s get started.

Beginning with the end in mind

As I mentioned in the first post in this series, my motivation for moving back to Windows was that I needed a new laptop. It needed to be a solid developer laptop, and nowadays, that means being able to work with Docker.

After a bit of research, I knew:

  • I’d need Windows 10 Pro, since the required virtualization features aren’t available on Windows 10 Home
  • Docker should work from within Windows Subsystem for Linux (WSL), though I wasn’t sure of the particulars

Ultimately, I wanted the following:

  • All docker functionality — running public docker images, building and running my own images, etc — from within WSL
  • All docker-compose functionality from within WSL
  • Easy to keep all docker software up to date and not stuck on old versions
  • Practically seamless use of Docker… at least as easy as on Linux, and certainly better than the docker-machine rigamarole I had become accustomed to on Mac.
  • In short: a first-class Docker developer experience

I’m pleased to report that so far, all those expectations have been met. Docker on Windows has been, for me, a joy.

Docker on Windows in Action

Here are some screenshots of Docker on Windows in action:

Running Docker from within WSL
Running Docker from PowerShell
Building a Docker image from within WSL
Running Docker Compose from within WSL
It’s a modern version of stable Docker; switching to Edge is pretty simple, as well


As a bonus, here’s VS Code Integration:

Docker VS Code Integration

Enabling Virtualization

Docker for Windows requires virtualization to be enabled, which probably doesn’t happen by default out of the box (it didn’t for me, at any rate).

On this new laptop, I followed these instructions which worked perfectly. On an older PC, they didn’t work and I needed to figure out how to get into the BIOS a different way (Shift-F2 or Shift-F8 at startup, IIRC).

Docker For Windows

Installing Docker for Windows was trivial with chocolatey:

choco install docker-for-windows

Obviously you can install with normal old download-and-double-click-the-exe, as well. Once installed, you can turn all manner of knobs if you need to:

Docker for Windows configuration

In addition, in general, I’ve found the docs to be fantastic, including the troubleshooting docs.

Docker in WSL

During my research I found 3 separate ways to run a Docker client from within WSL that connects to the Docker for Windows daemon:

  1. Use the Windows Docker client
  2. Use the Linux Docker client over TCP without TLS
  3. Use the Linux Docker client with a “relay” between WSL and Windows

Here’s a quick rundown of trade-offs I’ve seen so far for each of these 3 approaches:

Use the Windows Docker Client

Jessie Frazelle explained the seeming-magic of WSL internals in this excellent post. Bottom line: you can simply run the Windows docker.exe (which comes bundled with Docker for Windows) from within WSL, and it works really well.

Here’s an ugly example that uses the full path to docker.exe, just to demonstrate:

Running the Windows Docker client from within WSL

If you wanted to stick with this option, you’d want to symlink docker to that /mnt/c/Program Files.../docker.exe path so that you can simply run docker. You’d do that with something like:

sudo ln -s '/mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe' /usr/local/bin/docker

Pros of this approach:

  • Easy to do and works great
  • Always using a version of the docker and docker-compose clients that match the daemon
  • No need to maintain/update docker or docker-compose software in WSL
  • Surprisingly (to me), does not set 777 permissions on any files added from a Windows filesystem into the docker image (more on that in a bit)


Cons of this approach:

  • There are bound to be differences between the Windows and Linux Docker clients, though I haven’t found any meaningful ones yet
  • As mentioned in the “relay” blog post below, using the Windows Docker client means its behavior probably won’t match the Linux Docker client man pages
  • Perhaps you have a desire/need/use case for always using the Linux client; for example, maybe you want to guarantee that the behavior you see locally is the same as in your Linux-based CI/CD system (such as Jenkins)

Use the Linux Docker client over TCP without TLS

The next two options use the Linux Docker client rather than the client that ships with Docker for Windows.

If you intend to use the Linux Docker client, do not YOLO apt install docker.io. Follow the documented Linux client install instructions.

OK, so: for this “TCP without TLS” option, Nick Janetakis has a great blog post on how to use the Linux Docker client from within WSL using the Docker for Windows daemon, and I won’t attempt to recreate any of those instructions here.

Aside from installing docker-ce from within WSL, it’s otherwise just a 2-step affair that you only need to do once:

  1. Check a checkbox in the Docker for Windows config screen
  2. Add an environment variable export to your WSL ~/.bashrc file
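For reference, that export is a one-liner (2375 is the port used by the “Expose daemon on tcp://localhost:2375 without TLS” checkbox):

```shell
# Append to ~/.bashrc: point the Linux docker client at the
# Docker for Windows daemon
export DOCKER_HOST=tcp://localhost:2375
```

After re-sourcing ~/.bashrc, docker version from within WSL should report both the client and the server.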

One small note: when I did this, I did need to kill Docker for Windows and restart it after checking the checkbox, because the initial checking seemed to put it into a weird state. No idea whether that’s just a fluke.

Pros of this approach:

  • Easy to do, appears to work well
  • Using the actual Docker Linux client; behavior should match man pages and other usage of a Linux Docker client, such as within a Linux-based CI/CD system (e.g. Jenkins)


Cons of this approach:

  • That scary “makes you vulnerable to remote code execution attacks. use with caution” language that accompanies the checkbox you check. I really do not know how exploitable this threat vector is… I am not a CISO, lawyer, doctor, rocket surgeon, etc.
  • Need to maintain/keep updated the Linux Docker client software in addition to Docker for Windows

Personally, that first con raises enough of a hackle for me that I won’t use it, especially since this third option, up next, was easy to get going and hasn’t been a nuisance to me in practice.

Use the Linux Docker client with a “relay” between WSL and Windows

A third option — the one I actually started with — is to use the Linux Docker client but without that “TCP without TLS” checkbox. In this approach, you set up a relay between WSL and the Docker for Windows daemon.

In short, in addition to installing the docker-ce Linux Docker client, it involves:

  1. One-time creation of the docker-relay binary
  2. When working with the Linux Docker client, starting that relay

In addition, I did update my /etc/sudoers file so that I wouldn’t be prompted for a password every time I ran the relay.
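For what it’s worth, the sudoers entry was along these lines (a sketch; the username and relay path are assumptions, so adjust for your setup, and edit with visudo):

```
# /etc/sudoers.d/docker-relay
marc ALL=(root) NOPASSWD: /usr/local/bin/docker-relay
```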

Pros of this approach:

  • Easy (ish) to do; works great
  • Using the actual Linux Docker client; behavior should match man pages and other usage of a Linux Docker client, such as within a Linux-based CI/CD system (eg Jenkins)
  • Doesn’t suffer from the potential security vulnerability of the TCP without TLS approach, above


Cons of this approach:

  • Have to remember to start the relay when working with the Linux Docker client (or auto-start it somehow when opening WSL)
  • Need to maintain/keep updated the Linux Docker client software in addition to Docker for Windows

I’ve been using this option for a few months and it’s worked fine. Remembering to start the relay hasn’t been a nuisance in practice.
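If remembering ever does become a nuisance, something like this in ~/.bashrc would auto-start the relay (a sketch; the socket path and relay command are assumptions, so adjust to match your relay setup):

```shell
# Start the relay only if the Docker socket isn't there yet
if [ ! -S /var/run/docker.sock ]; then
  sudo docker-relay > /dev/null 2>&1 &
fi
```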

A note on file permissions

Caveat: This might not matter to you at all!

I mentioned above that when doing docker build using the Windows Docker client, any files added from a Windows filesystem to the docker image do not get 777 permissions. In addition, the Docker client issues a warning about file system permissions (more details here). Which raises the question: “Why on earth would you suspect that they would get 777 permissions?”

The answer: when you docker build from within WSL using the Linux client, any files you add do get 777 permissions.

For example, I keep all my development projects on the Windows filesystem, starting at c:\dev\projects. And from within WSL, I access them from /c/dev/projects. Yes, that means even from within WSL, I’m working on a Windows filesystem for all dev projects. If you list those files, you’ll see that everything gets world permissions (i.e. 777).
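As a quick refresher, 777 (“world permissions”) means read/write/execute for owner, group, and everyone else; stat will show you a file’s octal mode directly:

```shell
# Demonstrate octal modes with a throwaway file
f=$(mktemp)
chmod 777 "$f"
stat -c '%a' "$f"    # prints: 777
```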

And when you build an image from the Linux client, if your stuff is on the Windows file system, any files that get added will by default retain those world permissions. Here are two examples, one after building with the Linux client and one after building with the Windows client. The entrypoint is set to ls -la /entrypoint.sh so that you can easily see an example of the file permissions that I’m talking about:

After building with Linux client:

After building with Windows client:

This might not affect you if you’re building images whose Dockerfile is on the Linux file system within WSL. It might not matter to you at all. Or you may choose to just update your Dockerfile to explicitly set permissions on any files/directories that get added to the docker image.
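For that last option, the fix is a one-liner after the COPY/ADD. A minimal sketch, using the entrypoint.sh example from the screenshots (the base image is just for illustration):

```
FROM alpine
COPY entrypoint.sh /entrypoint.sh
# Normalize permissions so the image comes out the same no matter
# which client or filesystem did the build
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```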

Docker Compose

All of the options above for having the Docker client communicate with the Docker for Windows daemon apply to docker-compose.

If you choose to stick with using the Windows clients, you’d just want to symlink the Windows docker-compose.exe to docker-compose, similar to the docker.exe symlink shown above.
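The path below mirrors the docker.exe path from earlier (verify it on your install), so that symlink would be:

```
sudo ln -s '/mnt/c/Program Files/Docker/Docker/resources/bin/docker-compose.exe' /usr/local/bin/docker-compose
```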

And if you choose to go with the Linux client, be sure to follow the documented instructions for installing docker-compose.

VSCode Integration

For VSCode integration:

  1. Install the Docker extension (ctrl-shift-x, search “docker”, Install)
  2. Optionally, plug in your Docker Hub credentials if you want to navigate images that you’ve pushed to Docker Hub from within VS Code

Here’s that image again from above. Note the GUI panels on the left that list images and containers, and note the terminal integration underneath the editors.

Docker VS Code Integration

This is interesting to me: regardless of which Docker client option you go with for interacting with the Docker for Windows daemon, VSCode is going to use the Windows client for its GUI integrations, such as listing images and containers. However, for interactions with those items — such as right-clicking an image and running it, or attaching to a running container — it’s going to use whatever shell you have configured VSCode to use by default. So in the example above, note that I have configured it to use Bash (via WSL). Consequently, interacting with those images and containers from that configured shell is going to use whichever Docker client option you chose from the options above.
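For reference, pointing VSCode’s integrated terminal at WSL bash was a one-line settings.json change at the time (the setting name has moved around across VSCode versions, so treat this as a sketch):

```
// settings.json
"terminal.integrated.shell.windows": "C:\\Windows\\System32\\bash.exe"
```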

Wrapping up

When I embarked upon this Goin-back-to-Windows experiment, I knew that Docker would be a kind of bellwether for me. If it worked the way I hoped, I figured this experiment would most likely be a pretty big success overall. And if it was janky and felt second-class, I’d most likely end up ditching the experiment and dual-booting a Linux distro onto this new laptop.

I am, so far, very happy with the Docker experience on Windows.

Goin’ back to Windows: multiple terminal windows with ConEmu

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

Back when I was a full-time software developer, working on a Windows machine, I rarely needed cmd. I’d write batch files, sure, but I could launch those with Launchy or AutoHotKey or a toolbar mouse click.  Having a cmd window open all day just wasn’t a thing, for me. The only thing I might need a shell for was subversion or git, but most likely I used file system integration (i.e. point/click… boo, I know) or whatever IDE I was using at the time.

When I switched to Mac, and eventually Linux, having a shell running all day long was just how things worked. Right now, on my work laptop (MacBook Pro), I have half a dozen+ iTerm2 tabs open.

When you have a powerful shell, you use it, a lot.

Multiple terminals on Windows

WSL made a powerful Linux shell on Windows a reality. But as of this writing, opening an Ubuntu bash shell gets you only a single window. Sure, you can open multiple, separate shells, but that’s like web browsing pre-tabbed-browsing. No thanks.

ConEmu makes it as simple as iTerm2 or Terminal to have multiple shells on Windows. It’s even easy to have multiple different terminals within a single ConEmu window. In my experience, the combination of WSL, supplemented with ConEmu, has made Windows finally stop feeling like a second-class citizen development environment.

Check it out:

Just like Terminal (Linux) or iTerm2 (Mac), you can use the keyboard to create new tabs, cycle through tabs, and the like. apt install tmux and you can tmux, too.

ConEmu is highly customizable, though I tend to keep things default and just add keyboard shortcuts. My current setup is that an Ubuntu bash shell is the default shell, activated by the default win-w, and I’ve assigned win-p to Powershell. Here’s how to do that:

Since I’ve moved back to Windows, the combination of WSL for a Linux experience, and ConEmu for managing multiple terminal windows, has been a delight.

If you use Chocolatey, install it with choco install conemu, and off you go. Otherwise, download it at https://conemu.github.io/

Next post: Docker on Windows!

Goin’ back to Windows: Launchy

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

A very long time ago, before I had ever used Mac or Linux for personal computing, someone had convinced me of the value of a “launcher”: a program, usually invoked via alt-space, that would pop up a box and help you find stuff on your computer, launch programs / scripts, do quickie things like calculations, and otherwise keep your hands on the keyboard and off of the mouse.

At that time, the only game in town for Windows was Launchy.

When I started using a Mac for work, I tried out Spotlight, which is the default Mac launcher, and it felt OK but not even on par with Launchy. I quickly discovered Quicksilver and have stuck with it.

When I moved to Linux a few years ago, I started using Kupfer, though I don’t recall why. It worked just fine, but I was a n00b and had I known about GNOME-Do, I probably would have used that.

When I moved back to Windows, one of the first things I did was look for the current state of launchers on Windows. And, to my surprise, it seems that Launchy is still a favorite. Here’s what it looks like, exactly the same as it did in 2009:

Why not just the win key?

The win key is fine as an application launcher. It’s easy, fast, and just works.

What I like about Launchy, though, is that it also makes it easier to navigate the file system quickly. For example, let’s say I keep all my code in c:\dev\projects. If I want to navigate to that natively, I could hit the win key and then type c:\dev\projects. Or I could open up explorer and point-and-click to it.

But with Launchy, it’s as easy as this:

This is possible because Launchy lets you configure where it looks for stuff. In the case above, I can configure Launchy to catalog files or folders in a certain location:

Finally, Launchy includes a catalog of plugins and comes with some useful defaults. For example, I often need to do simple-ish calculations, and Launchy makes that really easy thanks to the Calc plugin:


This is all certainly not life-changing, earth-shattering stuff. But I spend a lot of time on a computer, and pointing-and-clicking all day long is inefficient and unenjoyable. I like tiny time-saving, joy-boosting things, and a launcher like Launchy serves nicely.

Next post: Multiple terminal windows with ConEmu


Lambda: using AWS SAM Local to test Lambda functions locally on Windows

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

AWS SAM Local is a “CLI tool for local development and testing of Serverless applications.” It uses Docker to simulate a Lambda-like experience. The docs explain well how to get started, and the GitHub repo has lots of samples as well. As of this writing, it supports python, java, .net, and nodejs.

This is a quick post to show how to use it in Windows Subsystem for Linux (WSL) and Docker For Windows.

Installing on Windows

The instructions recommend installing with npm. That didn’t work for me, giving me file-not-found errors. I’m not sure whether that’s a problem with npm inside of WSL, a problem with the current installer, or something else.

I did get it installed by using the next option, which is go get github.com/awslabs/aws-sam-local. I then aliased that to sam: alias sam='aws-sam-local'

Running on Windows

Out of the box, using Docker for Windows as the Docker daemon, the sam command itself worked fine, but actually invoking a function did not work for me when using the docker client on Ubuntu within WSL. With a simple python hello-world example, I’d get this:

marc@ulysses:sam-local-play$ sam local invoke "HelloWorldFunction" -e event.json
2018/01/20 08:59:18 Successfully parsed template.yml
2018/01/20 08:59:18 Connected to Docker 1.35
Unable to import module 'main': No module named 'main'
END RequestId: c9edd13a-000e-49bd-a4a7-a8a23258a03b
REPORT RequestId: c9edd13a-000e-49bd-a4a7-a8a23258a03b Duration: 1 ms Billed Duration: 0 ms Memory Size: 0 MB Max Memory Used: 19 MB
{"errorMessage": "Unable to import module 'main'"}

In the instructions below, I’ll tie together several GitHub issues and a gist from three separate GH users.

First, GH user Kivol, in this GH issue comment, shows a 2-part solution:

1. bind-mount /mnt/c to /c, and then within Ubuntu make sure you’re working from /c (from this comment on another GH issue from aseering)

2. pass --docker-volume-basedir in the sam invoke command

Here’s how to do all that (again, mostly copying from a few GH issues and adding some color commentary):

# bind-mount /c. this mount lasts as long as your current terminal session; instructions below for how to make this persistent if this works for you
$ sudo mkdir /c
$ sudo mount --bind /mnt/c /c

# now you can use /c instead of /mnt/c
$ cd /c/path/to/project
$ sam local invoke --docker-volume-basedir $(pwd -P) --event event.json "HelloWorldFunction"

And voila, it worked. I get this beautiful output:

2018/01/20 12:16:08 Successfully parsed template.yml
2018/01/20 12:16:08 Connected to Docker 1.35
2018/01/20 12:16:08 Fetching lambci/lambda:python3.6 image for python3.6 runtime...
python3.6: Pulling from lambci/lambda
Digest: sha256:0682e157b34e18cf182b2aaffb501971c7a0c08c785f337629122b7de34e3945
Status: Image is up to date for lambci/lambda:python3.6
2018/01/20 12:16:09 Invoking main.handler (python3.6)
2018/01/20 12:16:09 Mounting /c/dev/projects/lambda-projects/sam-local-play as /var/task:ro inside runtime container
START RequestId: d7a9dcb0-751e-445d-9566-a025f4e804b0 Version: $LATEST
Loading function
value1 = value1
value2 = value2
value3 = value3
END RequestId: d7a9dcb0-751e-445d-9566-a025f4e804b0
REPORT RequestId: d7a9dcb0-751e-445d-9566-a025f4e804b0 Duration: 4 ms Billed Duration: 0 ms Memory Size: 0 MB Max Memory Used: 19 MB

Note: that cd /c/... is really important! If you do all the above but stay in your shell at /mnt/c — like I did 🙂 — it’s still not going to work.

Big thanks to GitHub users Kivol and aseering for putting this together!

Persistently mounting /c

If the above worked for you, then you’ll probably also want to persistently mount /c so that you don’t have to redo it every time you want to use sam from within Linux / WSL.

If you’ve got a Linux background, you’re thinking: just mount it in /etc/fstab. It seems that as of now, anyway, WSL isn’t loading /etc/fstab entries when you open a new shell (i.e. it seems as if you have to run mount -a every time), at least according to comments in this MSDN post and this WSL issue.

Fortunately, linked in those comments, sgtoj has a gist that sets all this up nicely. I saved this locally, ran it once, and now /c is mounted for all new Linux sessions.
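If you’d rather not run someone else’s script, the manual equivalent is a guard in ~/.bashrc that redoes the bind mount only when it’s missing (a sketch; assumes util-linux’s mountpoint, which ships with Ubuntu):

```
# Re-create the /c bind mount on shell startup if it's not there
if ! mountpoint -q /c; then
  sudo mkdir -p /c
  sudo mount --bind /mnt/c /c
fi
```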


Coming from doing all personal development on Linux for the past 3 or so years, these kinds of hacks are disappointing. So far in this experiment with going back to Windows, these hacks have been few, and so far for me have all been related to wanting to use a docker client inside of Linux / WSL. More on that in a future post.

Suffice to say: yeah, it’s hacky, but it’s not that bad. Annoying, sure, but certainly not enough to sully the overall experience so far in moving back to Windows. I can live with this one.

A note on aws-sam-local and Powershell

After initially failing to get aws-sam-local running successfully within WSL, I figured I’d try it out all on the Windows side of the house. I ran into problems there, too. First, using go get to install it, I got path-too-long errors. WTF. That led me to choco install nodejs and use the npm install route. That did work, and sam invoke worked fine.

So far, this is the first time that working in WSL was kind of a shit-show. Thankfully smart people figured out how to get it working correctly. I am always grateful when I find answers in GitHub comments along the lines of “maybe sharing this will help.”

I really did not want to have to use Powershell for this. Not because I don’t like Powershell, but mostly because it kind of pierces the veil of the single development experience I’m trying to achieve, I guess. It’s a bit of a context switch to be doing most of the work in WSL and then, for this one thing, needing to pop over to Powershell, which also means maintaining duplicate installs of software in Windows land and WSL land. A small thing, for sure, but I’d like to avoid it if possible.

Next post: Launchy

Windows IPv6 slow-or-broken: resolved

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

Chocolatey slowness?

When I originally posted about package management with Chocolatey, I mentioned two problems I had on a brand new laptop: 1) inability to download large packages; 2) general slowness when downloading.

Turns out, these are not Chocolatey’s problems at all.

Wait… Powershell too?

I noticed when working in Powershell that curl, which is just an alias for Invoke-WebRequest, was taking a really long time. Simple commands were returning results like this:

PS C:\WINDOWS\system32> Measure-Command { curl https://microsoft.com }

Days : 0
Hours : 0
Minutes : 0
Seconds : 43
Milliseconds : 316
Ticks : 433164486
TotalDays : 0.000501347784722222
TotalHours : 0.0120323468333333
TotalMinutes : 0.72194081
TotalSeconds : 43.3164486
TotalMilliseconds : 43316.4486

It would consistently take about 43 seconds.
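As a sanity check on that output: a .NET tick is 100 nanoseconds, so the Ticks value squares with TotalSeconds:

```shell
# 433164486 ticks x 100 ns/tick = 43.3164486 seconds
awk 'BEGIN { printf "%.7f\n", 433164486 / 1e7 }'    # prints 43.3164486
```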

Resolution: my IPv6 settings

I narrowed it down to a problem on my network — either the main PC, router, or modem — by trying out the curl command above while connected to my phone’s WiFi hotspot instead of our home router.

I’ll spare the sleuthery for later and cut to the chase:  I resolved the problem two separate ways:

  1. by setting the IPv6 DNS settings in my wireless router (A D-Link DIR-880L); I did not stick with this
  2. ultimately, by finding the broken DNS setting in my primary PC’s IPv6 config. This is the one I went with, but the first might be instructive or useful, so:

In my router’s config, I simply set the Primary and Secondary DNS server settings to Google’s DNS.
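For reference, Google’s public DNS is 8.8.8.8 and 8.8.4.4 over IPv4, and over IPv6:

```
2001:4860:4860::8888
2001:4860:4860::8844
```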

Once I did that, the IPv6 connection in my router went from Not connected to Connected. My IPv6 readiness score went from 0/10 to 10/10, Powershell curl commands returned in reasonable amounts of time, and Chocolatey was both fast and able to download large packages (300+MB with no problem).

However, I couldn’t let go of something… why would I need to do this anyway? It occurred to me that a very, very long time ago, maybe as far back as 2010 when I first bought our current home PC, I had monkeyed with a DNS setting for some reason or another related to my job at the time.

So I went into the IPv6 settings (here’s how), and sure enough in the DNS tab I had overridden the default with some mumbo-jumbo I’ve long forgotten about. I set it back to the default, undid the changes I had made in the router, and voila, same expected behavior with Powershell, Chocolatey, and IPv6 readiness.
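If you prefer a shell to clicking through adapter settings, netsh can surface this kind of leftover override (illustrative; run from an elevated prompt and substitute your interface name):

```
netsh interface ipv6 show dnsservers
netsh interface ipv6 set dnsservers name="Wi-Fi" source=dhcp
```

The second line resets the interface to automatically assigned DNS, which is what the “Obtain DNS server address automatically” radio button does in the GUI.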

The End. Stop here if you don’t care about how I figured this out.

Sleuthery for those who like stories about troubleshooting

Some people like stories about troubleshooting — I know I love reading them — so here’s the story of troubleshooting this problem.

It is admittedly crazy on first take: that a buried-kinda-deep TCP/IP setting on one computer would not affect that computer, but would affect another computer on the network.

When I noticed the problem with Chocolatey originally, I saw that it was timing out in Invoke-WebRequest. That’s what eventually led me to even be curious about the behavior of curl / Invoke-WebRequest in Powershell itself, outside of Chocolatey. It just took me a few weeks (and a day off of work) to make time to investigate.

I noticed that initial requests were taking about 43 seconds, and a second request would take the same time as well. Then, subsequent requests would complete quickly. After a few seconds of waiting, they’d return to 43 seconds. A very helpful person on the StackOverflow post I made provided a quickie Powershell script to gather some data, which confirmed the above behavior.

I tried curl in Ubuntu via Windows Subsystem for Linux, and it looked fine.

I disabled, and then uninstalled, Antivirus. No effect.

I then connected to my phone’s WiFi hotspot instead of our home wireless, and voila, everything worked fine.

Now that’s interesting. Sounds like a network problem. Let’s try to isolate the various components in the home network and pin one down as the culprit.

I went to our primary PC and tried an Invoke-WebRequest in Powershell, and it was fine, too.

So here’s where I was: 1 computer on the network worked fine, and 1 didn’t. What’s the difference?

Turns out, sometime within the past year, on the home PC, I needed to disable IPv6 for something related to Comcast email and Outlook, and I remembered that.

So on the laptop, I tried disabling IPv6 just as I had on the home PC, and that solved the problem on the laptop.


Except… why? Why did that work? It made no damn sense. I hated not understanding that.

At first, I tried investigating the problem via the wireless router. I went into the configs and everything was at the default, but I also noticed that it was telling me that IPv6 was Not Connected. Weird. I tried monkeying with a few settings whose purpose I didn’t know, and that made no difference. I then returned it to the default setting and on a whim looked up Google’s DNS IPv6 servers and plugged them in.

After restarting the router, the router showed me Connected for IPv6, and back on the laptop I re-enabled IPv6 and everything seemed to work fine.

To be clear: changing DNS here wasn’t a momentary stroke of brilliance. Often, my troubleshooting strategy boils down to “I wonder what happens if I twist this knob,” and that’s what this was. In my line of work, I’ve seen DNS cause all manner of network goofiness, so this was kind of like “hmmm…. here’s an empty text field related to DNS, and since DNS can act-a-fool, I wonder what happens if I put something legit in it.”

Fantastic-er. Can I be done now?

Why? I mean, sure, it worked, but why would I need to override DNS in the router?  That, also, just didn’t sit right. And I could not let it go.

The only thing I could think of at this point was that there was something wonky about the home PC’s IPv6 settings, and overriding the DNS settings in the router was working around that wonkiness.

I then went into the home PC’s IPv6 settings, and everything looked default.

Clicked “Advanced”, and then “DNS”, and that’s when I saw it: a radio button was selected, and below it a text box held DNS entries that I have no recollection of setting; one of the FQDNs was the internal domain for one of my old jobs. WTF?

So then I returned that setting back to the default, undid the router DNS change, and success!

Though… there’s something else: I’ve had Mac, Linux, Android, iOS, and Windows devices using this network for years and none have hit this problem. Just Powershell. Curious, don’t you think?

Finally, if you’re looking for more of these troubleshooting stories, here’s one from a while back on The Curious Case of the Slow Jenkins Job.

Next post: AWS Lambda: Using SAM Local to test Lambda functions locally on Windows