This post describes how I built a Docker container that runs an Express server, which serves an AngularJS app written in TypeScript, with live reloading, remote Chrome debugging, source maps served from a different directory, and VS Code breakpoints.
The updates to this application now enable any developer to spin up a development Docker container quickly. Once running, it is reasonably straightforward to edit and debug the application with Chrome and Visual Studio Code, with the code executing inside of the container.
Problem
Given an AngularJS web app served by a Node/Express app, provide a Docker solution to simplify both deployment and development.
Requirements
- Ensure the existing Dockerfile will work to deploy into the corporate container host. Modify it if necessary.
- Without affecting the production Dockerfile, provide a way for new developers to get their environment up and running quickly.
- (Optional) Provide a way for live editing of the running web app inside of the container.
- (Optional) Provide a way to connect Chrome Dev Tools to the built code.
- (Optional) Provide a way for Dev Tools to access source maps which are located outside of the built assets folder.
- (Optional) Connect VS Code Debugger from outside the container.
Constraints
- Must start with the official Dockerfile provided, which contains the base OS requirements for the particular Node environment and Docker host.
- The app is built with AngularJS 1.4 and may not be upgraded for this project.
- The primary development language for the application is TypeScript.
- The app is built with the Grunt CLI and a pre-tested grunt configuration, which may not be changed.
- The built application artifacts are bundled, minified, and packaged into the ./dist folder.
- Source maps are generated but stored alongside the original TS files under the ./src folder.
- The application uses many deprecated npm and bower packages, and even Node itself is outdated.
Base Dockerfile
The first thing I did was grab the official Dockerfile provided for hosting Node/Express applications for the appropriate version of Node. Unfortunately, this particular app requires an old, unsupported version of Node. That, however, is a problem for another day. The strategies I outline should work regardless of the Node version in use.
FROM node:0.10.38-slim
# Expose PORT
EXPOSE 8626
# Set Current Working Directory
WORKDIR /app
# Set production environment
ENV NODE_ENV=production
## Copy dist folder to application
COPY dist/package.json /app/
RUN npm install
COPY dist /app
The base image is a prebuilt image internal to the company, but it is based on Debian “Buster.”
The hope was that, at the very least, the application could be built and installed with the official Dockerfile with no, or only minor, modifications.
Upon the initial docker build command, the process hung at the npm install phase. Something the application needs required a build step that uses Python. This was something other teams here had encountered and had a known solution for. I simply needed to add a line to the Dockerfile to install Python, which I did immediately after setting the current working directory.
# Install Python and supporting build tools
RUN apt-get update && apt-get install -y \
python \
&& rm -rf /var/lib/apt/lists/*
# rm -rf ^ removes cache, reducing image size
At that point, my npm install got farther, but still failed due to an npm package that was being referenced with the git+ssh protocol. Our official Docker image intentionally omits git. Instead of installing git into the image, I spent a few hours investigating why that particular package was being included that way, and what it was ultimately doing.
As it turned out, the package in question was only used to help manage bower package versions, and was only executed prior to the bower install. As our official CICD system has its own way of managing that, it became clear that this particular npm package is only needed during local development. I moved the npm package to the devDependencies section of package.json, and provided some new scripts that would only be run outside the container:
"postinstall": "npm run bower",
"prebower": "node ./scripts/npm/preinstall.js",
"bower": "bower install",
"postbower": "node ./scripts/npm/postinstall.js",
To prevent the git+ssh package from being installed as part of the docker build, I modified the final npm install command in the Dockerfile to include the --production flag, thus bypassing the devDependencies entirely.
Now, as soon as the npm install command completes, npm will execute the three bower scripts in order. Our CICD system runs npm install outside of a container and should have no problem running these commands.
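For illustration, the relevant parts of package.json end up looking roughly like this after the move; the package name and git URL here are placeholders, not the real dependency:
{
  "devDependencies": {
    "bower-version-helper": "git+ssh://git@git.example.com/tools/bower-version-helper.git#1.0.0"
  },
  "scripts": {
    "postinstall": "npm run bower",
    "prebower": "node ./scripts/npm/preinstall.js",
    "bower": "bower install",
    "postbower": "node ./scripts/npm/postinstall.js"
  }
}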
The final Dockerfile now looks like this.
FROM node:0.10.38-slim
# Expose PORT
EXPOSE 8626
# Set Current Working Directory
WORKDIR /app
# Install Python and supporting build tools
RUN apt-get update && apt-get install -y \
python \
&& rm -rf /var/lib/apt/lists/*
# rm -rf ^ removes cache, reducing image size
# Set production environment
ENV NODE_ENV=production
## Copy dist folder to application
COPY dist/package.json /app/
RUN npm install --production
COPY dist /app
The --production flag to npm install skips installing anything listed under devDependencies, so my git+ssh issue never arises.
The only potential downside I see is with the three new bower scripts. The docker build calls npm install as one of its final steps, but the older version of npm it is running does not execute the postinstall command, so it will not run the bower scripts. This works because of a required build step that has to occur outside of docker build, namely a grunt build that packages the entire application into the ./dist folder. That includes all of the bower dependencies, so a separate bower install is simply not needed. In future versions where Node is upgraded, it might be necessary to run npm install --production completely outside of the Dockerfile and simply copy the node_modules folder.
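If that day comes, the change would look roughly like this sketch (untested), with the npm install step disappearing from the image entirely:
## Copy dist folder and a pre-installed node_modules to application
COPY dist /app
COPY node_modules /app/node_modules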
As it is, everything should work for now.
Local Development
The next thing I wanted to do was to have a local Docker image that could be used by new developers. Ideally, I would use the Dockerfile I already had working. However, it was not that simple. The first thing I realized is that none of the grunt commands would work inside the container, because only the built artifacts are copied into it. There is also no grunt-cli installed in the container.
As I mentioned above, we have a complete system of grunt commands. The commands are used to serve the app under development with live reloading, and also to build and package the final application for deployment.
After a few failed experiments with the existing Dockerfile, I decided the most prudent course of action would be to create a parallel development Dockerfile that was mostly the same, but that I could modify to suit local development needs. This is the file I came up with, which I called Dockerfile.dev.
FROM node:0.10.38-slim
# Expose PORT
EXPOSE 8627
# Set Current Working Directory
WORKDIR /app
# Set development environment
ENV NODE_ENV=development
The two files are otherwise identical, except that this one exposes port 8627 instead of 8626, sets NODE_ENV to development instead of production, and completely leaves out the Python installation and the production version’s npm install and COPY commands at the end. Instead, I handle the source code, npm, bower, and grunt a little differently.
Specifically, I added a series of scripts to package.json to handle most of the complexity. This enables a new developer to clone the repo, perform an npm install, and then run a few more npm scripts to configure the Docker environment.
Local Docker Build
First, I needed a simple way to build a docker image from the new Dockerfile.dev file.
"docker:build": "docker build -f Dockerfile.dev -t my-cool-app-ui:latest .",
Now, executing npm run docker:build will create a new image based on the development Dockerfile.
Local Docker Container Execution
Next, I wanted to provide a similar experience for managing the local docker container.
"serve": "grunt serve --no-browser-sync",
"docker:start": "docker run -d --rm --privileged --name my-cool-app-ui --env-file .env -p 8627:8627 -v $PWD:/app my-cool-app-ui npm run serve",
"docker:stop": "docker kill my-cool-app-ui",
Because I did not want to install any npm packages globally into the container, I added grunt-cli to the application’s devDependencies. That enabled me to provide a serve script in package.json that could launch grunt inside the container.
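In package.json, that amounts to one more devDependencies entry; the version shown here is illustrative, and would need to be one that still runs on this old Node:
"devDependencies": {
  "grunt-cli": "~0.1.13"
}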
The next bit of magic is inside the docker:start script. It is a simple docker run command. The parameters chosen are important.
Detached/Daemon Mode
The -d parameter causes the container to be started in a detached state, meaning it is not interactive. It will start and run in the background.
Remove on Exit
The --rm parameter is for housekeeping purposes. When the container exits, it is removed immediately, freeing up any resources it was consuming.
Privileged Mode
The --privileged parameter provides elevated access to the container. This was recommended to me by a colleague, but I am not convinced it is necessary. I may remove it and see whether it still works.
Container Name
The --name parameter simply provides a friendly name for the container, rather than one randomly generated by Docker.
Environment Variables
The --env-file .env parameter indicates that the container should be started with the environment variables specified in a local file, .env. In our applications, the .env file contains environment-specific configuration and secrets that are needed at runtime but are not stored with the application or in source control. These can include database connection strings, authentication tokens, etc. Keeping them in a local file that is excluded from source control keeps sensitive information out of publicly visible files (like package.json or the Dockerfile).
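For reference, the format Docker expects for --env-file is simply one KEY=value pair per line (lines starting with # are ignored). These entries are invented for illustration:
# .env -- excluded from source control; values are placeholders
NODE_ENV=development
API_BASE_URL=http://localhost:3000/api
SESSION_SECRET=replace-me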
Port Mapping
When grunt serve starts the Node/Express app, it is configured to listen on TCP port 8627. For some reason, the production version of the app is configured to listen on port 8626. The parameter -p 8627:8627 maps port 8627 locally to the container’s port 8627. I will revisit this inconsistency at some point and clarify which port it should be.
Local Volume Mapping
The next parameter, -v $PWD:/app, maps the current directory to a directory inside the container called /app. Recall that the Dockerfile indicates that the working directory for the container is /app.
For me, this was the real breakthrough. Rather than fighting with grunt, npm, and bower inside the container, I simply map the entire project folder into the container’s /app folder.
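This is also what makes live reloading possible: grunt’s watch task running inside the container sees every edit made on the host, because they are literally the same files. As a purely illustrative sketch (the project’s real, pre-tested Gruntfile may look quite different), the relevant piece would be something like:
// Gruntfile.js -- illustrative only; not the project's actual configuration
module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      src: {
        files: ['src/**/*.ts'],        // host edits arrive via the -v $PWD:/app mount
        tasks: ['build'],              // recompile via the project's build task
        options: { livereload: true }  // then trigger a browser reload
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-watch');
  // "build" stands in for whatever task chain the real Gruntfile defines.
  grunt.registerTask('build', function () {
    grunt.log.writeln('compile TypeScript, bundle, copy to dist...');
  });
};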
Docker Image to Use
The parameter my-cool-app-ui is the name of the Docker image to execute. This is the second time this identifier appears in the command, which is unfortunate naming on my part: I inadvertently ended up naming the image and the container identically. I intend to fix that in the future.
Command to Launch
The final three parameters of the docker run command, npm run serve, represent the command that Docker should execute inside the container. In this case, I am executing the serve script inside of package.json.
Stopping the Container
I also provided another script to stop the container, which simply executes the command docker kill my-cool-app-ui. This stops the detached container named my-cool-app-ui.
Interactive Alternative
Finally, I created an alternative version of the docker run command that works interactively.
"docker:run": "docker run -it --rm --privileged --name my-cool-app-ui --env-file .env -p 8626:8627 -v $PWD/:/app my-cool-app-ui /bin/bash",
It is essentially the same command as the previous one, with two exceptions:
- It uses the -it parameter to create an interactive session instead of a detached session.
- It executes /bin/bash to create a shell instead of simply running grunt serve.
If you want to launch the container interactively, simply execute npm run docker:run. To stop and remove the container, you just need to log out of it.
This enabled me to experiment with various grunt commands and view web logs while I was still trying to figure things out. I felt it was useful enough to keep.
Debugging the TypeScript
The next (optional) requirement was the ability to use the Chrome debugger to set breakpoints in the application’s TypeScript source code. The problem was that the TypeScript files and generated source maps are stored in completely different locations than the files generated by the grunt serve command.
The solution is far simpler than the amount of research and experimentation that went into it would suggest. It ultimately came down to including three lines in the project’s tsconfig.json file.
"sourceMap": true,
"mapRoot": "/src/client/my-cool-app-ui",
"sourceRoot": "/src/client/my-cool-app-ui"
These three lines need to be included in the compilerOptions section of the file. The first line was already there, indicating that source maps should be created.
The mapRoot and sourceRoot lines took some time to get right. These values end up being prepended to the name of the source map file, which is specified at the end of each generated .js file.
When the browser downloads a JavaScript file, it checks the end of the file for the location of any source map. It then makes a separate HTTP request for the file found there. This works transparently when the source map file is located right next to the JavaScript file. In my case, however, that was not true. I discovered this by watching all of the HTTP 404 errors for each attempted request.
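Concretely, the pointer the browser reads is a comment at the very bottom of each generated file. Before and after setting mapRoot, it would look something like this (the file name is illustrative):
// Before: the map is expected to live right next to the .js file
//# sourceMappingURL=coolthing.js.map
// After: mapRoot is prepended, pointing the browser at the src tree
//# sourceMappingURL=/src/client/my-cool-app-ui/feature/coolthing.js.map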
My solution was to prepend a different path to each source map, which corresponded to the source folder. With the mapRoot set appropriately, instead of an HTTP request that looks something like this…
http://localhost:8627/my-cool-app-ui/feature/coolthing.js.map
we would get this:
http://localhost:8627/src/client/my-cool-app-ui/feature/coolthing.js.map
That still resulted in a 404, even though the folder was now correct.
The solution to this was simple, however. I modified the Express app, only in development mode, to serve static files from the src/ folder. Voila! The source code and source map requests suddenly worked.
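The change itself is tiny. Here is a minimal sketch of the idea; the file name and folder layout are assumptions on my part, not the app’s actual server code:
// server.js -- a minimal sketch, not the application's real entry point
var express = require('express');
var path = require('path');

var app = express();

// Development only: let requests for /src/... (the TypeScript sources and
// their source maps) resolve against the project's src folder.
if (process.env.NODE_ENV === 'development') {
  app.use('/src', express.static(path.join(__dirname, 'src')));
}

// The built application is always served from dist.
app.use(express.static(path.join(__dirname, 'dist')));

app.listen(process.env.PORT || 8627);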
At that point, the Chrome debugger was fully functional, including the ability to set breakpoints directly inside the TypeScript files.
VS Code Debugging
The final requirement was to make all of the debugging features work from inside of VS Code, which already supports Chrome debugging through its Debugger for Chrome Extension.
Once this extension was installed, I only had to add the appropriate configuration inside of .vscode/launch.json. This is the configuration that ultimately worked, which I believe is the default.
{
  "name": "Launch Chrome",
  "request": "launch",
  "type": "pwa-chrome",
  "url": "http://localhost:3000/individual-entry-codes",
  "sourceMaps": true,
  "trace": false
}
The only things I changed were the url, pointing it to a page I wanted to load, and the trace property. I set trace to true during my experiments, which provides more verbose logging in VS Code’s Debug Console, and then set it back to false for ordinary use.
Docker Compose
This document describes my adventures in getting the UI project up and running in Docker. However, there is also the API project to consider. In production, the two execute in separate Docker containers primarily so they can be scaled independently from one another. A load balancer sits in front of them to manage the traffic and to provide a single base URL, enabling them to share cookies. The load balancer acts as a proxy, routing calls to the API based on the URL route. The next step in my journey is to replicate that experience locally.
Dockerizing the API
As it turns out, executing the API project in Docker worked exactly the same way as I described above for the UI project, proving that the process is repeatable.
Once I had both the UI and API projects running separately inside of their own Docker containers, I needed a way to route traffic between them.
HA Proxy
Enter HA Proxy. This Docker image was recommended to me as the one to use, though I have also used Nginx, and it works just fine. They both serve the same function in this case. I will detail HA Proxy, as that is the one I used for this project.
What is It?
From the explanation in the Docker registry,
HAProxy is a free, open source high availability solution, providing load balancing and proxying for TCP and HTTP-based applications by spreading requests across multiple servers.
That is exactly what I needed.
Configuration
The hardest part about any of these tasks is figuring out the appropriate configuration settings. As I said, someone else gave me this, so it only took some minor tweaking.
The Dockerfile itself was only two lines, and also comes straight from DockerHub:
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Nothing exciting to see there. It simply grabs a specific (albeit older) version of HA Proxy and copies a config file.
This is that config file:
global
    daemon                               # Run in the background
defaults                                 # These apply to everything and were provided to me
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
frontend localnodes                      # Public-facing config
    bind *:3000                          # Listen on port 3000
    mode http
    acl is_api_url path_beg -i /api/     # Check to see if path starts with /api/
    use_backend api_server if is_api_url # If so, forward to api_server
    default_backend static_server        # Otherwise, default to static_server (UI)
backend static_server                    # Static (UI) Server Config
    mode http
    balance roundrobin
    option forwardfor                    # Forward traffic to...
    server web01 ui:8627                 # A server called ui on port 8627
backend api_server                       # API Server Config
    mode http
    balance roundrobin
    option forwardfor                    # Forward traffic to...
    server web01 api:8181                # A server called api on port 8181
The only edits I made to this file were on the two server lines. They were set to localhost, which worked when I ran the two Docker containers locally, since they exposed their respective ports. However, when I decided to try using Docker Compose, I intentionally configured its network to hide those ports.
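For contrast, the pre-compose versions of those two lines pointed at the ports published on the Docker host, something like this:
server web01 localhost:8627
server web01 localhost:8181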
Docker-Compose Configuration
Now I will show the docker-compose.yaml file I came up with. The sole consideration was to have Docker automatically build (if necessary) and launch all three containers for me.
version: '2'
services:
  api:                                 # This is the config for the API
    image: my-cool-app-api:latest      # Name of the docker image to run
    build:                             # Or build if it isn't available
      context: ../my-cool-app-api/     # Look here for its dockerfile
      dockerfile: Dockerfile.dev       # Using this file as the dockerfile
    env_file:
      - ../my-cool-app-api/.env        # Get environment vars from this file
    volumes:
      - ../my-cool-app-api/:/app       # Map the entire folder to /app
    restart: unless-stopped            # Restart the container unless I stop it
    command: npm run serve             # Run this command at container startup
    expose:
      - 8181                           # Expose port 8181
    networks:
      - my-network                     # Expose it to this named network
  ui:                                  # This is the config for the UI
    image: my-cool-app-ui:latest       # Name of the docker image to run
    build:                             # Or build if it isn't available
      context: .                       # Look here for its dockerfile
      dockerfile: Dockerfile.dev       # Using this file as the dockerfile
    env_file:
      - .env                           # Get environment vars from this file
    volumes:
      - .:/app                         # Map the entire folder to /app
    restart: unless-stopped            # Restart the container unless I stop it
    command: npm run serve             # Run this command at container startup
    expose:
      - 8627                           # Expose port 8627
    networks:
      - my-network                     # Expose it to this named network
  haproxy:                             # This is the config for the proxy
    depends_on:                        # It depends on the following services
      - api                            # api, as defined above
      - ui                             # ui, as defined above
    image: my-haproxy:latest           # Name of the docker image to run
    build:                             # Or build if it isn't available
      context: .                       # Look here for its dockerfile
      dockerfile: Dockerfile-haproxy   # Using this file as the dockerfile
    ports:
      - 3000:3000                      # Make port 3000 available to the host
    volumes:                           # Map my custom config file at runtime
      - ./haproxy-compose.cfg:/usr/local/etc/haproxy/haproxy.cfg
    restart: unless-stopped            # Restart the container unless I stop it
    networks:
      - my-network                     # Expose it to this named network
networks:
  my-network:                          # Create a default named network for containers
A few things are worth noting.
The names “api” and “ui” appearing in both the docker-compose.yaml file and the haproxy.cfg file have to match. Docker effectively performs runtime DNS resolution, and those names end up becoming the hostnames inside the Docker-managed network.
The expose lines in docker-compose.yaml expose those ports to the Docker-managed network, but do not expose those ports publicly to the Docker host. The ports entry in the haproxy configuration does expose port 3000 to the host.
Tying it All Together
Now that I have a working docker-compose.yaml file, the only thing left is to start it all up with the command docker-compose up. I like the fact that all log files are sent to stdout and that I can watch the progress. Once everything is up and running, I can launch a browser to http://localhost:3000 to test my application. And yes, the debugging strategies detailed above for VS Code still work.
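For completeness, the day-to-day loop is just the standard Docker Compose commands:
docker-compose up --build    # rebuild any stale images and start all three containers
docker-compose up -d         # or start everything detached, without the combined log stream
docker-compose down          # stop and remove the containers and the network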
The only significant piece missing is the ability to debug my API layer, as those specifics are dependent on the backend technology. The strategy will be similar, whether the API is written in Node, C#, Java, PHP, etc. You need to ensure that the appropriate debugging ports are exposed to the host, and then connect your debugger through that port as usual.
Final Notes
I spent a lot of time reviewing the file diffs on the pull request I opened to implement all of these changes. In doing so, I discovered a few discrepancies, missing files, and even one unnecessary file. Typing this lengthy explanation helped me to catch and correct those oversights.
And that is everything. I managed to cover every requirement and then some. The result is better and more functional than I had hoped. My only remaining hope is that my sharing this experience will help someone else save some time. If you happen to find any tweaks that make it even better, please let me know.
Do you have any comments, questions, or just want to see more? Please follow me on Twitter and let me know.
Did I make any mistakes in this post? Feel free to suggest an edit.