# Question on how to best build Docker images

## Initial situation

In most of my projects I need to run a few steps after I check out the code from version control and before I can actually use (or work on) it. Examples include

- running `composer`, `npm`, `yarn`, ... to fetch dependencies
- running `grunt`, `gulp` or similar front-end build pipelines
- running some legacy code generation tools

... and the list goes on.
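For reference, steps like these could be expressed as a multi-stage `Dockerfile` — a minimal sketch; the image tags, paths and the `composer`/`npm` invocations are illustrative placeholders, not from a real project:

```dockerfile
# Stage 1: PHP dependencies
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts

# Stage 2: front-end build (here assumed to be wired up as an npm script)
FROM node:20 AS frontend
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: copy only the build results into the runtime image
FROM php:8.3-apache
COPY . /var/www/html
COPY --from=vendor /app/vendor /var/www/html/vendor
COPY --from=frontend /app/dist /var/www/html/dist
```

This keeps the tools out of the final image, but it is exactly the setup where the tool versions live in the `Dockerfile` rather than on the developer machine.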
## Bonus points: SSH keys

To make things more interesting, I also need to access private repositories when fetching dependencies. That is, a suitable SSH key must be available when running any of the dependency management tools.

In my work environment this key is usually loaded into the `ssh-agent`, but not available as a plain file that I could copy into intermediate build stages.

However, mounting the `ssh-agent` socket into a _running_ container is pretty straightforward. _Building_ images is a different story, as bind mounts are not available at build time. Hacks using `socat` are rather ugly.
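For what it's worth, BuildKit (Docker 18.09+) can forward the `ssh-agent` into individual build steps without `socat` hacks. A sketch, again with placeholder image and command:

```dockerfile
# syntax=docker/dockerfile:1
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
# The agent socket is available only for this RUN instruction;
# no key material ends up in an image layer.
RUN --mount=type=ssh composer install --no-dev
```

The build then needs to be invoked with the agent explicitly forwarded, e.g. `DOCKER_BUILDKIT=1 docker build --ssh default .`. Note that the build container still needs the relevant host keys in `known_hosts` for the clone to succeed.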
## What about dev-prod parity?

The above steps need to happen both for the image that will eventually go to production and for local development. The front-end build pipeline will, in fact, be run over and over again during development.

I *think* it would be smart not to have to keep the tools (and their versions) in sync between my local machine, where they run during development, and a multi-stage `Dockerfile` that performs the same steps during image builds.

Wouldn't it be more _Docker-style_ to just _run_ a container for `gulp`, `npm` etc.? I could probably get away with the default base images in that case.

One way of doing this is to *run* the appropriate containers with my local workdir mounted into them, and to base the multi-stage build steps `FROM` those same images. In either case both need to match each other, but one is a `docker run -v ...` while the other is a `RUN` inside the `Dockerfile`.

_Alternatively:_ Is it really a good approach to have an additional `Dockerfile.dev` that installs all of those tools, effectively working as a "Vagrant box disguised as a container"?
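To make the pairing concrete, a sketch of what "the same image in both places" could look like (image tag and npm script are assumptions):

```shell
# Development: run the front-end build as a throwaway container
# against the local workdir.
docker run --rm -v "$PWD":/app -w /app node:20 npm run build

# Image build: the multi-stage step should use the *same* image
# so the tool versions cannot drift:
#
#   FROM node:20 AS frontend
#   ...
#   RUN npm run build
```

The version pin (`node:20`) then exists in two places, which is exactly the duplication the question is about; one mitigation would be generating or templating both from a single source.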
## Using the local workdir as a staging area?

What if I just use my local workdir (or a workdir on a CI server, FWIW) as a staging area?

1. Check out the code.
2. Run all necessary tools, either as local installs or as Docker containers, one at a time. Mount the workdir plus the SSH agent socket into each of them.
3. Vendors, build artifacts, ... end up in my local directory.
4. Build the final image by copying the workdir into a standard Apache/nginx/PHP/... base image.
5. For development: run that image, additionally mounting the workdir into it.
6. For development: repeat step 2 as necessary.
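The steps above could be sketched as a shell script — image names, paths and ports are placeholders, and mounting `$SSH_AUTH_SOCK` this way works on Linux hosts (Docker Desktop needs its own agent-forwarding mechanism):

```shell
#!/bin/sh
set -e

# (2) run tools as containers, mounting workdir + agent socket
docker run --rm \
  -v "$PWD":/app -w /app \
  -v "$SSH_AUTH_SOCK":/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent \
  composer:2 composer install

docker run --rm -v "$PWD":/app -w /app node:20 sh -c "npm ci && npm run build"

# (3) vendor/ and dist/ now sit in the local workdir

# (4) build the final image from a Dockerfile that essentially just COPYies it
docker build -t acme/app .

# (5) development: run the image with the workdir mounted over the copied code
docker run --rm -v "$PWD":/var/www/html -p 8080:80 acme/app
```

The `Dockerfile` in step 4 stays trivial, but the build is no longer self-contained: it depends on the tools having been run against the workdir first.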
Possible issues and things people have mentioned on this:

- The build context that is finally sent to Docker is huge when it contains vendors.
- "Don't mount your workdir into containers to get results out, that's an anti-pattern."
- "The build process should be described entirely in the Dockerfile, not run in your local shell or depend on your workdir."
- "Building the Docker image for production and for development are different things anyway. Use different Dockerfiles, or even stick with Vagrant for development."
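On the build-context size: the vendors themselves have to stay in the context in this approach, since they are copied into the image, but a `.dockerignore` can at least keep everything else out. A sketch (entries are examples):

```
# .dockerignore — trim the context sent to the daemon
.git
node_modules/.cache
tests
*.log
```

This only softens the problem; the fundamental cost of shipping vendors in the context remains.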
Did you encounter similar issues already? Are there alternative techniques that work well for you?

Please share your comments! 🙏🏻