a guest · Oct 14th, 2019
# Question on how to best build Docker images

## Initial situation

In most of my projects I need to run a few steps after I check out the code from version control and before I can actually use (or work on) it. Examples include

- running `composer`, `npm`, `yarn`, ... to fetch dependencies
- running `grunt`, `gulp` or similar front-end build pipelines
- running some legacy code generation tools 🙀

... and the list goes on.

## Bonus points: SSH keys

To make things more interesting, I also need to access private repositories when fetching dependencies. That is, a suitable SSH key must be available when running any of the dependency management tools.

In my work environment this key is usually loaded into the `ssh-agent`, but it is not available as a plain file that I could copy into intermediate build stages.

However, mounting the `ssh-agent` socket into a _running_ container is pretty straightforward. _Building_ images is a different story, as bind mounts are not available at build time. Hacks using `socat` are rather ugly.

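The run-time mount I mean looks roughly like this; the image name and the `/ssh-agent` path are just placeholder examples:

```shell
# Sketch: forward the host's ssh-agent into a running container.
# node:lts and /ssh-agent are arbitrary choices for illustration.
docker run --rm -it \
  -v "$SSH_AUTH_SOCK":/ssh-agent \
  -e SSH_AUTH_SOCK=/ssh-agent \
  node:lts \
  npm install  # can now reach private Git repos through the agent
```

(As an aside: newer Docker versions can forward the agent at build time via BuildKit's `RUN --mount=type=ssh` together with `docker build --ssh default`, but that requires BuildKit to be available everywhere the image is built.)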
## What about dev-prod parity?

The above steps need to happen both for the image that will finally go to production and for local development. The front-end build pipeline will, in fact, be run over and over again during development.

I *think* it would be smart if I did not have to make sure that I am using the same tools (and versions of them) on my local machine for local development and inside a multi-stage `Dockerfile` that performs all the steps during image builds.

Wouldn't it be more _Docker-style_ to just _run_ a container for `gulp`, `npm` etc.? I could probably get away with the default base images in that case.

One way of doing this is to *run* the appropriate containers with my local workdir mounted into them during development, and to base multi-stage build steps `FROM` them for the image build. In either case the two need to match each other, but one is a `docker run -v ...`, the other a `RUN` inside the `Dockerfile`.

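To make that pairing concrete, here is a minimal multi-stage sketch; the image tags, paths and the `gulp` step are assumptions for illustration, not a recommendation:

```Dockerfile
# Build stage: run the front-end pipeline in a stock Node image
FROM node:lts AS assets
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx gulp build        # hypothetical build step

# Final stage: plain web server image containing only the build output
FROM nginx:stable
COPY --from=assets /app/dist /usr/share/nginx/html
```

The development-time counterpart of the `assets` stage would then be something like `docker run --rm -v "$PWD":/app -w /app node:lts npx gulp build` — the same image and tool, but invoked from my shell, and the two invocations have to be kept in sync by hand.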
_Alternatively:_ Is it really a good approach to have an additional `Dockerfile.dev` that installs all of those tools, effectively working as a "Vagrant box disguised as a container"?

## Using the local workdir as a staging area?

What if I just use my local workdir (or a workdir on a CI server, FWIW) as a staging area?

1. Check out the code
2. Run all necessary tools, either as local installs or as Docker containers, one at a time. Mount the workdir and the SSH agent socket into each of them.
3. Vendors, build artifacts, ... end up in my local directory
4. Build the final image by copying the workdir into a standard Apache/nginx/PHP/... base image
5. For development: run that image, additionally mounting the workdir into it
6. For development: run (2) again as necessary

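Spelled out as shell commands, that workflow might look roughly like this; the repository URL, image names, ports and paths are placeholders:

```shell
# (1) Check out the code
git clone git@example.com:me/project.git && cd project

# (2)+(3) Run tools in throwaway containers; artifacts land in the workdir.
# The official composer image runs composer as its entrypoint.
docker run --rm \
  -v "$PWD":/app -w /app \
  -v "$SSH_AUTH_SOCK":/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent \
  composer install

# (4) Build the final image; its Dockerfile mostly just COPYs the workdir
docker build -t myapp .

# (5) Development: run the image, mounting the workdir over the copied files
docker run --rm -p 8080:80 -v "$PWD":/var/www/html myapp
```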
Possible issues and things people have mentioned about this:

- The build context that is finally sent to Docker is huge when it contains the vendors
- "Don't mount your workdir into containers to get results out, that's an anti-pattern."
- "The build process should be described entirely in the Dockerfile, not run in your local shell or depend on your workdir."
- "Building the Docker image for production and for development are different things anyway. Use different Dockerfiles, or even stick with Vagrant for development."

Have you encountered similar issues? Are there alternative techniques that work well for you?

Please share your comments 👇🏻. 🙏🏻