Dockerfile: set an environment variable from command output

In my case, I'm trying to run a command on the Docker build host (not within the Docker container). The suggested answers work, but they're annoyingly tedious.

Instructions in the Dockerfile run inside a container, not on the bare host, so they don't have access to anything outside that container (other than files that were passed as build context, or mounts in the experimental `RUN --mount` case). Correct; a container should have no access to the host unless explicitly given.

Talking about security: since a user can also write `ENV LET_ME_PWN_YOU $(cat ~/.ssh/id_rsa)` or `RUN cat ~/.ssh/id_rsa`, I don't think this is a serious problem. — We can't do that, as it would be a huge security issue. `RUN cat ~/.ssh/id_rsa` runs inside the container; that's fine, because `~/.ssh/id_rsa` would then have to be explicitly added to the build context. It should not, however, be able to access `~/.ssh/id_rsa` on your host.

On the Hadoop use case: we have to copy and paste this long classpath string whenever we use a different version of Hadoop. Actually, looking at the TensorFlow example, `CLASSPATH` is only needed at runtime, not inside the Dockerfile (which is what this issue is about). If it's not needed in the Dockerfile, something like this would probably do the trick: add an entrypoint script that sets the environment variable (the output of `hadoop classpath --glob`) before starting the main process.

I have a Java app that uses Maven. You could use the Maven assembly plugin to generate a jar with a fixed name, and thus not need to know the version number.

@thaJeztah, pardon me for belaboring this thread, but I'm curious about the reasoning behind this design decision. Are there other reasons? Anyway, just wanted to share where my head is at, as I'm running into this issue myself. Pardon the RTFM noise (hopefully repeat askers can be pointed here).
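The entrypoint-script workaround mentioned above can be sketched as plain shell. This is a minimal sketch, not the exact script from the thread: the real command would be `${HADOOP_HOME}/bin/hadoop classpath --glob`, and a `printf` stands in for it here so the sketch runs anywhere.

```shell
# Write an entrypoint that computes CLASSPATH at container start,
# so the Dockerfile never needs the expanded string.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
# In the real image this would be: "$(${HADOOP_HOME}/bin/hadoop classpath --glob)"
CLASSPATH="$(printf 'result of hadoop classpath --glob')"
export CLASSPATH
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

# The env var now exists for whatever process the entrypoint launches:
/tmp/entrypoint.sh sh -c 'echo "CLASSPATH IS: $CLASSPATH"'
```

In the Dockerfile, such a script would be added with `COPY` or `RUN` and wired up with `ENTRYPOINT ["/entrypoint.sh"]` ahead of the original CMD.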
`docker build` gets a context and parameters passed; it's up to the person starting the build to specify those options, and those are the files that are sent to the daemon where the build is executed, using only the context (files/folders) it was given access to.

To my original point, and @tobegit3hub's point: within an organization, I already trust a Dockerfile that I have created, that coworkers have created, or that is committed to my organization's private source-code repository. It's easy to assume that because I enter the command `docker build`, that process knows about things both inside and outside the under-construction container; on first glance that makes sense, since it's how all other software "build" processes work (hence this issue). My guess is that this design decision prevents public repositories from hosting blatantly malicious Docker images (e.g. avoiding the kinds of issues the Apple App Store and Google Play Store face). AFAICT, this is the only solid reason for the design decision.

It seems very intuitive to me that the ENV build instruction would support "build-time command result expansion" (or however you'd like to phrase it). ENV appears to be usable during the build, but mainly exists to establish environment variables that are available in the container when it is run; perhaps something specific to build would make sense. The concept of ARG, meanwhile, exists so that values can be specified at build time by the invoker of the build.

What I would rather be able to do is have `$(hostname)` evaluated on the build host, but `ENV DOCKERBUILDHOST $(hostname)` merely added an environment variable `DOCKERBUILDHOST` with the literal string value `$(hostname)`. So I'm left with the only option of doing `RUN /bin/sh -l -c 'echo "${MY_ENV_VAR}"'` if I want to use environment variables containing dynamically generated data.
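Since ENV cannot capture command output, the value can instead be computed on the host and passed in through ARG, as suggested elsewhere in this thread. A minimal sketch (the `BUILD_HOST` name is illustrative):

```dockerfile
FROM alpine
# Supplied by the invoker, e.g.: docker build --build-arg BUILD_HOST="$(hostname)" .
ARG BUILD_HOST=unknown
# Promote the build arg to a runtime env var if it is also needed at run time.
ENV BUILD_HOST=${BUILD_HOST}
RUN echo "built on ${BUILD_HOST}"
```

The trade-off is that the command runs on the host, under the invoker's control, rather than inside the build.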
For the entrypoint approach, the script can be generated from within the Dockerfile, e.g. `RUN echo -e '#!/bin/sh\nexec env CLASSPATH="`printf \"result of hadoop classpath --glob\"`" "$@"' > /entrypoint.sh && chmod +x /entrypoint.sh` (using `printf` for illustration; the real command would be `hadoop classpath --glob`), and checked with something like `RUN echo -e "import os\nprint('CLASSPATH IS: ' + os.environ.get('CLASSPATH'))\n" > /example.py`. A related use case is making a Dockerfile compatible with any Python version, e.g. detecting it with a `python3 -c 'from sys import version_info as i; ...'` one-liner and echoing "Detected Python $(eval ${GET_PYTHON_VERSION})".

On publishing: you can push anything you want as a public image on Docker Hub (images only have to be approved if you want to publish as Docker Certified, or as an official image).

Supporting `ENV foo=$(command output)` is tricky, because we need to get something out of the container to the builder (technically possible, but...), and also: how should we handle the command run inside `$()`? Should we care about its exit code or not? Should we look at stdout, stderr, or both combined?

Why `--build-arg` does not always work: you can't use it in a git-URL build (say, in Compose). In my particular case, the env var was needed to build a filename, so I did something along the lines of `RUN cat $(echo $FILENAME | awk '{ print tolower($0) }') > outputfile`, and the `outputfile.yaml` is then available in stage 2.

Running with Alpine, I have a first RUN that generates some data (with a bit of RNG). In my case I have a multi-stage Docker build where a file needs to be copied from a previous stage by name, but that name is not constant. To be clear, in my case I want `$(hostname)` to be run on the Docker build host, not within the Docker container image.
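The awk trick used in that filename workaround can be sketched as a plain shell step (the `FILENAME` value here is illustrative, not from the thread):

```shell
# Derive a lowercase filename from a mixed-case variable inside a
# single RUN-style shell step, as in the workaround above.
FILENAME="MyConfig.YAML"
lower="$(echo "$FILENAME" | awk '{ print tolower($0) }')"
echo "$lower"
```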
Those are the files that are sent to the daemon where the build is executed, using the context (files/folders) it was given access to. Breaking that contract would mean that containers are no longer "containing", and would thus defeat the whole purpose of running things in a container. I understand, but the Dockerfile itself should not be able to get access to the host; the issue is that those commands should not run on the host but inside the container.

The only option I can think of would be to wrap your CMD with a script that sets up these dynamic env vars for you, then runs the original CMD (foundations for this are being worked on in BuildKit). This proposal might also help if you need to give the build access to different locations: #37129.

Thanks @thaJeztah, yeah that works. Thanks for the detailed reply. @jpierson-at-riis agreed, my case was similar to yours. I need this data in the following RUN commands and later as part of my CMD, so to run the app in the container I set a CMD where I expect the variable `$APP_VERSION` to hold the value `1.1.0`. Plus, I have to add some wrapper script or glue code or whatever to make this work how I want; these workarounds make me grumpy. Are there other reasons? Stack Overflow has a question about solving this, and currently there are 40 upvotes across the two Stack Overflow questions (another similar Stack Overflow question here).

It seems to me that there may be room for another type of instruction in the Dockerfile, specific to declaring environment variables that are build-scoped only; these would not be available at runtime in the container, but would be available to stages of the build. Thanks again!
The Docker image needs `CLASSPATH` set up before running, which could be set by `CLASSPATH=$(${HADOOP_HOME}/bin/hadoop classpath --glob)` instead of the long expanded string. We really need this to simplify the content of the Dockerfile. I understand that constraining actions on the Docker build host provides some guarantees when building Dockerfiles, but "blarg!"

On the question of how the command inside `$()` should be handled (should we care about its exit code, and should we look at stdout, stderr, or both combined?): as a long-time Linux user, I'd naturally assume it would follow typical Unix shell behavior.

That's the design, and that won't change. There are some options, though: you can implement a custom frontend, which is what the experimental Dockerfile syntax does.

Agreed that being able to capture command output into an ENV directive would be hugely useful. Without it, every time I update the package I download, I have to re-run my build a few times until all the version numbers and subdirectory names correctly match the downloaded package. For this particular use case, if it is a Linux-based container OS, you can derive the name in shell, where the file you're downloading is something like `my-downloaded-thing-1.2.3.4_arm64.xyz` and you want to execute a command like `some_command_here my-downloaded-thing-1.2.3.4_arm64.xyz`. It also looks like I can set a value in an earlier step and then pass that in. Because RUN uses `/bin/sh -c`, I cannot add that data to the usual environment-variable files (e.g. `/etc/profile`), but I can't use ENV either.
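For that download-and-unpack case, one sketch that avoids hard-coding the version is stripping the known prefix and suffix with POSIX parameter expansion (the filename is the one quoted above; the stripping patterns are assumptions about its layout):

```shell
# Recover the version embedded in a downloaded package name instead of
# hard-coding it, using POSIX prefix/suffix stripping.
pkg="my-downloaded-thing-1.2.3.4_arm64.xyz"
version="${pkg#my-downloaded-thing-}"   # drop the leading package name
version="${version%_arm64.xyz}"         # drop the trailing arch/extension
echo "$version"
```

Inside a Dockerfile this would run in a single RUN step, with the result written to a file if later steps need it.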
As part of my build I download a software package and unpack it. It has a version specified in a .txt file (actually a small collection of them); it would be very useful for me to have an ENV var with that version. I can extract the name of the file in a previous stage, but there is no convenient way to set an environment variable through ENV in the Dockerfile to expose it to subsequent stages. In the CI tool I'm using, I can't easily pass a build arg as the output of a command, so I was hoping for something contained within the Dockerfile itself. Would an `ONBUILD RUN` work for you in the first stage?

@thaJeztah's suggestion is not good enough, because in our scenario the environment is only known inside the Docker image. Just check out the official TensorFlow Dockerfile at https://github.com/tensorflow/ecosystem/blob/master/docker/Dockerfile.hdfs#L24. Thanks. Other use cases raised in this thread: making a Dockerfile compatible with any Python version; an app automatically built by Travis and deployed; an image automatically built in Docker Cloud and released.

Is this because Docker Inc. wants to have some confidence in images hosted on its public Docker Hub (and other potential public hubs)? I can `docker pull` from any public repository and then `docker run` without worrying too much about malicious code running directly on my host. But really, all build work is done from within the under-construction container (controlled by the resident Docker daemon).

I found this issue looking for a workaround for this use case; maybe it's useful to others as well. Just want to chime in here: this would be a very useful feature.
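The file-based workaround for multi-stage builds might be sketched like this (names and paths are illustrative; since COPY cannot expand command substitutions, the file itself is carried across stages):

```dockerfile
FROM alpine AS build
# Stand-in for a real discovery step whose output is only known in the image:
RUN echo "discovered-name-1.2.3" > /name.txt
# Later RUN steps re-read the file instead of relying on ENV:
RUN echo "using $(cat /name.txt)"

FROM alpine
COPY --from=build /name.txt /name.txt
```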
Something like `BUILDENV` perhaps could be a special case, unlike the other instructions, in that it allows new syntax for handling dynamic assignment from command output. Why do I need such a thing? In the Maven file pom.xml you set the version of the artifact; then, in the directory containing pom.xml, you can execute a command that returns the artifact version from the pom.xml file, so as a result of that command you get `1.1.0`. When I prepare a new release, I change the version in pom.xml, make the right tag in git, and then push to GitHub, and the magic starts to happen. There is also a similar request on Stack Overflow.

The workaround of writing the file name to a file and then reading it into a variable as part of a RUN command in a later stage works; however, when using COPY I need an environment variable, as there is no such trick available for COPY. You can, however, pass the information through a `--build-arg` (ARG) when building an image, or copy information from an existing env var on the host. @thaJeztah: the `--build-arg` / ARG / ENV suggestion also works, and it's less ugly than the other workarounds. But the resulting `*.jar` must have the version in its name.

@thaJeztah this would also be helpful when installing some software and setting up its root folder without knowing the specific package version. @loretoparisi: in that case, I would link that varying folder to a fixed location.

A note on the first answer from @duglin: I had to add `export` or write it in one line, or else it doesn't work (at least on Fedora 27). I came across this issue because, while adding a line to a Dockerfile, I naturally wrote an ENV line using `$(...)` command substitution, which made sense to me.
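That Maven release flow can be wired together outside the Dockerfile: extract the version from pom.xml on the build host and pass it in as a build arg. A sketch, assuming Maven is available on the host and the help plugin supports `-DforceStdout` (maven-help-plugin 3.1.0+); the `APP_VERSION` arg name matches the CMD mentioned above:

```shell
APP_VERSION="$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout)"
docker build --build-arg APP_VERSION="$APP_VERSION" .
```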
The core limitation: you can't set an environment variable to the result of a command. It'd be super cool if this feature were added, e.g. for CI/CD of a Maven app.

Unless you're infringing copyrights or breaking the terms of service, you should be fine. @Hronom, one way to fix this would be to generate a jar that doesn't contain the version in its name. If the information you need comes from within the image and not from the host machine, you can write it to a file in one RUN step and read it back in later ones; this is useful if you need to do the same operation multiple times and don't want to repeat the command every time.

@vdemeester I get that it would be tricky to do, but just to make sure the expectation is set correctly: is it a "no, we should not do it", or a "we should do it, but we need to be careful about our approach"? If I may gripe a wee bit: the need to run some command on the build host still exists, so the security vulnerability is just shifted. This is about the design of `docker build` and the context in which Dockerfile instructions run. No, this has nothing to do with Docker Inc.
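One way to generate a jar without the version in its name (so the Dockerfile can COPY a fixed path) is Maven's `finalName` setting; a minimal sketch for pom.xml:

```xml
<!-- Produces target/my-app.jar instead of target/my-app-1.1.0.jar -->
<build>
  <finalName>${project.artifactId}</finalName>
</build>
```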
