Add support for service containers (#1949)

* Support services (#42)
  Removed createSimpleContainerName and the AutoRemove flag.
  Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/42
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support services options (#45)
  Reviewed-on: https://gitea.com/gitea/act/pulls/45
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support interpolation for `env` of `services` (#47)
  Reviewed-on: https://gitea.com/gitea/act/pulls/47
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support services `credentials` (#51)
  If a service's image comes from a container registry that requires authentication, `act_runner` needs `credentials` to pull the image, see the [documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idcredentials). Currently, `act_runner` incorrectly uses the `credentials` of `containers` to pull services' images, and the `credentials` of the services themselves are never used; see the related code: 0c1f2edb99/pkg/runner/run_context.go (L228-L269). (A sketch of the `credentials` workflow syntax follows this list.)
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/51
  Reviewed-by: Jason Song <i@wolfogre.com>
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Add ContainerMaxLifetime and ContainerNetworkMode options from: b9c20dcaa4

* Fix container network issue (#56)
  Follow: https://gitea.com/gitea/act_runner/pulls/184
  Close https://gitea.com/gitea/act_runner/issues/177
  - `act` creates new networks only if the value of `NeedCreateNetwork` is true, and removes them at the end of the job. `NeedCreateNetwork` is passed by `act_runner` and is true only if `container.network` in the `act_runner` configuration file is empty.
  - The network that containers connect to is now specified in the `docker create` phase. Otherwise, containers would connect to the default `bridge` network that Docker creates automatically.
  - If the network is a user-defined network (the value of `container.network` is empty or `<custom-network>`; the network created by `act` is also a user-defined network), an alias is also set via `--network-alias`. The alias of a service is `<service-id>`, so a service container can be reached at `<service-id>:<port>` from the steps of the job.
  - `docker network connect` is no longer run after `docker start`:
    - On the one hand, `docker network connect` applies only to user-defined networks; running `docker network connect host <container-name>` returns an error.
    - On the other hand, specifying the network at the `docker create` stage achieves the same effect.
  - Containers and networks are no longer removed before the `docker start` stage, because their names cannot collide.
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/56
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: sillyguodong <gedong_1994@163.com>
  Co-committed-by: sillyguodong <gedong_1994@163.com>

* Check volumes (#60)
  This PR adds a `ValidVolumes` config. Users can specify the volumes (including bind mounts) that may be mounted into containers via this config. Options related to volumes (see the second sketch after this list):
  - [jobs.<job_id>.container.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainervolumes)
  - [jobs.<job_id>.services.<service_id>.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idvolumes)
  In addition, volumes specified by `options` are also checked. Currently, the following default volumes (see a72822b3f8/pkg/runner/run_context.go (L116-L166)) are added to `ValidVolumes`:
  - `act-toolcache`
  - `<container-name>` and `<container-name>-env`
  - `/var/run/docker.sock` (we need to add a new configuration option to control whether the Docker daemon socket can be mounted)
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/60
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Remove ContainerMaxLifetime; fix lint
* Remove unused ValidVolumes
* Remove ConnectToNetwork
* Add docker stubs
* Close docker clients to prevent file descriptor leaks
* Fix the error when removing network in self-hosted mode (#69)
  Fixes https://gitea.com/gitea/act_runner/issues/255
  Reviewed-on: https://gitea.com/gitea/act/pulls/69
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>
* Move service container and network cleanup to rc.cleanUpJobContainer
* Add --network flag; default to host if not using service containers or set explicitly
* Correctly close executor to prevent fd leak
* Revert to tail instead of full path
* Fix network duplication
* Backport networkingConfig for aliases
* Don't hardcode netMode host
* Convert services test to table-driven tests
* Add failing tests for services
* Expose service container ports onto the host
* Set container network mode in artifacts server test to host mode
* Log container network mode when creating/starting a container
* fix: Correctly handle ContainerNetworkMode
* fix: missing service container network
* Always remove service containers
  Although we usually keep containers running if the workflow errored (unless `--rm` is given) to facilitate debugging, and we have a flag (`--reuse`) to always keep containers running to speed up repeated `act` invocations, I believe these should only apply to job containers and not to service containers, because changing the network settings on a service container requires re-creating it anyway.
* Remove networks only if no active endpoints exist
* Ensure job containers are stopped before starting a new job
* fix: go build -tags WITHOUT_DOCKER

---------

Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: ZauberNerd <zaubernerd@zaubernerd.de>
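For reference, the service `credentials` option described in #51 above follows the GitHub workflow syntax linked there. A minimal workflow sketch, where the registry host, image, and secret names are illustrative and not part of this change:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      redis:
        # Hypothetical private image: act should pull it with the
        # service-level credentials, not the job container's credentials.
        image: registry.example.com/redis:latest
        credentials:
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
        ports:
          - 6379:6379
```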
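Likewise, this second sketch illustrates the service `volumes` syntax discussed in #60. Per `GetServiceBindsAndMounts` in the diff below, an entry with no `:` or with an absolute host path becomes a Docker bind, while a `name:path` entry becomes a named-volume mount (the volume name and paths here are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12
        volumes:
          - pg_data:/var/lib/postgresql/data # "name:path" -> named-volume mount
          - /srv/certs:/etc/certs:ro         # absolute host path -> bind
          - /tmp/scratch                     # single path, no ":" -> anonymous bind
```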
@@ -19,6 +19,7 @@ type jobInfo interface {
	result(result string)
}

//nolint:contextcheck,gocyclo
func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executor {
	steps := make([]common.Executor, 0)
	preSteps := make([]common.Executor, 0)
@@ -87,7 +88,7 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo

		postExec := useStepLogger(rc, stepModel, stepStagePost, step.post())
		if postExecutor != nil {
			// run the post exector in reverse order
			// run the post executor in reverse order
			postExecutor = postExec.Finally(postExecutor)
		} else {
			postExecutor = postExec
@@ -101,7 +102,12 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
		// always allow 1 min for stopping and removing the runner, even if we were cancelled
		ctx, cancel := context.WithTimeout(common.WithLogger(context.Background(), common.Logger(ctx)), time.Minute)
		defer cancel()
		err = info.stopContainer()(ctx) //nolint:contextcheck

		logger := common.Logger(ctx)
		logger.Infof("Cleaning up container for job %s", rc.JobName)
		if err = info.stopContainer()(ctx); err != nil {
			logger.Errorf("Error while stop job container: %v", err)
		}
	}
	setJobResult(ctx, info, rc, jobError == nil)
	setJobOutputs(ctx, rc)
@@ -17,12 +17,12 @@ import (
	"runtime"
	"strings"

	"github.com/opencontainers/selinux/go-selinux"

	"github.com/docker/go-connections/nat"
	"github.com/nektos/act/pkg/common"
	"github.com/nektos/act/pkg/container"
	"github.com/nektos/act/pkg/exprparser"
	"github.com/nektos/act/pkg/model"
	"github.com/opencontainers/selinux/go-selinux"
)

// RunContext contains info about current job
@@ -40,6 +40,7 @@ type RunContext struct {
	IntraActionState map[string]map[string]string
	ExprEval ExpressionEvaluator
	JobContainer container.ExecutionsEnvironment
	ServiceContainers []container.ExecutionsEnvironment
	OutputMappings map[MappableOutput]MappableOutput
	JobName string
	ActionPath string
@@ -87,6 +88,18 @@ func (rc *RunContext) jobContainerName() string {
	return createContainerName("act", rc.String())
}

// networkName return the name of the network which will be created by `act` automatically for job,
// only create network if using a service container
func (rc *RunContext) networkName() (string, bool) {
	if len(rc.Run.Job().Services) > 0 {
		return fmt.Sprintf("%s-%s-network", rc.jobContainerName(), rc.Run.JobID), true
	}
	if rc.Config.ContainerNetworkMode == "" {
		return "host", false
	}
	return string(rc.Config.ContainerNetworkMode), false
}

func getDockerDaemonSocketMountPath(daemonPath string) string {
	if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
		scheme := daemonPath[:protoIndex]
@@ -226,6 +239,7 @@ func (rc *RunContext) startHostEnvironment() common.Executor {
	}
}

//nolint:gocyclo
func (rc *RunContext) startJobContainer() common.Executor {
	return func(ctx context.Context) error {
		logger := common.Logger(ctx)
@@ -259,41 +273,126 @@ func (rc *RunContext) startJobContainer() common.Executor {
		ext := container.LinuxContainerEnvironmentExtensions{}
		binds, mounts := rc.GetBindsAndMounts()

		// specify the network to which the container will connect when `docker create` stage. (like execute command line: docker create --network <networkName> <image>)
		// if using service containers, will create a new network for the containers.
		// and it will be removed after at last.
		networkName, createAndDeleteNetwork := rc.networkName()

		// add service containers
		for serviceID, spec := range rc.Run.Job().Services {
			// interpolate env
			interpolatedEnvs := make(map[string]string, len(spec.Env))
			for k, v := range spec.Env {
				interpolatedEnvs[k] = rc.ExprEval.Interpolate(ctx, v)
			}
			envs := make([]string, 0, len(interpolatedEnvs))
			for k, v := range interpolatedEnvs {
				envs = append(envs, fmt.Sprintf("%s=%s", k, v))
			}
			username, password, err = rc.handleServiceCredentials(ctx, spec.Credentials)
			if err != nil {
				return fmt.Errorf("failed to handle service %s credentials: %w", serviceID, err)
			}
			serviceBinds, serviceMounts := rc.GetServiceBindsAndMounts(spec.Volumes)

			exposedPorts, portBindings, err := nat.ParsePortSpecs(spec.Ports)
			if err != nil {
				return fmt.Errorf("failed to parse service %s ports: %w", serviceID, err)
			}

			serviceContainerName := createContainerName(rc.jobContainerName(), serviceID)
			c := container.NewContainer(&container.NewContainerInput{
				Name: serviceContainerName,
				WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
				Image: spec.Image,
				Username: username,
				Password: password,
				Env: envs,
				Mounts: serviceMounts,
				Binds: serviceBinds,
				Stdout: logWriter,
				Stderr: logWriter,
				Privileged: rc.Config.Privileged,
				UsernsMode: rc.Config.UsernsMode,
				Platform: rc.Config.ContainerArchitecture,
				Options: spec.Options,
				NetworkMode: networkName,
				NetworkAliases: []string{serviceID},
				ExposedPorts: exposedPorts,
				PortBindings: portBindings,
			})
			rc.ServiceContainers = append(rc.ServiceContainers, c)
		}

		rc.cleanUpJobContainer = func(ctx context.Context) error {
			if rc.JobContainer != nil && !rc.Config.ReuseContainers {
				return rc.JobContainer.Remove().
					Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName(), false)).
					Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName()+"-env", false))(ctx)
			reuseJobContainer := func(ctx context.Context) bool {
				return rc.Config.ReuseContainers
			}

			if rc.JobContainer != nil {
				return rc.JobContainer.Remove().IfNot(reuseJobContainer).
					Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName(), false)).IfNot(reuseJobContainer).
					Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName()+"-env", false)).IfNot(reuseJobContainer).
					Then(func(ctx context.Context) error {
						if len(rc.ServiceContainers) > 0 {
							logger.Infof("Cleaning up services for job %s", rc.JobName)
							if err := rc.stopServiceContainers()(ctx); err != nil {
								logger.Errorf("Error while cleaning services: %v", err)
							}
							if createAndDeleteNetwork {
								// clean network if it has been created by act
								// if using service containers
								// it means that the network to which containers are connecting is created by `act_runner`,
								// so, we should remove the network at last.
								logger.Infof("Cleaning up network for job %s, and network name is: %s", rc.JobName, networkName)
								if err := container.NewDockerNetworkRemoveExecutor(networkName)(ctx); err != nil {
									logger.Errorf("Error while cleaning network: %v", err)
								}
							}
						}
						return nil
					})(ctx)
			}
			return nil
		}

		jobContainerNetwork := rc.Config.ContainerNetworkMode.NetworkName()
		if rc.containerImage(ctx) != "" {
			jobContainerNetwork = networkName
		} else if jobContainerNetwork == "" {
			jobContainerNetwork = "host"
		}

		rc.JobContainer = container.NewContainer(&container.NewContainerInput{
			Cmd: nil,
			Entrypoint: []string{"tail", "-f", "/dev/null"},
			WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
			Image: image,
			Username: username,
			Password: password,
			Name: name,
			Env: envList,
			Mounts: mounts,
			NetworkMode: "host",
			Binds: binds,
			Stdout: logWriter,
			Stderr: logWriter,
			Privileged: rc.Config.Privileged,
			UsernsMode: rc.Config.UsernsMode,
			Platform: rc.Config.ContainerArchitecture,
			Options: rc.options(ctx),
			Cmd: nil,
			Entrypoint: []string{"tail", "-f", "/dev/null"},
			WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
			Image: image,
			Username: username,
			Password: password,
			Name: name,
			Env: envList,
			Mounts: mounts,
			NetworkMode: jobContainerNetwork,
			NetworkAliases: []string{rc.Name},
			Binds: binds,
			Stdout: logWriter,
			Stderr: logWriter,
			Privileged: rc.Config.Privileged,
			UsernsMode: rc.Config.UsernsMode,
			Platform: rc.Config.ContainerArchitecture,
			Options: rc.options(ctx),
		})
		if rc.JobContainer == nil {
			return errors.New("Failed to create job container")
		}

		return common.NewPipelineExecutor(
			rc.pullServicesImages(rc.Config.ForcePull),
			rc.JobContainer.Pull(rc.Config.ForcePull),
			rc.stopJobContainer(),
			container.NewDockerNetworkCreateExecutor(networkName).IfBool(createAndDeleteNetwork),
			rc.startServiceContainers(networkName),
			rc.JobContainer.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop),
			rc.JobContainer.Start(false),
			rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
@@ -369,16 +468,50 @@ func (rc *RunContext) UpdateExtraPath(ctx context.Context, githubEnvPath string)
	return nil
}

// stopJobContainer removes the job container (if it exists) and its volume (if it exists) if !rc.Config.ReuseContainers
// stopJobContainer removes the job container (if it exists) and its volume (if it exists)
func (rc *RunContext) stopJobContainer() common.Executor {
	return func(ctx context.Context) error {
		if rc.cleanUpJobContainer != nil && !rc.Config.ReuseContainers {
		if rc.cleanUpJobContainer != nil {
			return rc.cleanUpJobContainer(ctx)
		}
		return nil
	}
}

func (rc *RunContext) pullServicesImages(forcePull bool) common.Executor {
	return func(ctx context.Context) error {
		execs := []common.Executor{}
		for _, c := range rc.ServiceContainers {
			execs = append(execs, c.Pull(forcePull))
		}
		return common.NewParallelExecutor(len(execs), execs...)(ctx)
	}
}

func (rc *RunContext) startServiceContainers(_ string) common.Executor {
	return func(ctx context.Context) error {
		execs := []common.Executor{}
		for _, c := range rc.ServiceContainers {
			execs = append(execs, common.NewPipelineExecutor(
				c.Pull(false),
				c.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop),
				c.Start(false),
			))
		}
		return common.NewParallelExecutor(len(execs), execs...)(ctx)
	}
}

func (rc *RunContext) stopServiceContainers() common.Executor {
	return func(ctx context.Context) error {
		execs := []common.Executor{}
		for _, c := range rc.ServiceContainers {
			execs = append(execs, c.Remove().Finally(c.Close()))
		}
		return common.NewParallelExecutor(len(execs), execs...)(ctx)
	}
}

// Prepare the mounts and binds for the worker

// ActionCacheDir is for rc
@@ -853,3 +986,53 @@ func (rc *RunContext) handleCredentials(ctx context.Context) (string, string, er

	return username, password, nil
}

func (rc *RunContext) handleServiceCredentials(ctx context.Context, creds map[string]string) (username, password string, err error) {
	if creds == nil {
		return
	}
	if len(creds) != 2 {
		err = fmt.Errorf("invalid property count for key 'credentials:'")
		return
	}

	ee := rc.NewExpressionEvaluator(ctx)
	if username = ee.Interpolate(ctx, creds["username"]); username == "" {
		err = fmt.Errorf("failed to interpolate credentials.username")
		return
	}

	if password = ee.Interpolate(ctx, creds["password"]); password == "" {
		err = fmt.Errorf("failed to interpolate credentials.password")
		return
	}

	return
}

// GetServiceBindsAndMounts returns the binds and mounts for the service container, resolving paths as appopriate
func (rc *RunContext) GetServiceBindsAndMounts(svcVolumes []string) ([]string, map[string]string) {
	if rc.Config.ContainerDaemonSocket == "" {
		rc.Config.ContainerDaemonSocket = "/var/run/docker.sock"
	}
	binds := []string{}
	if rc.Config.ContainerDaemonSocket != "-" {
		daemonPath := getDockerDaemonSocketMountPath(rc.Config.ContainerDaemonSocket)
		binds = append(binds, fmt.Sprintf("%s:%s", daemonPath, "/var/run/docker.sock"))
	}

	mounts := map[string]string{}

	for _, v := range svcVolumes {
		if !strings.Contains(v, ":") || filepath.IsAbs(v) {
			// Bind anonymous volume or host file.
			binds = append(binds, v)
		} else {
			// Mount existing volume.
			paths := strings.SplitN(v, ":", 2)
			mounts[paths[0]] = paths[1]
		}
	}

	return binds, mounts
}
@@ -7,10 +7,10 @@ import (
	"os"
	"runtime"

	log "github.com/sirupsen/logrus"

	docker_container "github.com/docker/docker/api/types/container"
	"github.com/nektos/act/pkg/common"
	"github.com/nektos/act/pkg/model"
	log "github.com/sirupsen/logrus"
)

// Runner provides capabilities to run GitHub actions
@@ -20,44 +20,45 @@ type Runner interface {

// Config contains the config for a new runner
type Config struct {
	Actor string // the user that triggered the event
	Workdir string // path to working directory
	ActionCacheDir string // path used for caching action contents
	BindWorkdir bool // bind the workdir to the job container
	EventName string // name of event to run
	EventPath string // path to JSON file to use for event.json in containers
	DefaultBranch string // name of the main branch for this repository
	ReuseContainers bool // reuse containers to maintain state
	ForcePull bool // force pulling of the image, even if already present
	ForceRebuild bool // force rebuilding local docker image action
	LogOutput bool // log the output from docker run
	JSONLogger bool // use json or text logger
	LogPrefixJobID bool // switches from the full job name to the job id
	Env map[string]string // env for containers
	Inputs map[string]string // manually passed action inputs
	Secrets map[string]string // list of secrets
	Vars map[string]string // list of vars
	Token string // GitHub token
	InsecureSecrets bool // switch hiding output when printing to terminal
	Platforms map[string]string // list of platforms
	Privileged bool // use privileged mode
	UsernsMode string // user namespace to use
	ContainerArchitecture string // Desired OS/architecture platform for running containers
	ContainerDaemonSocket string // Path to Docker daemon socket
	ContainerOptions string // Options for the job container
	UseGitIgnore bool // controls if paths in .gitignore should not be copied into container, default true
	GitHubInstance string // GitHub instance to use, default "github.com"
	ContainerCapAdd []string // list of kernel capabilities to add to the containers
	ContainerCapDrop []string // list of kernel capabilities to remove from the containers
	AutoRemove bool // controls if the container is automatically removed upon workflow completion
	ArtifactServerPath string // the path where the artifact server stores uploads
	ArtifactServerAddr string // the address the artifact server binds to
	ArtifactServerPort string // the port the artifact server binds to
	NoSkipCheckout bool // do not skip actions/checkout
	RemoteName string // remote name in local git repo config
	ReplaceGheActionWithGithubCom []string // Use actions from GitHub Enterprise instance to GitHub
	ReplaceGheActionTokenWithGithubCom string // Token of private action repo on GitHub.
	Matrix map[string]map[string]bool // Matrix config to run
	Actor string // the user that triggered the event
	Workdir string // path to working directory
	ActionCacheDir string // path used for caching action contents
	BindWorkdir bool // bind the workdir to the job container
	EventName string // name of event to run
	EventPath string // path to JSON file to use for event.json in containers
	DefaultBranch string // name of the main branch for this repository
	ReuseContainers bool // reuse containers to maintain state
	ForcePull bool // force pulling of the image, even if already present
	ForceRebuild bool // force rebuilding local docker image action
	LogOutput bool // log the output from docker run
	JSONLogger bool // use json or text logger
	LogPrefixJobID bool // switches from the full job name to the job id
	Env map[string]string // env for containers
	Inputs map[string]string // manually passed action inputs
	Secrets map[string]string // list of secrets
	Vars map[string]string // list of vars
	Token string // GitHub token
	InsecureSecrets bool // switch hiding output when printing to terminal
	Platforms map[string]string // list of platforms
	Privileged bool // use privileged mode
	UsernsMode string // user namespace to use
	ContainerArchitecture string // Desired OS/architecture platform for running containers
	ContainerDaemonSocket string // Path to Docker daemon socket
	ContainerOptions string // Options for the job container
	UseGitIgnore bool // controls if paths in .gitignore should not be copied into container, default true
	GitHubInstance string // GitHub instance to use, default "github.com"
	ContainerCapAdd []string // list of kernel capabilities to add to the containers
	ContainerCapDrop []string // list of kernel capabilities to remove from the containers
	AutoRemove bool // controls if the container is automatically removed upon workflow completion
	ArtifactServerPath string // the path where the artifact server stores uploads
	ArtifactServerAddr string // the address the artifact server binds to
	ArtifactServerPort string // the port the artifact server binds to
	NoSkipCheckout bool // do not skip actions/checkout
	RemoteName string // remote name in local git repo config
	ReplaceGheActionWithGithubCom []string // Use actions from GitHub Enterprise instance to GitHub
	ReplaceGheActionTokenWithGithubCom string // Token of private action repo on GitHub.
	Matrix map[string]map[string]bool // Matrix config to run
	ContainerNetworkMode docker_container.NetworkMode // the network mode of job containers (the value of --network)
}

type caller struct {
@@ -302,6 +302,11 @@ func TestRunEvent(t *testing.T) {
		{workdir, "set-env-step-env-override", "push", "", platforms, secrets},
		{workdir, "set-env-new-env-file-per-step", "push", "", platforms, secrets},
		{workdir, "no-panic-on-invalid-composite-action", "push", "jobs failed due to invalid action", platforms, secrets},

		// services
		{workdir, "services", "push", "", platforms, secrets},
		{workdir, "services-host-network", "push", "", platforms, secrets},
		{workdir, "services-with-container", "push", "", platforms, secrets},
	}

	for _, table := range tables {
pkg/runner/testdata/services-host-network/push.yml (new file, 14 lines)
@@ -0,0 +1,14 @@
name: services-host-network
on: push
jobs:
  services-host-network:
    runs-on: ubuntu-latest
    services:
      nginx:
        image: "nginx:latest"
        ports:
          - "8080:80"
    steps:
      - run: apt-get -qq update && apt-get -yqq install --no-install-recommends curl net-tools
      - run: netstat -tlpen
      - run: curl -v http://localhost:8080
pkg/runner/testdata/services-with-container/push.yml (new file, 16 lines)
@@ -0,0 +1,16 @@
name: services-with-containers
on: push
jobs:
  services-with-containers:
    runs-on: ubuntu-latest
    # https://docs.github.com/en/actions/using-containerized-services/about-service-containers#running-jobs-in-a-container
    container:
      image: "ubuntu:latest"
    services:
      nginx:
        image: "nginx:latest"
        ports:
          - "8080:80"
    steps:
      - run: apt-get -qq update && apt-get -yqq install --no-install-recommends curl
      - run: curl -v http://nginx:80
pkg/runner/testdata/services/push.yaml (new file, 26 lines)
@@ -0,0 +1,26 @@
name: services
on: push
jobs:
  services:
    name: Reproduction of failing Services interpolation
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12
        env:
          POSTGRES_USER: runner
          POSTGRES_PASSWORD: mysecretdbpass
          POSTGRES_DB: mydb
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - name: Echo the Postgres service ID / Network / Ports
        run: |
          echo "id: ${{ job.services.postgres.id }}"
          echo "network: ${{ job.services.postgres.network }}"
          echo "ports: ${{ job.services.postgres.ports }}"