Compare commits

..

30 Commits

Author SHA1 Message Date
Jason Song
a8298365fe Fix incompatibility caused by tracking upstream and add actions to test it (#24)
Reviewed-on: https://gitea.com/gitea/act/pulls/24
2023-03-16 15:00:11 +08:00
Jason Song
1dda0aec69 Merge tag 'nektos/v0.2.43'
Conflicts:
	pkg/container/docker_run.go
	pkg/runner/action.go
	pkg/runner/logger.go
	pkg/runner/run_context.go
	pkg/runner/runner.go
	pkg/runner/step_action_remote_test.go
2023-03-16 11:45:29 +08:00
Jason Song
49e204166d Update forking rules (#23)
As the title.

Reviewed-on: https://gitea.com/gitea/act/pulls/23
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-16 10:46:41 +08:00
Zettat123
a36b003f7a Improve running with go (#22)
Close #21

I have tested this PR and run Go actions successfully on:
- Windows host
- Docker on Windows
- Linux host
- Docker on Linux

Before running Go actions, we need to make sure that Go has been installed on the host or in the Docker image.

Reviewed-on: https://gitea.com/gitea/act/pulls/22
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-03-14 16:55:36 +08:00
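Since #22 notes that Go must already be installed on the host or in the job image, a runner-side pre-flight check is one way to fail early. This is a hypothetical sketch (not part of the PR), using only the standard library:

```go
package main

import (
	"fmt"
	"os/exec"
)

// checkGo verifies that a `go` binary is on PATH and reports its version,
// so a Go-based action fails early with a clear message instead of mid-step.
func checkGo() error {
	path, err := exec.LookPath("go")
	if err != nil {
		return fmt.Errorf("go toolchain not found on PATH: %w", err)
	}
	out, err := exec.Command(path, "version").CombinedOutput()
	if err != nil {
		return fmt.Errorf("running `go version` failed: %w", err)
	}
	fmt.Printf("using %s: %s", path, out)
	return nil
}

func main() {
	if err := checkGo(); err != nil {
		fmt.Println(err)
	}
}
```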
Zettat123
0671d16694 Fix missing ActionRunsUsingGo (#20)
- Allow `using: "go"` when unmarshalling YAML.
- Add `ActionRunsUsingGo` to returned errors.

Co-authored-by: Zettat123 <zettat123@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/20
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@noreply.gitea.io>
Co-committed-by: Zettat123 <zettat123@noreply.gitea.io>
2023-03-09 22:51:58 +08:00
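A rough illustration of what accepting `using: "go"` during YAML unmarshalling can look like. The struct and the list of runtimes below are assumptions for the sketch, not act's actual types:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// actionRuns mirrors the `runs` section of an action.yml (illustrative only).
type actionRuns struct {
	Using string `yaml:"using"`
	Main  string `yaml:"main,omitempty"`
}

// validateUsing accepts the runtimes this fork cares about, including "go".
func validateUsing(using string) error {
	switch using {
	case "node12", "node16", "docker", "composite", "go":
		return nil
	default:
		return fmt.Errorf("unsupported runtime %q (expected node12, node16, docker, composite or go)", using)
	}
}

func main() {
	src := []byte("using: \"go\"\nmain: main.go\n")
	var runs actionRuns
	if err := yaml.Unmarshal(src, &runs); err != nil {
		panic(err)
	}
	if err := validateUsing(runs.Using); err != nil {
		panic(err)
	}
	fmt.Printf("runs.using=%s main=%s\n", runs.Using, runs.Main)
}
```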
a1012112796
881dbdb81b make log level configurable (#19)
related: https://gitea.com/gitea/act_runner/pulls/39
Reviewed-on: https://gitea.com/gitea/act/pulls/19
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: a1012112796 <1012112796@qq.com>
Co-committed-by: a1012112796 <1012112796@qq.com>
2023-03-08 14:46:39 +08:00
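A minimal sketch of the change's idea with plain logrus: take the level from configuration (here a string) instead of hard-coding it. How act_runner passes the value through is in the linked PR, not shown here:

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

// newJobLogger builds a logger whose level comes from configuration
// instead of being hard-coded (illustrative; not act's actual constructor).
func newJobLogger(level string) *log.Logger {
	l := log.New()
	lvl, err := log.ParseLevel(level) // e.g. "trace", "debug", "info"
	if err != nil {
		lvl = log.InfoLevel // fall back to a sane default on bad input
	}
	l.SetLevel(lvl)
	return l
}

func main() {
	logger := newJobLogger("debug")
	logger.Debug("debug output is now visible")
	logger.Trace("trace output is still filtered out")
}
```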
Jason Song
1252e551b8 Replace more strings.ReplaceAll with safeFilename (#18)
Follow #16 #17

Reviewed-on: https://gitea.com/gitea/act/pulls/18
2023-02-24 14:20:34 +08:00
Jason Song
c614d8b96c Replace more strings.ReplaceAll with safeFilename (#17)
Follow #16.

Reviewed-on: https://gitea.com/gitea/act/pulls/17
2023-02-24 12:11:30 +08:00
Jason Song
84b6649b8b Safe filename (#16)
Fix #15.

Reviewed-on: https://gitea.com/gitea/act/pulls/16
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-02-24 10:17:36 +08:00
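Commits #16, #17 and #18 funnel filename sanitisation through a single `safeFilename` helper instead of scattered `strings.ReplaceAll` calls. One plausible shape for such a helper; the exact character set is an assumption, not necessarily what act uses:

```go
package main

import (
	"fmt"
	"strings"
)

// safeFilename replaces characters that are problematic in file names
// (path separators, Windows-reserved characters, control characters)
// with a dash. Illustrative sketch only.
func safeFilename(name string) string {
	return strings.Map(func(r rune) rune {
		switch {
		case r < 0x20: // control characters
			return '-'
		case strings.ContainsRune(`<>:"/\|?*`, r):
			return '-'
		default:
			return r
		}
	}, name)
}

func main() {
	fmt.Println(safeFilename(`build: linux/amd64 "release"`)) // build- linux-amd64 -release-
}
```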
Jason Song
dca7801682 Support uses http(s)://host/owner/repo as actions (#14)
Examples:

```yaml
jobs:
  my_first_job:
    steps:
      - name: My first step
        uses: https://gitea.com/actions/heroku@main
      - name: My second step
        uses: http://example.com/actions/heroku@v2.0.1
```

Reviewed-on: https://gitea.com/gitea/act/pulls/14
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-02-15 16:28:33 +08:00
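For reference, a small sketch of how a `uses: http(s)://host/owner/repo@ref` value from the example above could be split into its parts. The `parseRemoteAction` helper is hypothetical, not the parser act ships:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// remoteAction is an illustrative breakdown of `uses: http(s)://host/owner/repo@ref`.
type remoteAction struct {
	URL  string // scheme://host
	Org  string
	Repo string
	Ref  string
}

func parseRemoteAction(uses string) (*remoteAction, error) {
	spec, ref, ok := strings.Cut(uses, "@")
	if !ok {
		return nil, fmt.Errorf("missing @ref in %q", uses)
	}
	u, err := url.Parse(spec)
	if err != nil {
		return nil, err
	}
	parts := strings.Split(strings.Trim(u.Path, "/"), "/")
	if len(parts) < 2 {
		return nil, fmt.Errorf("expected owner/repo in %q", u.Path)
	}
	return &remoteAction{
		URL:  u.Scheme + "://" + u.Host,
		Org:  parts[0],
		Repo: parts[1],
		Ref:  ref,
	}, nil
}

func main() {
	a, err := parseRemoteAction("https://gitea.com/actions/heroku@main")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", *a) // {URL:https://gitea.com Org:actions Repo:heroku Ref:main}
}
```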
Lunny Xiao
4b99ed8916 Support go run on action (#12)
Reviewed-on: https://gitea.com/gitea/act/pulls/12
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-committed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-15 16:10:15 +08:00
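The mechanics behind running an action "with go" presumably come down to executing the action's entrypoint via `go run` with the step environment applied. A hedged sketch of that idea, not the code from the PR:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
)

// runGoAction runs a Go-based action's entrypoint (e.g. "main.go") inside
// its checked-out directory, inheriting the step environment. Illustrative only.
func runGoAction(ctx context.Context, actionDir, entrypoint string, env []string) error {
	cmd := exec.CommandContext(ctx, "go", "run", entrypoint)
	cmd.Dir = actionDir
	cmd.Env = append(os.Environ(), env...) // step env (INPUT_*, GITHUB_*) on top of host env
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := runGoAction(context.Background(), "./my-go-action", "main.go",
		[]string{"INPUT_NAME=world"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```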
Lunny Xiao
e46ede1b17 parse raw on (#11)
Reviewed-on: https://gitea.com/gitea/act/pulls/11
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-committed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-01-31 15:49:55 +08:00
Jason Song
1ba076d321 Erase needs of job in SingleWorkflow (#9)
Reviewed-on: https://gitea.com/gitea/act/pulls/9
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-01-30 11:42:19 +08:00
appleboy
0efa2d5e63 fix(test): needs condition. (#8)
as title.

Signed-off-by: Bo-Yi.Wu <appleboy.tw@gmail.com>

Co-authored-by: Bo-Yi.Wu <appleboy.tw@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/8
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-01-21 17:09:51 +08:00
Jason Song
0a37a03f2e Clone actions without token (#6)
We shouldn't provide a token when cloning actions: the token comes from the instance that triggered the task, which might not be the instance that provides the actions.

For GitHub, they are always the same instance, github.com. But for Gitea, a task triggered by a.com can clone actions from b.com.

Reviewed-on: https://gitea.com/gitea/act/pulls/6
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-01-06 13:34:38 +08:00
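With go-git (already a dependency), cloning anonymously simply means leaving `Auth` unset in the clone options. A minimal sketch under that assumption:

```go
package main

import (
	"fmt"

	"github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing"
)

// cloneAction fetches an action repository without any credentials,
// so a token issued by instance A is never sent to instance B.
func cloneAction(dir, url, ref string) error {
	_, err := git.PlainClone(dir, false, &git.CloneOptions{
		URL:           url,
		ReferenceName: plumbing.NewBranchReferenceName(ref),
		Depth:         1,
		// No Auth field: the clone is anonymous on purpose.
	})
	return err
}

func main() {
	if err := cloneAction("/tmp/heroku-action", "https://gitea.com/actions/heroku", "main"); err != nil {
		fmt.Println("clone failed:", err)
	}
}
```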
appleboy
88cce47022 feat(workflow): support schedule event (#4)
fix https://gitea.com/gitea/act/issues/3

Signed-off-by: Bo-Yi.Wu <appleboy.tw@gmail.com>

Co-authored-by: Bo-Yi.Wu <appleboy.tw@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/4
2022-12-10 09:14:14 +08:00
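The module graph in this compare includes `github.com/robfig/cron`; whether act uses it for schedule triggers isn't shown here, but it illustrates how a 5-field cron expression from `on: schedule` can be evaluated:

```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron"
)

func main() {
	// "30 5 * * 1" = every Monday at 05:30 (standard 5-field cron syntax).
	sched, err := cron.ParseStandard("30 5 * * 1")
	if err != nil {
		panic(err)
	}
	now := time.Now()
	next := sched.Next(now)
	fmt.Printf("next scheduled run after %s is %s\n", now.Format(time.RFC3339), next.Format(time.RFC3339))
}
```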
Jason Song
7920109e89 Merge tag 'nektos/v0.2.34' 2022-12-05 17:08:17 +08:00
Jason Song
4cacc14d22 feat: adjust container name format (#1)
Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/1
2022-11-24 14:45:32 +08:00
Jason Song
c6b8548d35 feat: support PlatformPicker 2022-11-22 16:39:19 +08:00
Jason Song
64cae197a4 Support step number 2022-11-22 16:11:35 +08:00
Jason Song
7fb84a54a8 chore: update LICENSE 2022-11-22 15:26:02 +08:00
Jason Song
70cc6c017b docs: add naming rule for git ref 2022-11-22 15:05:12 +08:00
Lunny Xiao
d7e9ea75fc disable graphql url because gitea doesn't support that 2022-11-22 14:42:48 +08:00
Jason Song
b9c20dcaa4 feat: support more options of containers 2022-11-22 14:42:12 +08:00
Jason Song
97629ae8af fix: set logger with trace level 2022-11-22 14:41:57 +08:00
Lunny Xiao
b9a9812ad9 Fix API 2022-11-22 14:22:03 +08:00
Lunny Xiao
113c3e98fb support bot site 2022-11-22 14:17:06 +08:00
Jason Song
7815eec33b Add custom enhancements 2022-11-22 14:16:35 +08:00
Jason Song
c051090583 Add description of branches 2022-11-22 14:02:01 +08:00
fuxiaohei
0fa1fe0310 feat: add logger hook for standalone job logger 2022-11-22 14:00:13 +08:00
69 changed files with 1936 additions and 1988 deletions

44
.gitea/workflows/test.yml Normal file
View File

@@ -0,0 +1,44 @@
name: checks
on:
  - push
  - pull_request

env:
  GOPROXY: https://goproxy.io,direct
  GOPATH: /go_path
  GOCACHE: /go_cache

jobs:
  lint:
    name: check and test
    runs-on: ubuntu-latest
    steps:
      - name: cache go path
        id: cache-go-path
        uses: https://github.com/actions/cache@v3
        with:
          path: /go_path
          key: go_path-${{ github.repository }}-${{ github.ref_name }}
          restore-keys: |
            go_path-${{ github.repository }}-
            go_path-
      - name: cache go cache
        id: cache-go-cache
        uses: https://github.com/actions/cache@v3
        with:
          path: /go_cache
          key: go_cache-${{ github.repository }}-${{ github.ref_name }}
          restore-keys: |
            go_cache-${{ github.repository }}-
            go_cache-
      - uses: actions/setup-go@v3
        with:
          go-version: 1.20
      - uses: actions/checkout@v3
      - name: vet checks
        run: go vet -v ./...
      - name: build
        run: go build -v ./...
      - name: test
        run: go test -v ./pkg/jobparser
      # TODO test more packages

View File

@@ -1,10 +1,6 @@
name: checks
on: [pull_request, workflow_dispatch]
concurrency:
cancel-in-progress: true
group: ${{ github.workflow }}-${{ github.ref }}
env:
ACT_OWNER: ${{ github.repository_owner }}
ACT_REPOSITORY: ${{ github.repository }}
@@ -19,14 +15,14 @@ jobs:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: actions/setup-go@v4
- uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
check-latest: true
- uses: golangci/golangci-lint-action@v3.4.0
with:
version: v1.47.2
- uses: megalinter/megalinter/flavors/go@v6.22.2
- uses: megalinter/megalinter/flavors/go@v6.20.0
env:
DEFAULT_BRANCH: master
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -43,7 +39,7 @@ jobs:
fetch-depth: 2
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- uses: actions/setup-go@v4
- uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
check-latest: true
@@ -58,10 +54,8 @@ jobs:
uses: ./.github/actions/run-tests
with:
upload-logs-name: logs-linux
- name: Run act from cli
run: go run main.go -P ubuntu-latest=node:16-buster-slim -C ./pkg/runner/testdata/ -W ./basic/push.yml
- name: Upload Codecov report
uses: codecov/codecov-action@v3.1.3
uses: codecov/codecov-action@v3.1.1
with:
files: coverage.txt
fail_ci_if_error: true # optional (default = false)
@@ -78,7 +72,7 @@ jobs:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: actions/setup-go@v4
- uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
check-latest: true
@@ -93,7 +87,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
- uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
check-latest: true

View File

@@ -17,8 +17,8 @@ jobs:
fetch-depth: 0
ref: master
token: ${{ secrets.GORELEASER_GITHUB_TOKEN }}
- uses: fregante/setup-git-user@v2
- uses: actions/setup-go@v4
- uses: fregante/setup-git-user@v1
- uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
check-latest: true

View File

@@ -15,7 +15,7 @@ jobs:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: actions/setup-go@v4
- uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
check-latest: true

View File

@@ -8,7 +8,7 @@ jobs:
name: Stale
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v8
- uses: actions/stale@v7
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'Issue is stale and will be closed in 14 days unless there is new activity'

1
.gitignore vendored
View File

@@ -31,3 +31,4 @@ coverage.txt
# megalinter
report/
act

View File

@@ -1,5 +1,6 @@
MIT License
Copyright (c) 2022 The Gitea Authors
Copyright (c) 2019
Permission is hereby granted, free of charge, to any person obtaining a copy

View File

@@ -1,3 +1,28 @@
## Forking rules
This is a custom fork of [nektos/act](https://github.com/nektos/act/), for the purpose of serving [act_runner](https://gitea.com/gitea/act_runner).
It cannot be used as a command line tool anymore, only as a library.
It's a soft fork, which means that it will track the latest release of nektos/act.
Branches:
- `main`: default branch, contains custom changes, based on the latest release (not the latest master branch) of nektos/act.
- `nektos/master`: mirror for the master branch of nektos/act.
Tags:
- `nektos/vX.Y.Z`: mirror for `vX.Y.Z` of [nektos/act](https://github.com/nektos/act/).
- `vX.YZ.*`: based on `nektos/vX.Y.Z`, contains custom changes.
- Examples:
- `nektos/v0.2.23` -> `v0.223.*`
- `nektos/v0.3.1` -> `v0.301.*`, not ~~`v0.31.*`~~
- `nektos/v0.10.1` -> `v0.1001.*`, not ~~`v0.101.*`~~
- `nektos/v0.3.100` -> not ~~`v0.3100.*`~~; I don't think this will really happen, but if it does, we can find a way to handle it.
---
![act-logo](https://github.com/nektos/act/wiki/img/logo-150.png)
# Overview [![push](https://github.com/nektos/act/workflows/push/badge.svg?branch=master&event=push)](https://github.com/nektos/act/actions) [![Join the chat at https://gitter.im/nektos/act](https://badges.gitter.im/nektos/act.svg)](https://gitter.im/nektos/act?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/github.com/nektos/act)](https://goreportcard.com/report/github.com/nektos/act) [![awesome-runners](https://img.shields.io/badge/listed%20on-awesome--runners-blue.svg)](https://github.com/jonico/awesome-runners)
@@ -121,7 +146,7 @@ nix run nixpkgs#act
Act can be installed as a [GitHub CLI](https://cli.github.com/) extension:
```sh
gh extension install https://github.com/nektos/gh-act
gh extension install nektos/gh-act
```
## Other install options
@@ -164,9 +189,6 @@ act pull_request
# Run a specific job:
act -j test
# Collect artifacts to the /tmp/artifacts folder:
act --artifact-server-path /tmp/artifacts
# Run a job in a specific workflow (useful if you have duplicate job names)
act -j lint -W .github/workflows/checks.yml
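The tag-numbering rule in the README above (`nektos/vX.Y.Z` maps to `vX.<Y*100+Z>.*`) can be restated as a tiny helper. This is only an illustration of the rule, not code from the fork:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// forkTagPrefix maps an upstream tag like "nektos/v0.2.43" to the fork's
// tag prefix "v0.243" by folding minor and patch into one number (minor*100+patch).
func forkTagPrefix(upstream string) (string, error) {
	v := strings.TrimPrefix(strings.TrimPrefix(upstream, "nektos/"), "v")
	parts := strings.Split(v, ".")
	if len(parts) != 3 {
		return "", fmt.Errorf("expected X.Y.Z, got %q", v)
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return "", err
	}
	patch, err := strconv.Atoi(parts[2])
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("v%s.%d", parts[0], minor*100+patch), nil
}

func main() {
	for _, tag := range []string{"nektos/v0.2.23", "nektos/v0.3.1", "nektos/v0.10.1"} {
		p, _ := forkTagPrefix(tag)
		fmt.Printf("%s -> %s.*\n", tag, p) // v0.223.*, v0.301.*, v0.1001.*
	}
}
```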

View File

@@ -1 +1 @@
0.2.45
0.2.43

View File

@@ -1,27 +0,0 @@
package cmd

import (
	"os"
	"path/filepath"

	log "github.com/sirupsen/logrus"
)

var (
	UserHomeDir  string
	CacheHomeDir string
)

func init() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	UserHomeDir = home

	if v := os.Getenv("XDG_CACHE_HOME"); v != "" {
		CacheHomeDir = v
	} else {
		CacheHomeDir = filepath.Join(UserHomeDir, ".cache")
	}
}

View File

@@ -42,16 +42,11 @@ type Input struct {
artifactServerPath string
artifactServerAddr string
artifactServerPort string
noCacheServer bool
cacheServerPath string
cacheServerAddr string
cacheServerPort uint16
jsonLogger bool
noSkipCheckout bool
remoteName string
replaceGheActionWithGithubCom []string
replaceGheActionTokenWithGithubCom string
matrix []string
}
func (i *Input) resolve(path string) string {

View File

@@ -11,6 +11,7 @@ import (
"strings"
"time"
"github.com/mitchellh/go-homedir"
log "github.com/sirupsen/logrus"
)
@@ -132,7 +133,16 @@ func saveNoticesEtag(etag string) {
}
func etagPath() string {
dir := filepath.Join(CacheHomeDir, "act")
var xdgCache string
var ok bool
if xdgCache, ok = os.LookupEnv("XDG_CACHE_HOME"); !ok || xdgCache == "" {
if home, err := homedir.Dir(); err == nil {
xdgCache = filepath.Join(home, ".cache")
} else if xdgCache, err = filepath.Abs("."); err != nil {
log.Fatal(err)
}
}
dir := filepath.Join(xdgCache, "act")
if err := os.MkdirAll(dir, 0o777); err != nil {
log.Fatal(err)
}

View File

@@ -15,12 +15,11 @@ import (
"github.com/adrg/xdg"
"github.com/andreaskoch/go-fswatch"
"github.com/joho/godotenv"
"github.com/mitchellh/go-homedir"
gitignore "github.com/sabhiram/go-gitignore"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gopkg.in/yaml.v3"
"github.com/nektos/act/pkg/artifactcache"
"github.com/nektos/act/pkg/artifacts"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/container"
@@ -67,7 +66,6 @@ func Execute(ctx context.Context, version string) {
rootCmd.Flags().BoolVar(&input.autoRemove, "rm", false, "automatically remove container(s)/volume(s) after a workflow(s) failure")
rootCmd.Flags().StringArrayVarP(&input.replaceGheActionWithGithubCom, "replace-ghe-action-with-github-com", "", []string{}, "If you are using GitHub Enterprise Server and allow specified actions from GitHub (github.com), you can set actions on this. (e.g. --replace-ghe-action-with-github-com =github/super-linter)")
rootCmd.Flags().StringVar(&input.replaceGheActionTokenWithGithubCom, "replace-ghe-action-token-with-github-com", "", "If you are using replace-ghe-action-with-github-com and you want to use private actions on GitHub, you have to set personal access token")
rootCmd.Flags().StringArrayVarP(&input.matrix, "matrix", "", []string{}, "specify which matrix configuration to include (e.g. --matrix java:13")
rootCmd.PersistentFlags().StringVarP(&input.actor, "actor", "a", "nektos/act", "user that triggered the event")
rootCmd.PersistentFlags().StringVarP(&input.workflowsPath, "workflows", "W", "./.github/workflows/", "path to workflow file(s)")
rootCmd.PersistentFlags().BoolVarP(&input.noWorkflowRecurse, "no-recurse", "", false, "Flag to disable running workflows from subdirectories of specified path in '--workflows'/'-W' flag")
@@ -81,17 +79,13 @@ func Execute(ctx context.Context, version string) {
rootCmd.PersistentFlags().StringVarP(&input.envfile, "env-file", "", ".env", "environment file to read and use as env in the containers")
rootCmd.PersistentFlags().StringVarP(&input.inputfile, "input-file", "", ".input", "input file to read and use as action input")
rootCmd.PersistentFlags().StringVarP(&input.containerArchitecture, "container-architecture", "", "", "Architecture which should be used to run containers, e.g.: linux/amd64. If not specified, will use host default architecture. Requires Docker server API Version 1.41+. Ignored on earlier Docker server platforms.")
rootCmd.PersistentFlags().StringVarP(&input.containerDaemonSocket, "container-daemon-socket", "", "", "URI to Docker Engine socket (e.g.: unix://~/.docker/run/docker.sock or - to disable bind mounting the socket)")
rootCmd.PersistentFlags().StringVarP(&input.containerDaemonSocket, "container-daemon-socket", "", "/var/run/docker.sock", "Path to Docker daemon socket which will be mounted to containers")
rootCmd.PersistentFlags().StringVarP(&input.containerOptions, "container-options", "", "", "Custom docker container options for the job container without an options property in the job definition")
rootCmd.PersistentFlags().StringVarP(&input.githubInstance, "github-instance", "", "github.com", "GitHub instance to use. Don't use this if you are not using GitHub Enterprise Server.")
rootCmd.PersistentFlags().StringVarP(&input.artifactServerPath, "artifact-server-path", "", "", "Defines the path where the artifact server stores uploads and retrieves downloads from. If not specified the artifact server will not start.")
rootCmd.PersistentFlags().StringVarP(&input.artifactServerAddr, "artifact-server-addr", "", common.GetOutboundIP().String(), "Defines the address to which the artifact server binds.")
rootCmd.PersistentFlags().StringVarP(&input.artifactServerPort, "artifact-server-port", "", "34567", "Defines the port where the artifact server listens.")
rootCmd.PersistentFlags().BoolVarP(&input.noSkipCheckout, "no-skip-checkout", "", false, "Do not skip actions/checkout")
rootCmd.PersistentFlags().BoolVarP(&input.noCacheServer, "no-cache-server", "", false, "Disable cache server")
rootCmd.PersistentFlags().StringVarP(&input.cacheServerPath, "cache-server-path", "", filepath.Join(CacheHomeDir, "actcache"), "Defines the path where the cache server stores caches.")
rootCmd.PersistentFlags().StringVarP(&input.cacheServerAddr, "cache-server-addr", "", common.GetOutboundIP().String(), "Defines the address to which the cache server binds.")
rootCmd.PersistentFlags().Uint16VarP(&input.cacheServerPort, "cache-server-port", "", 0, "Defines the port where the artifact server listens. 0 means a randomly available port.")
rootCmd.SetArgs(args())
if err := rootCmd.Execute(); err != nil {
@@ -100,6 +94,11 @@ func Execute(ctx context.Context, version string) {
}
func configLocations() []string {
home, err := homedir.Dir()
if err != nil {
log.Fatal(err)
}
configFileName := ".actrc"
// reference: https://specifications.freedesktop.org/basedir-spec/latest/ar01s03.html
@@ -112,39 +111,12 @@ func configLocations() []string {
}
return []string{
filepath.Join(UserHomeDir, configFileName),
filepath.Join(home, configFileName),
actrcXdg,
filepath.Join(".", configFileName),
}
}
var commonSocketPaths = []string{
"/var/run/docker.sock",
"/var/run/podman/podman.sock",
"$HOME/.colima/docker.sock",
"$XDG_RUNTIME_DIR/docker.sock",
`\\.\pipe\docker_engine`,
"$HOME/.docker/run/docker.sock",
}
// returns socket path or false if not found any
func socketLocation() (string, bool) {
if dockerHost, exists := os.LookupEnv("DOCKER_HOST"); exists {
return dockerHost, true
}
for _, p := range commonSocketPaths {
if _, err := os.Lstat(os.ExpandEnv(p)); err == nil {
if strings.HasPrefix(p, `\\.\`) {
return "npipe://" + filepath.ToSlash(os.ExpandEnv(p)), true
}
return "unix://" + filepath.ToSlash(os.ExpandEnv(p)), true
}
}
return "", false
}
func args() []string {
actrc := configLocations()
@@ -158,6 +130,15 @@ func args() []string {
}
func bugReport(ctx context.Context, version string) error {
var commonSocketPaths = []string{
"/var/run/docker.sock",
"/var/run/podman/podman.sock",
"$HOME/.colima/docker.sock",
"$XDG_RUNTIME_DIR/docker.sock",
`\\.\pipe\docker_engine`,
"$HOME/.docker/run/docker.sock",
}
sprintf := func(key, val string) string {
return fmt.Sprintf("%-24s%s\n", key, val)
}
@@ -168,20 +149,19 @@ func bugReport(ctx context.Context, version string) error {
report += sprintf("NumCPU:", fmt.Sprint(runtime.NumCPU()))
var dockerHost string
var exists bool
if dockerHost, exists = os.LookupEnv("DOCKER_HOST"); !exists {
dockerHost = "DOCKER_HOST environment variable is not set"
} else if dockerHost == "" {
dockerHost = "DOCKER_HOST environment variable is empty."
if dockerHost = os.Getenv("DOCKER_HOST"); dockerHost == "" {
dockerHost = "DOCKER_HOST environment variable is unset/empty."
}
report += sprintf("Docker host:", dockerHost)
report += fmt.Sprintln("Sockets found:")
for _, p := range commonSocketPaths {
if _, err := os.Lstat(os.ExpandEnv(p)); err != nil {
if strings.HasPrefix(p, `$`) {
v := strings.Split(p, `/`)[0]
p = strings.Replace(p, v, os.Getenv(strings.TrimPrefix(v, `$`)), 1)
}
if _, err := os.Stat(p); err != nil {
continue
} else if _, err := os.Stat(os.ExpandEnv(p)); err != nil {
report += fmt.Sprintf("\t%s(broken)\n", p)
} else {
report += fmt.Sprintf("\t%s\n", p)
}
@@ -301,26 +281,9 @@ func parseEnvs(env []string, envs map[string]string) bool {
return false
}
func readYamlFile(file string) (map[string]string, error) {
content, err := os.ReadFile(file)
if err != nil {
return nil, err
}
ret := map[string]string{}
if err = yaml.Unmarshal(content, &ret); err != nil {
return nil, err
}
return ret, nil
}
func readEnvs(path string, envs map[string]string) bool {
if _, err := os.Stat(path); err == nil {
var env map[string]string
if ext := filepath.Ext(path); ext == ".yml" || ext == ".yaml" {
env, err = readYamlFile(path)
} else {
env, err = godotenv.Read(path)
}
env, err := godotenv.Read(path)
if err != nil {
log.Fatalf("Error loading from %s: %v", path, err)
}
@@ -332,36 +295,6 @@ func readEnvs(path string, envs map[string]string) bool {
return false
}
func parseMatrix(matrix []string) map[string]map[string]bool {
// each matrix entry should be of the form - string:string
r := regexp.MustCompile(":")
matrixes := make(map[string]map[string]bool)
for _, m := range matrix {
matrix := r.Split(m, 2)
if len(matrix) < 2 {
log.Fatalf("Invalid matrix format. Failed to parse %s", m)
} else {
if _, ok := matrixes[matrix[0]]; !ok {
matrixes[matrix[0]] = make(map[string]bool)
}
matrixes[matrix[0]][matrix[1]] = true
}
}
return matrixes
}
func isDockerHostURI(daemonPath string) bool {
if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
scheme := daemonPath[:protoIndex]
if strings.IndexFunc(scheme, func(r rune) bool {
return (r < 'a' || r > 'z') && (r < 'A' || r > 'Z')
}) == -1 {
return true
}
}
return false
}
//nolint:gocyclo
func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []string) error {
return func(cmd *cobra.Command, args []string) error {
@@ -373,35 +306,13 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str
return bugReport(ctx, cmd.Version)
}
// Prefer DOCKER_HOST, don't override it
socketPath, hasDockerHost := os.LookupEnv("DOCKER_HOST")
if !hasDockerHost {
// a - in containerDaemonSocket means don't mount, preserve this value
// otherwise if input.containerDaemonSocket is a filepath don't use it as socketPath
skipMount := input.containerDaemonSocket == "-" || !isDockerHostURI(input.containerDaemonSocket)
if input.containerDaemonSocket != "" && !skipMount {
socketPath = input.containerDaemonSocket
} else {
socket, found := socketLocation()
if !found {
log.Errorln("daemon Docker Engine socket not found and containerDaemonSocket option was not set")
} else {
socketPath = socket
}
if !skipMount {
input.containerDaemonSocket = socketPath
}
}
os.Setenv("DOCKER_HOST", socketPath)
}
if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" && input.containerArchitecture == "" {
l := log.New()
l.SetFormatter(&log.TextFormatter{
DisableQuote: true,
DisableTimestamp: true,
})
l.Warnf(" \U000026A0 You are using Apple M-series chip and you have not specified container architecture, you might encounter issues while running act. If so, try running it with '--container-architecture linux/amd64'. \U000026A0 \n")
l.Warnf(" \U000026A0 You are using Apple M1 chip and you have not specified container architecture, you might encounter issues while running act. If so, try running it with '--container-architecture linux/amd64'. \U000026A0 \n")
}
log.Debugf("Loading environment from %s", input.Envfile())
@@ -418,9 +329,6 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str
secrets := newSecrets(input.secrets)
_ = readEnvs(input.Secretfile(), secrets)
matrixes := parseMatrix(input.matrix)
log.Debugf("Evaluated matrix inclusions: %v", matrixes)
planner, err := model.NewWorkflowPlanner(input.WorkflowsPath(), input.noWorkflowRecurse)
if err != nil {
return err
@@ -600,7 +508,6 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str
RemoteName: input.remoteName,
ReplaceGheActionWithGithubCom: input.replaceGheActionWithGithubCom,
ReplaceGheActionTokenWithGithubCom: input.replaceGheActionTokenWithGithubCom,
Matrix: matrixes,
}
r, err := runner.New(config)
if err != nil {
@@ -609,17 +516,6 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str
cancel := artifacts.Serve(ctx, input.artifactServerPath, input.artifactServerAddr, input.artifactServerPort)
const cacheURLKey = "ACTIONS_CACHE_URL"
var cacheHandler *artifactcache.Handler
if !input.noCacheServer && envs[cacheURLKey] == "" {
var err error
cacheHandler, err = artifactcache.StartHandler(input.cacheServerPath, input.cacheServerAddr, input.cacheServerPort, common.Logger(ctx))
if err != nil {
return err
}
envs[cacheURLKey] = cacheHandler.ExternalURL() + "/"
}
ctx = common.WithDryrun(ctx, input.dryrun)
if watch, err := cmd.Flags().GetBool("watch"); err != nil {
return err
@@ -633,7 +529,6 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str
executor := r.NewPlanExecutor(plan).Finally(func(ctx context.Context) error {
cancel()
_ = cacheHandler.Close()
return nil
})
err = executor(ctx)
@@ -688,47 +583,45 @@ func defaultImageSurvey(actrc string) error {
}
func watchAndRun(ctx context.Context, fn common.Executor) error {
recurse := true
checkIntervalInSeconds := 2
dir, err := os.Getwd()
if err != nil {
return err
}
ignoreFile := filepath.Join(dir, ".gitignore")
ignore := &gitignore.GitIgnore{}
if info, err := os.Stat(ignoreFile); err == nil && !info.IsDir() {
ignore, err = gitignore.CompileIgnoreFile(ignoreFile)
if err != nil {
return fmt.Errorf("compile %q: %w", ignoreFile, err)
}
var ignore *gitignore.GitIgnore
if _, err := os.Stat(filepath.Join(dir, ".gitignore")); !os.IsNotExist(err) {
ignore, _ = gitignore.CompileIgnoreFile(filepath.Join(dir, ".gitignore"))
} else {
ignore = &gitignore.GitIgnore{}
}
folderWatcher := fswatch.NewFolderWatcher(
dir,
true,
recurse,
ignore.MatchesPath,
2, // 2 seconds
checkIntervalInSeconds,
)
folderWatcher.Start()
defer folderWatcher.Stop()
// run once before watching
if err := fn(ctx); err != nil {
return err
}
for folderWatcher.IsRunning() {
log.Debugf("Watching %s for changes", dir)
select {
case <-ctx.Done():
return nil
case changes := <-folderWatcher.ChangeDetails():
log.Debugf("%s", changes.String())
if err := fn(ctx); err != nil {
return err
go func() {
for folderWatcher.IsRunning() {
if err = fn(ctx); err != nil {
break
}
log.Debugf("Watching %s for changes", dir)
for changes := range folderWatcher.ChangeDetails() {
log.Debugf("%s", changes.String())
if err = fn(ctx); err != nil {
break
}
log.Debugf("Watching %s for changes", dir)
}
}
}
return nil
}()
<-ctx.Done()
folderWatcher.Stop()
return err
}

52
go.mod
View File

@@ -8,51 +8,49 @@ require (
github.com/adrg/xdg v0.4.0
github.com/andreaskoch/go-fswatch v1.0.0
github.com/creack/pty v1.1.18
github.com/docker/cli v23.0.4+incompatible
github.com/docker/cli v23.0.1+incompatible
github.com/docker/distribution v2.8.1+incompatible
github.com/docker/docker v23.0.4+incompatible
github.com/docker/docker v23.0.1+incompatible
github.com/docker/go-connections v0.4.0
github.com/go-git/go-billy/v5 v5.4.1
github.com/go-git/go-git/v5 v5.6.2-0.20230411180853-ce62f3e9ff86
github.com/imdario/mergo v0.3.15
github.com/go-git/go-git/v5 v5.4.2
github.com/imdario/mergo v0.3.13
github.com/joho/godotenv v1.5.1
github.com/julienschmidt/httprouter v1.3.0
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/mattn/go-isatty v0.0.18
github.com/moby/buildkit v0.11.6
github.com/mattn/go-isatty v0.0.17
github.com/mitchellh/go-homedir v1.1.0
github.com/moby/buildkit v0.11.4
github.com/moby/patternmatcher v0.5.0
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b
github.com/opencontainers/image-spec v1.1.0-rc2
github.com/opencontainers/selinux v1.11.0
github.com/pkg/errors v0.9.1
github.com/rhysd/actionlint v1.6.24
github.com/rhysd/actionlint v1.6.23
github.com/sabhiram/go-gitignore v0.0.0-20210923224102-525f6e181f06
github.com/sirupsen/logrus v1.9.0
github.com/spf13/cobra v1.7.0
github.com/spf13/cobra v1.6.1
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.2
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a
go.etcd.io/bbolt v1.3.7
golang.org/x/term v0.7.0
github.com/stretchr/testify v1.8.1
golang.org/x/term v0.6.0
gopkg.in/yaml.v3 v3.0.1
gotest.tools/v3 v3.4.0
)
require (
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 // indirect
github.com/acomagu/bufpipe v1.0.4 // indirect
github.com/cloudflare/circl v1.1.0 // indirect
github.com/containerd/containerd v1.6.20 // indirect
github.com/ProtonMail/go-crypto v0.0.0-20220404123522-616f957b79ad // indirect
github.com/acomagu/bufpipe v1.0.3 // indirect
github.com/containerd/containerd v1.6.18 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/emirpasic/gods v1.18.1 // indirect
github.com/fatih/color v1.15.0 // indirect
github.com/emirpasic/gods v1.12.0 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/go-git/gcfg v1.5.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/klauspost/compress v1.15.12 // indirect
@@ -63,23 +61,23 @@ require (
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/pjbgf/sha1cd v0.3.0 // indirect
github.com/opencontainers/runc v1.1.3 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/rivo/uniseg v0.4.3 // indirect
github.com/robfig/cron v1.2.0 // indirect
github.com/sergi/go-diff v1.2.0 // indirect
github.com/skeema/knownhosts v1.1.0 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
github.com/xanzy/ssh-agent v0.3.1 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
golang.org/x/crypto v0.6.0 // indirect
golang.org/x/crypto v0.2.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/sys v0.6.0 // indirect
golang.org/x/text v0.7.0 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)
replace github.com/go-git/go-git/v5 => github.com/ZauberNerd/go-git/v5 v5.4.3-0.20220315170230-29ec1bc1e5db

165
go.sum
View File

@@ -5,31 +5,32 @@ github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww=
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/Microsoft/hcsshim v0.9.8 h1:lf7xxK2+Ikbj9sVf2QZsouGjRjEp2STj1yDHgoVtU5k=
github.com/Microsoft/hcsshim v0.9.6 h1:VwnDOgLeoi2du6dAznfmspNqTiwczvjv4K7NxuY9jsY=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 h1:wPbRQzjjwFc0ih8puEVAOFGELsn1zoIIYdxvML7mDxA=
github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g=
github.com/acomagu/bufpipe v1.0.4 h1:e3H4WUzM3npvo5uv95QuJM3cQspFNtFBzvJ2oNjKIDQ=
github.com/acomagu/bufpipe v1.0.4/go.mod h1:mxdxdup/WdsKVreO5GpW4+M/1CE2sMG4jeGJ2sYmHc4=
github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo=
github.com/ProtonMail/go-crypto v0.0.0-20220404123522-616f957b79ad h1:K3cVQxnwoVf5R2XLZknct3+tJWocEuJUmF7ZGwB2FK8=
github.com/ProtonMail/go-crypto v0.0.0-20220404123522-616f957b79ad/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo=
github.com/ZauberNerd/go-git/v5 v5.4.3-0.20220315170230-29ec1bc1e5db h1:b0xyxkCQ0PQEH7gFQ8D+xa9lb+bur6RgVsRBodaqza4=
github.com/ZauberNerd/go-git/v5 v5.4.3-0.20220315170230-29ec1bc1e5db/go.mod h1:U7oc8MDRtQhVD6StooNkBMVsh/Y4J/2Vl36Mo4IclvM=
github.com/acomagu/bufpipe v1.0.3 h1:fxAGrHZTgQ9w5QqVItgzwj235/uYZYgbXitB+dLupOk=
github.com/acomagu/bufpipe v1.0.3/go.mod h1:mxdxdup/WdsKVreO5GpW4+M/1CE2sMG4jeGJ2sYmHc4=
github.com/adrg/xdg v0.4.0 h1:RzRqFcjH4nE5C6oTAxhBtoE2IRyjBSa62SCbyPidvls=
github.com/adrg/xdg v0.4.0/go.mod h1:N6ag73EX4wyxeaoeHctc1mas01KZgsj5tYiAIwqJE/E=
github.com/andreaskoch/go-fswatch v1.0.0 h1:la8nP/HiaFCxP2IM6NZNUCoxgLWuyNFgH0RligBbnJU=
github.com/andreaskoch/go-fswatch v1.0.0/go.mod h1:r5/iV+4jfwoY2sYqBkg8vpF04ehOvEl4qPptVGdxmqo=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239 h1:kFOfPq6dUM1hTo4JG6LR5AXSUEsOjtdm0kw0FtQtMJA=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/bwesterb/go-ristretto v1.2.0/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0=
github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
github.com/cloudflare/circl v1.1.0 h1:bZgT/A+cikZnKIwn7xL2OBj012Bmvho/o6RpRvv3GKY=
github.com/cloudflare/circl v1.1.0/go.mod h1:prBCrKB9DV4poKZY1l9zBXg2QJY7mvgRvtMxxK7fi4I=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/containerd v1.6.20 h1:+itjwpdqXpzHB/QAiWc/BZCjjVfcNgw69w/oIeF4Oy0=
github.com/containerd/containerd v1.6.20/go.mod h1:apei1/i5Ux2FzrK6+DM/suEsGuK/MeVOfy8tR2q7Wnw=
github.com/containerd/containerd v1.6.18 h1:qZbsLvmyu+Vlty0/Ex5xc0z2YtKpIsb5n45mAMI+2Ns=
github.com/containerd/containerd v1.6.18/go.mod h1:1RdCUu95+gc2v9t3IL+zIlpClSmew7/0YS8O5eQZrOw=
github.com/containerd/continuity v0.3.0 h1:nisirsYROK15TAMVukJOUyGJjz4BNQJBVsNvAXZJ/eg=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
@@ -42,12 +43,12 @@ github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxG
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docker/cli v23.0.4+incompatible h1:xClB7PsiATttDHj8ce5qvJcikiApNy7teRR1XkoBZGs=
github.com/docker/cli v23.0.4+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v23.0.1+incompatible h1:LRyWITpGzl2C9e9uGxzisptnxAn1zfZKXy13Ul2Q5oM=
github.com/docker/cli v23.0.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68=
github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v23.0.4+incompatible h1:Kd3Bh9V/rO+XpTP/BLqM+gx8z7+Yb0AA2Ibj+nNo4ek=
github.com/docker/docker v23.0.4+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v23.0.1+incompatible h1:vjgvJZxprTTE1A37nm+CLNAdwu6xZekyoiVlUZEINcY=
github.com/docker/docker v23.0.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A=
github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
@@ -55,22 +56,21 @@ github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5Xh
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ=
github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs=
github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
github.com/emirpasic/gods v1.12.0 h1:QAUIPSaCu4G+POclxeqb3F+WPpdKqFGlw36+yOzGlrg=
github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o=
github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/gliderlabs/ssh v0.3.5 h1:OcaySEmAQJgyYcArR+gGGTHCyE7nvhEMTlYY+Dp8CpY=
github.com/gliderlabs/ssh v0.3.5/go.mod h1:8XB4KraRrX39qHhT6yxPsHedjA08I/uBVwj4xC+/+z4=
github.com/gliderlabs/ssh v0.2.2 h1:6zsha5zo/TWhRhwqCD3+EarCAgZ2yN28ipRnGPnwkI0=
github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/go-git/gcfg v1.5.0 h1:Q5ViNfGF8zFgyJWPqYwA7qGFoMTEiBmdlkcfRmpIMa4=
github.com/go-git/gcfg v1.5.0/go.mod h1:5m20vg6GwYabIxaOonVkTdrILxQMpEShl1xiMF4ua+E=
github.com/go-git/go-billy/v5 v5.3.1/go.mod h1:pmpqyWchKfYfrkb/UVH4otLvyi/5gJlGI4Hb3ZqZ3W0=
github.com/go-git/go-billy/v5 v5.4.1 h1:Uwp5tDRkPr+l/TnbHOQzp+tmJfLceOlbVucgpTz8ix4=
github.com/go-git/go-billy/v5 v5.4.1/go.mod h1:vjbugF6Fz7JIflbVpl1hJsGjSHNltrSw45YK/ukIvQg=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20230305113008-0c11038e723f h1:Pz0DHeFij3XFhoBRGUDPzSJ+w2UcK5/0JvF8DRI58r8=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20230305113008-0c11038e723f/go.mod h1:8LHG1a3SRW71ettAD/jW13h8c6AqjVSeL11RAdgaqpo=
github.com/go-git/go-git/v5 v5.6.2-0.20230411180853-ce62f3e9ff86 h1:vbnwVQwGOr4xwrtcZ1lrhWHPMIgWkhNv+2+smiA2HHk=
github.com/go-git/go-git/v5 v5.6.2-0.20230411180853-ce62f3e9ff86/go.mod h1:Q3/DKr39xeJ3oEAVC8Q1+BlJK3OMsOQsksNb3s+9M1A=
github.com/go-git/go-git-fixtures/v4 v4.3.1 h1:y5z6dd3qi8Hl+stezc8p3JxDkoTRqMAlKnXHuzrfjTQ=
github.com/go-git/go-git-fixtures/v4 v4.3.1/go.mod h1:8LHG1a3SRW71ettAD/jW13h8c6AqjVSeL11RAdgaqpo=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
@@ -86,11 +86,11 @@ github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaU
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec h1:qv2VnGeEQHchGaZ/u7lxST/RaJw+cv273q79D81Xbog=
github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec/go.mod h1:Q48J4R4DvxnHolD5P8pOtXigYlRuPLGl6moFx3ulM68=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk=
github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg=
github.com/imdario/mergo v0.3.15 h1:M8XP7IuFNsqUx6VPK2P9OSmsYsI/YFaGil0uD21V3dM=
github.com/imdario/mergo v0.3.15/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/inconshreveable/mousetrap v1.0.1 h1:U3uMjPSQEBMNp1lFxmllqCPM6P5u/Xq7Pgzkat/bFNc=
github.com/inconshreveable/mousetrap v1.0.1/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4=
@@ -100,6 +100,7 @@ github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4d
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4=
github.com/kevinburke/ssh_config v1.2.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
@@ -117,21 +118,25 @@ github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/matryer/is v1.2.0 h1:92UTHpy8CDwaJ08GqLDzhhuixiBUUD1p3AU6PHddz4A=
github.com/matryer/is v1.2.0/go.mod h1:2fLPjFQM9rhQ15aVEtbuwhJinnOqrmgXPNdZsdwlWXA=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.18 h1:DOKFKCQ7FNG2L1rbrmstDN4QVRdS89Nkh85u68Uwp98=
github.com/mattn/go-isatty v0.0.18/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-runewidth v0.0.14 h1:+xnbZSEeDbOIg5/mE6JF0w6n9duR1l3/WmbinWVwUuU=
github.com/mattn/go-runewidth v0.0.14/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b h1:j7+1HpAFS1zy5+Q4qx1fWh90gTKwiN4QCGoY9TWyyO4=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mmcloughlin/avo v0.5.0/go.mod h1:ChHFdoV7ql95Wi7vuq2YT1bwCJqiWdZrQ1im3VujLYM=
github.com/moby/buildkit v0.11.6 h1:VYNdoKk5TVxN7k4RvZgdeM4GOyRvIi4Z8MXOY7xvyUs=
github.com/moby/buildkit v0.11.6/go.mod h1:GCqKfHhz+pddzfgaR7WmHVEE3nKKZMMDPpK8mh3ZLv4=
github.com/moby/buildkit v0.11.4 h1:mleVHr+n7HUD65QNUkgkT3d8muTzhYUoHE9FM3Ej05s=
github.com/moby/buildkit v0.11.4/go.mod h1:P5Qi041LvCfhkfYBHry+Rwoo3Wi6H971J2ggE+PcIoo=
github.com/moby/patternmatcher v0.5.0 h1:YCZgJOeULcxLw1Q+sVR636pmS7sPEn1Qo2iAN6M7DBo=
github.com/moby/patternmatcher v0.5.0/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
@@ -144,26 +149,24 @@ github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b h1:YWuSjZCQAPM8UUBLkYUk1e+rZcvWHJmFb6i6rM44Xs8=
github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
github.com/opencontainers/runc v1.1.5 h1:L44KXEpKmfWDcS02aeGm8QNTFXTo2D+8MYGDIJ/GDEs=
github.com/opencontainers/runc v1.1.5/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg=
github.com/opencontainers/image-spec v1.1.0-rc2 h1:2zx/Stx4Wc5pIPDvIxHXvXtQFW/7XWJGmnM7r3wg034=
github.com/opencontainers/image-spec v1.1.0-rc2/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
github.com/opencontainers/runc v1.1.3 h1:vIXrkId+0/J2Ymu2m7VjGvbSlAId9XNRPhn2p4b+d8w=
github.com/opencontainers/runc v1.1.3/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU=
github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/pjbgf/sha1cd v0.3.0 h1:4D5XXmUUBUl/xQ6IjCkEAbqXskkq/4O7LmGn0AqMDs4=
github.com/pjbgf/sha1cd v0.3.0/go.mod h1:nZ1rrWOcGJ5uZgEEVL1VUM9iRQiZvWdbZjkKyFzPPsI=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rhysd/actionlint v1.6.24 h1:5f61cF5ssP2pzG0jws5bEsfZBNhbBcO9nl7vTzVKjzs=
github.com/rhysd/actionlint v1.6.24/go.mod h1:gQmz9r2wlcpLy+VdbzK0GINJQnAK5/sNH3BpwW4Mt5I=
github.com/rhysd/actionlint v1.6.23 h1:041VOXgZddfvSJa9Il+WT3Iwuo/j0Nmu4bhpAScrds4=
github.com/rhysd/actionlint v1.6.23/go.mod h1:o5qc1K3I9taGMBhL7mVkpRd64hx3YqI+3t8ewGfYXfE=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.4 h1:8TfxU8dW6PdqD27gjM8MVNuicgxIjxpm4K7x4jp8sis=
github.com/rivo/uniseg v0.4.4/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rivo/uniseg v0.4.3 h1:utMvzDsuh3suAEnhH0RdHmoPbU648o6CvXxTx4SBMOw=
github.com/rivo/uniseg v0.4.3/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/robfig/cron v1.2.0 h1:ZjScXvvxeQ63Dbyxy76Fj3AT3Ut0aKsyd2/tl3DTMuQ=
github.com/robfig/cron v1.2.0/go.mod h1:JGuDeoQd7Z6yL4zQhZ3OPEVHB7fL6Ka6skscFHfmt2k=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
@@ -180,10 +183,8 @@ github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/skeema/knownhosts v1.1.0 h1:Wvr9V0MxhjRbl3f9nMnKnFfiWTJmtECJ9Njkea3ysW0=
github.com/skeema/knownhosts v1.1.0/go.mod h1:sKFq3RD6/TKZkSWn8boUbDC7Qkgcv+8XXijpFO6roag=
github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA=
github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
@@ -199,16 +200,14 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a h1:oIi7H/bwFUYKYhzKbHc+3MvHRWqhQwXVB4LweLMiVy0=
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a/go.mod h1:iSvujNDmpZ6eQX+bg/0X3lF7LEmZ8N77g2a/J/+Zt2U=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM=
github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI1Bc68Uw=
github.com/xanzy/ssh-agent v0.3.1 h1:AmzO1SSWxw73zxFZPRwaMN1MohDw8UyHnmuxyceTEGo=
github.com/xanzy/ssh-agent v0.3.1/go.mod h1:QIE4lCeL7nkC25x+yA3LBIYfwCc1TFziCtG7cBAac6w=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0=
@@ -217,25 +216,15 @@ github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.etcd.io/bbolt v1.3.7 h1:j+zJOnnEjF/kyHlDDgGnVL/AIqIJPq8UoB2GSNfkUfQ=
go.etcd.io/bbolt v1.3.7/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
golang.org/x/arch v0.1.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220525230936-793ad666bf5e/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220826181053-bd7e27e6170d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/crypto v0.6.0 h1:qfktjS5LUO+fFKeJXZ+ikTRijMmljikvG68fpMMruSc=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.2.0 h1:BRXPfhNivWL5Yq0BGQ39a2sW6t44aODpfxkWjYdzewE=
golang.org/x/crypto v0.2.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -243,17 +232,12 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220826154423-83b083e8dc8b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.0.0-20210326060303-6b1517762897/go.mod h1:uSPa2vr4CLtc/ILN5odXGNXS6mhrKVzTaCXzk9m6W3k=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -263,45 +247,32 @@ golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220422013727-9388b58f7150/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220825204002-c680a09ffe64/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210503060354-a79de5458b56/go.mod h1:tfny5GFUkzUvx4ps4ajbZsCe5lw1metzhBm9T3x7oIY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20220722155259-a9ba230a4035/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.7.0 h1:BEvjmm5fURWqcfbSKTdpkDXYBrUS1c0m8agp14W48vQ=
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.6.0 h1:clScbb1cHjoCkyRbWwBEUZ5H/tIFu5TAXIqaZD0Gcjw=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.1.0 h1:xYY+Bajn2a7VBmTM5GikTmnK8ZuX8YgnQCqZpbBNtmA=
@@ -311,8 +282,6 @@ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -328,6 +297,7 @@ gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
@@ -338,4 +308,3 @@ gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o=
gotest.tools/v3 v3.4.0/go.mod h1:CtbdzLSsqVhDgMtKsx03ird5YTGB3ar27v0u/yKBW5g=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=

View File

@@ -1,8 +0,0 @@
// Package artifactcache provides a cache handler for the runner.
//
// Inspired by https://github.com/sp-ricard-valverde/github-act-cache-server
//
// TODO: Authorization
// TODO: Restrictions for accessing a cache, see https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache
// TODO: Force deleting cache entries, see https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#force-deleting-cache-entries
package artifactcache
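
A minimal sketch (not part of this diff) of how a runner process might start this handler and hand its URL to jobs. The import path and the `ACTIONS_CACHE_URL` variable name are assumptions for illustration; the package itself only exposes `StartHandler`, `ExternalURL` and `Close` as shown in the file below.

```go
package main

import (
	"log"

	"github.com/nektos/act/pkg/artifactcache" // assumed import path for this package
)

func main() {
	// Empty dir/IP and port 0 fall back to ~/.cache/actcache, the detected
	// outbound IP and a random port; a nil logger discards log output.
	handler, err := artifactcache.StartHandler("", "", 0, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer handler.Close()

	// A runner would expose this URL to jobs (e.g. via ACTIONS_CACHE_URL,
	// an assumption here) so that actions/cache talks to this local server.
	log.Printf("cache server at %s", handler.ExternalURL())
}
```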

View File

@@ -1,488 +0,0 @@
package artifactcache
import (
"encoding/json"
"errors"
"fmt"
"io"
"net"
"net/http"
"os"
"path/filepath"
"strconv"
"strings"
"sync/atomic"
"time"
"github.com/julienschmidt/httprouter"
"github.com/sirupsen/logrus"
"github.com/timshannon/bolthold"
"go.etcd.io/bbolt"
"github.com/nektos/act/pkg/common"
)
const (
urlBase = "/_apis/artifactcache"
)
type Handler struct {
db *bolthold.Store
storage *Storage
router *httprouter.Router
listener net.Listener
server *http.Server
logger logrus.FieldLogger
gcing int32 // TODO: use atomic.Bool when we can use Go 1.19
gcAt time.Time
outboundIP string
}
func StartHandler(dir, outboundIP string, port uint16, logger logrus.FieldLogger) (*Handler, error) {
h := &Handler{}
if logger == nil {
discard := logrus.New()
discard.Out = io.Discard
logger = discard
}
logger = logger.WithField("module", "artifactcache")
h.logger = logger
if dir == "" {
home, err := os.UserHomeDir()
if err != nil {
return nil, err
}
dir = filepath.Join(home, ".cache", "actcache")
}
if err := os.MkdirAll(dir, 0o755); err != nil {
return nil, err
}
db, err := bolthold.Open(filepath.Join(dir, "bolt.db"), 0o644, &bolthold.Options{
Encoder: json.Marshal,
Decoder: json.Unmarshal,
Options: &bbolt.Options{
Timeout: 5 * time.Second,
NoGrowSync: bbolt.DefaultOptions.NoGrowSync,
FreelistType: bbolt.DefaultOptions.FreelistType,
},
})
if err != nil {
return nil, err
}
h.db = db
storage, err := NewStorage(filepath.Join(dir, "cache"))
if err != nil {
return nil, err
}
h.storage = storage
if outboundIP != "" {
h.outboundIP = outboundIP
} else if ip := common.GetOutboundIP(); ip == nil {
return nil, fmt.Errorf("unable to determine outbound IP address")
} else {
h.outboundIP = ip.String()
}
router := httprouter.New()
router.GET(urlBase+"/cache", h.middleware(h.find))
router.POST(urlBase+"/caches", h.middleware(h.reserve))
router.PATCH(urlBase+"/caches/:id", h.middleware(h.upload))
router.POST(urlBase+"/caches/:id", h.middleware(h.commit))
router.GET(urlBase+"/artifacts/:id", h.middleware(h.get))
router.POST(urlBase+"/clean", h.middleware(h.clean))
h.router = router
h.gcCache()
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", port)) // listen on all interfaces
if err != nil {
return nil, err
}
server := &http.Server{
ReadHeaderTimeout: 2 * time.Second,
Handler: router,
}
go func() {
if err := server.Serve(listener); err != nil && !errors.Is(err, net.ErrClosed) { // log real serve errors, not the expected "closed" one
logger.Errorf("http serve: %v", err)
}
}()
h.listener = listener
h.server = server
return h, nil
}
func (h *Handler) ExternalURL() string {
// TODO: make the external url configurable if necessary
return fmt.Sprintf("http://%s:%d",
h.outboundIP,
h.listener.Addr().(*net.TCPAddr).Port)
}
func (h *Handler) Close() error {
if h == nil {
return nil
}
var retErr error
if h.server != nil {
err := h.server.Close()
if err != nil {
retErr = err
}
h.server = nil
}
if h.listener != nil {
err := h.listener.Close()
if errors.Is(err, net.ErrClosed) {
err = nil
}
if err != nil {
retErr = err
}
h.listener = nil
}
if h.db != nil {
err := h.db.Close()
if err != nil {
retErr = err
}
h.db = nil
}
return retErr
}
// GET /_apis/artifactcache/cache
func (h *Handler) find(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
keys := strings.Split(r.URL.Query().Get("keys"), ",")
// cache keys are case insensitive
for i, key := range keys {
keys[i] = strings.ToLower(key)
}
version := r.URL.Query().Get("version")
cache, err := h.findCache(keys, version)
if err != nil {
h.responseJSON(w, r, 500, err)
return
}
if cache == nil {
h.responseJSON(w, r, 204)
return
}
if ok, err := h.storage.Exist(cache.ID); err != nil {
h.responseJSON(w, r, 500, err)
return
} else if !ok {
_ = h.db.Delete(cache.ID, cache)
h.responseJSON(w, r, 204)
return
}
h.responseJSON(w, r, 200, map[string]any{
"result": "hit",
"archiveLocation": fmt.Sprintf("%s%s/artifacts/%d", h.ExternalURL(), urlBase, cache.ID),
"cacheKey": cache.Key,
})
}
// POST /_apis/artifactcache/caches
func (h *Handler) reserve(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
api := &Request{}
if err := json.NewDecoder(r.Body).Decode(api); err != nil {
h.responseJSON(w, r, 400, err)
return
}
// cache keys are case insensitive
api.Key = strings.ToLower(api.Key)
cache := api.ToCache()
cache.FillKeyVersionHash()
if err := h.db.FindOne(cache, bolthold.Where("KeyVersionHash").Eq(cache.KeyVersionHash)); err != nil {
if !errors.Is(err, bolthold.ErrNotFound) {
h.responseJSON(w, r, 500, err)
return
}
} else {
h.responseJSON(w, r, 400, fmt.Errorf("already exists"))
return
}
now := time.Now().Unix()
cache.CreatedAt = now
cache.UsedAt = now
if err := h.db.Insert(bolthold.NextSequence(), cache); err != nil {
h.responseJSON(w, r, 500, err)
return
}
// write back id to db
if err := h.db.Update(cache.ID, cache); err != nil {
h.responseJSON(w, r, 500, err)
return
}
h.responseJSON(w, r, 200, map[string]any{
"cacheId": cache.ID,
})
}
// PATCH /_apis/artifactcache/caches/:id
func (h *Handler) upload(w http.ResponseWriter, r *http.Request, params httprouter.Params) {
id, err := strconv.ParseInt(params.ByName("id"), 10, 64)
if err != nil {
h.responseJSON(w, r, 400, err)
return
}
cache := &Cache{}
if err := h.db.Get(id, cache); err != nil {
if errors.Is(err, bolthold.ErrNotFound) {
h.responseJSON(w, r, 400, fmt.Errorf("cache %d: not reserved", id))
return
}
h.responseJSON(w, r, 500, err)
return
}
if cache.Complete {
h.responseJSON(w, r, 400, fmt.Errorf("cache %v %q: already complete", cache.ID, cache.Key))
return
}
start, _, err := parseContentRange(r.Header.Get("Content-Range"))
if err != nil {
h.responseJSON(w, r, 400, err)
return
}
if err := h.storage.Write(cache.ID, start, r.Body); err != nil {
h.responseJSON(w, r, 500, err)
}
h.useCache(id)
h.responseJSON(w, r, 200)
}
// POST /_apis/artifactcache/caches/:id
func (h *Handler) commit(w http.ResponseWriter, r *http.Request, params httprouter.Params) {
id, err := strconv.ParseInt(params.ByName("id"), 10, 64)
if err != nil {
h.responseJSON(w, r, 400, err)
return
}
cache := &Cache{}
if err := h.db.Get(id, cache); err != nil {
if errors.Is(err, bolthold.ErrNotFound) {
h.responseJSON(w, r, 400, fmt.Errorf("cache %d: not reserved", id))
return
}
h.responseJSON(w, r, 500, err)
return
}
if cache.Complete {
h.responseJSON(w, r, 400, fmt.Errorf("cache %v %q: already complete", cache.ID, cache.Key))
return
}
if err := h.storage.Commit(cache.ID, cache.Size); err != nil {
h.responseJSON(w, r, 500, err)
return
}
cache.Complete = true
if err := h.db.Update(cache.ID, cache); err != nil {
h.responseJSON(w, r, 500, err)
return
}
h.responseJSON(w, r, 200)
}
// GET /_apis/artifactcache/artifacts/:id
func (h *Handler) get(w http.ResponseWriter, r *http.Request, params httprouter.Params) {
id, err := strconv.ParseInt(params.ByName("id"), 10, 64)
if err != nil {
h.responseJSON(w, r, 400, err)
return
}
h.useCache(id)
h.storage.Serve(w, r, uint64(id))
}
// POST /_apis/artifactcache/clean
func (h *Handler) clean(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
// TODO: force deleting cache entries is not supported yet,
// see: https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#force-deleting-cache-entries
h.responseJSON(w, r, 200)
}
func (h *Handler) middleware(handler httprouter.Handle) httprouter.Handle {
return func(w http.ResponseWriter, r *http.Request, params httprouter.Params) {
h.logger.Debugf("%s %s", r.Method, r.RequestURI)
handler(w, r, params)
go h.gcCache()
}
}
// if not found, return (nil, nil) instead of an error.
func (h *Handler) findCache(keys []string, version string) (*Cache, error) {
if len(keys) == 0 {
return nil, nil
}
key := keys[0] // the first key is for exact match.
cache := &Cache{
Key: key,
Version: version,
}
cache.FillKeyVersionHash()
if err := h.db.FindOne(cache, bolthold.Where("KeyVersionHash").Eq(cache.KeyVersionHash)); err != nil {
if !errors.Is(err, bolthold.ErrNotFound) {
return nil, err
}
} else if cache.Complete {
return cache, nil
}
stop := fmt.Errorf("stop")
for _, prefix := range keys[1:] {
found := false
if err := h.db.ForEach(bolthold.Where("Key").Ge(prefix).And("Version").Eq(version).SortBy("Key"), func(v *Cache) error {
if !strings.HasPrefix(v.Key, prefix) {
return stop
}
if v.Complete {
cache = v
found = true
return stop
}
return nil
}); err != nil {
if !errors.Is(err, stop) {
return nil, err
}
}
if found {
return cache, nil
}
}
return nil, nil
}
func (h *Handler) useCache(id int64) {
cache := &Cache{}
if err := h.db.Get(id, cache); err != nil {
return
}
cache.UsedAt = time.Now().Unix()
_ = h.db.Update(cache.ID, cache)
}
func (h *Handler) gcCache() {
if atomic.LoadInt32(&h.gcing) != 0 {
return
}
if !atomic.CompareAndSwapInt32(&h.gcing, 0, 1) {
return
}
defer atomic.StoreInt32(&h.gcing, 0)
if time.Since(h.gcAt) < time.Hour {
h.logger.Debugf("skip gc: %v", h.gcAt.String())
return
}
h.gcAt = time.Now()
h.logger.Debugf("gc: %v", h.gcAt.String())
const (
keepUsed = 30 * 24 * time.Hour
keepUnused = 7 * 24 * time.Hour
keepTemp = 5 * time.Minute
)
var caches []*Cache
if err := h.db.Find(&caches, bolthold.Where("UsedAt").Lt(time.Now().Add(-keepTemp).Unix())); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
if cache.Complete {
continue
}
h.storage.Remove(cache.ID)
if err := h.db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
caches = caches[:0]
if err := h.db.Find(&caches, bolthold.Where("UsedAt").Lt(time.Now().Add(-keepUnused).Unix())); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
h.storage.Remove(cache.ID)
if err := h.db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
caches = caches[:0]
if err := h.db.Find(&caches, bolthold.Where("CreatedAt").Lt(time.Now().Add(-keepUsed).Unix())); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
h.storage.Remove(cache.ID)
if err := h.db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
}
func (h *Handler) responseJSON(w http.ResponseWriter, r *http.Request, code int, v ...any) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
var data []byte
if len(v) == 0 || v[0] == nil {
data, _ = json.Marshal(struct{}{})
} else if err, ok := v[0].(error); ok {
h.logger.Errorf("%v %v: %v", r.Method, r.RequestURI, err)
data, _ = json.Marshal(map[string]any{
"error": err.Error(),
})
} else {
data, _ = json.Marshal(v[0])
}
w.WriteHeader(code)
_, _ = w.Write(data)
}
func parseContentRange(s string) (int64, int64, error) {
// support the format like "bytes 11-22/*" only
s, _, _ = strings.Cut(strings.TrimPrefix(s, "bytes "), "/")
s1, s2, _ := strings.Cut(s, "-")
start, err := strconv.ParseInt(s1, 10, 64)
if err != nil {
return 0, 0, fmt.Errorf("parse %q: %w", s, err)
}
stop, err := strconv.ParseInt(s2, 10, 64)
if err != nil {
return 0, 0, fmt.Errorf("parse %q: %w", s, err)
}
return start, stop, nil
}
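
As a quick illustration of the Content-Range convention above, here is a small in-package sketch test (not part of the original suite) for `parseContentRange`; the only format the upload handler understands is `bytes <start>-<end>/*`.

```go
package artifactcache

import "testing"

// Sketch test: verifies the "bytes <start>-<end>/*" parsing used by upload.
func TestParseContentRangeSketch(t *testing.T) {
	start, stop, err := parseContentRange("bytes 11-22/*")
	if err != nil || start != 11 || stop != 22 {
		t.Fatalf("got start=%d stop=%d err=%v", start, stop, err)
	}
	if _, _, err := parseContentRange("bytes xx-99/*"); err == nil {
		t.Fatal("want an error for a non-numeric start offset")
	}
}
```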

View File

@@ -1,469 +0,0 @@
package artifactcache
import (
"bytes"
"crypto/rand"
"encoding/json"
"fmt"
"io"
"net/http"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.etcd.io/bbolt"
)
func TestHandler(t *testing.T) {
dir := filepath.Join(t.TempDir(), "artifactcache")
handler, err := StartHandler(dir, "", 0, nil)
require.NoError(t, err)
base := fmt.Sprintf("%s%s", handler.ExternalURL(), urlBase)
defer func() {
t.Run("inpect db", func(t *testing.T) {
require.NoError(t, handler.db.Bolt().View(func(tx *bbolt.Tx) error {
return tx.Bucket([]byte("Cache")).ForEach(func(k, v []byte) error {
t.Logf("%s: %s", k, v)
return nil
})
}))
})
t.Run("close", func(t *testing.T) {
require.NoError(t, handler.Close())
assert.Nil(t, handler.server)
assert.Nil(t, handler.listener)
assert.Nil(t, handler.db)
_, err := http.Post(fmt.Sprintf("%s/caches/%d", base, 1), "", nil)
assert.Error(t, err)
})
}()
t.Run("get not exist", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, key, version))
require.NoError(t, err)
require.Equal(t, 204, resp.StatusCode)
})
t.Run("reserve and upload", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
content := make([]byte, 100)
_, err := rand.Read(content)
require.NoError(t, err)
uploadCacheNormally(t, base, key, version, content)
})
t.Run("clean", func(t *testing.T) {
resp, err := http.Post(fmt.Sprintf("%s/clean", base), "", nil)
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
})
t.Run("reserve with bad request", func(t *testing.T) {
body := []byte(`invalid json`)
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
})
t.Run("duplicate reserve", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
{
body, err := json.Marshal(&Request{
Key: key,
Version: version,
Size: 100,
})
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
got := struct {
CacheID uint64 `json:"cacheId"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
}
{
body, err := json.Marshal(&Request{
Key: key,
Version: version,
Size: 100,
})
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
}
})
t.Run("upload with bad id", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPatch,
fmt.Sprintf("%s/caches/invalid_id", base), bytes.NewReader(nil))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Range", "bytes 0-99/*")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
})
t.Run("upload without reserve", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPatch,
fmt.Sprintf("%s/caches/%d", base, 1000), bytes.NewReader(nil))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Range", "bytes 0-99/*")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
})
t.Run("upload with complete", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
var id uint64
content := make([]byte, 100)
_, err := rand.Read(content)
require.NoError(t, err)
{
body, err := json.Marshal(&Request{
Key: key,
Version: version,
Size: 100,
})
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
got := struct {
CacheID uint64 `json:"cacheId"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
id = got.CacheID
}
{
req, err := http.NewRequest(http.MethodPatch,
fmt.Sprintf("%s/caches/%d", base, id), bytes.NewReader(content))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Range", "bytes 0-99/*")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
}
{
resp, err := http.Post(fmt.Sprintf("%s/caches/%d", base, id), "", nil)
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
}
{
req, err := http.NewRequest(http.MethodPatch,
fmt.Sprintf("%s/caches/%d", base, id), bytes.NewReader(content))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Range", "bytes 0-99/*")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
}
})
t.Run("upload with invalid range", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
var id uint64
content := make([]byte, 100)
_, err := rand.Read(content)
require.NoError(t, err)
{
body, err := json.Marshal(&Request{
Key: key,
Version: version,
Size: 100,
})
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
got := struct {
CacheID uint64 `json:"cacheId"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
id = got.CacheID
}
{
req, err := http.NewRequest(http.MethodPatch,
fmt.Sprintf("%s/caches/%d", base, id), bytes.NewReader(content))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Range", "bytes xx-99/*")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
}
})
t.Run("commit with bad id", func(t *testing.T) {
{
resp, err := http.Post(fmt.Sprintf("%s/caches/invalid_id", base), "", nil)
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
}
})
t.Run("commit with not exist id", func(t *testing.T) {
{
resp, err := http.Post(fmt.Sprintf("%s/caches/%d", base, 100), "", nil)
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
}
})
t.Run("duplicate commit", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
var id uint64
content := make([]byte, 100)
_, err := rand.Read(content)
require.NoError(t, err)
{
body, err := json.Marshal(&Request{
Key: key,
Version: version,
Size: 100,
})
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
got := struct {
CacheID uint64 `json:"cacheId"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
id = got.CacheID
}
{
req, err := http.NewRequest(http.MethodPatch,
fmt.Sprintf("%s/caches/%d", base, id), bytes.NewReader(content))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Range", "bytes 0-99/*")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
}
{
resp, err := http.Post(fmt.Sprintf("%s/caches/%d", base, id), "", nil)
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
}
{
resp, err := http.Post(fmt.Sprintf("%s/caches/%d", base, id), "", nil)
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
}
})
t.Run("commit early", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
var id uint64
content := make([]byte, 100)
_, err := rand.Read(content)
require.NoError(t, err)
{
body, err := json.Marshal(&Request{
Key: key,
Version: version,
Size: 100,
})
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
got := struct {
CacheID uint64 `json:"cacheId"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
id = got.CacheID
}
{
req, err := http.NewRequest(http.MethodPatch,
fmt.Sprintf("%s/caches/%d", base, id), bytes.NewReader(content[:50]))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Range", "bytes 0-59/*")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
}
{
resp, err := http.Post(fmt.Sprintf("%s/caches/%d", base, id), "", nil)
require.NoError(t, err)
assert.Equal(t, 500, resp.StatusCode)
}
})
t.Run("get with bad id", func(t *testing.T) {
resp, err := http.Get(fmt.Sprintf("%s/artifacts/invalid_id", base))
require.NoError(t, err)
require.Equal(t, 400, resp.StatusCode)
})
t.Run("get with not exist id", func(t *testing.T) {
resp, err := http.Get(fmt.Sprintf("%s/artifacts/%d", base, 100))
require.NoError(t, err)
require.Equal(t, 404, resp.StatusCode)
})
t.Run("get with not exist id", func(t *testing.T) {
resp, err := http.Get(fmt.Sprintf("%s/artifacts/%d", base, 100))
require.NoError(t, err)
require.Equal(t, 404, resp.StatusCode)
})
t.Run("get with multiple keys", func(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b",
key + "_a_b_c",
}
contents := [3][]byte{
make([]byte, 100),
make([]byte, 200),
make([]byte, 300),
}
for i := range contents {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
}
reqKeys := strings.Join([]string{
key + "_a_b_x",
key + "_a_b",
key + "_a",
}, ",")
var archiveLocation string
{
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got := struct {
Result string `json:"result"`
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
assert.Equal(t, keys[1], got.CacheKey)
archiveLocation = got.ArchiveLocation
}
{
resp, err := http.Get(archiveLocation) //nolint:gosec
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got, err := io.ReadAll(resp.Body)
require.NoError(t, err)
assert.Equal(t, contents[1], got)
}
})
t.Run("case insensitive", func(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
content := make([]byte, 100)
_, err := rand.Read(content)
require.NoError(t, err)
uploadCacheNormally(t, base, key+"_ABC", version, content)
{
reqKey := key + "_aBc"
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKey, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got := struct {
Result string `json:"result"`
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
assert.Equal(t, key+"_abc", got.CacheKey)
}
})
}
func uploadCacheNormally(t *testing.T, base, key, version string, content []byte) {
var id uint64
{
body, err := json.Marshal(&Request{
Key: key,
Version: version,
Size: int64(len(content)),
})
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
got := struct {
CacheID uint64 `json:"cacheId"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
id = got.CacheID
}
{
req, err := http.NewRequest(http.MethodPatch,
fmt.Sprintf("%s/caches/%d", base, id), bytes.NewReader(content))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Range", "bytes 0-99/*")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
}
{
resp, err := http.Post(fmt.Sprintf("%s/caches/%d", base, id), "", nil)
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
}
var archiveLocation string
{
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, key, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got := struct {
Result string `json:"result"`
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
assert.Equal(t, strings.ToLower(key), got.CacheKey)
archiveLocation = got.ArchiveLocation
}
{
resp, err := http.Get(archiveLocation) //nolint:gosec
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got, err := io.ReadAll(resp.Body)
require.NoError(t, err)
assert.Equal(t, content, got)
}
}

View File

@@ -1,38 +0,0 @@
package artifactcache
import (
"crypto/sha256"
"fmt"
)
type Request struct {
Key string `json:"key" `
Version string `json:"version"`
Size int64 `json:"cacheSize"`
}
func (c *Request) ToCache() *Cache {
if c == nil {
return nil
}
return &Cache{
Key: c.Key,
Version: c.Version,
Size: c.Size,
}
}
type Cache struct {
ID uint64 `json:"id" boltholdKey:"ID"`
Key string `json:"key" boltholdIndex:"Key"`
Version string `json:"version" boltholdIndex:"Version"`
KeyVersionHash string `json:"keyVersionHash" boltholdUnique:"KeyVersionHash"`
Size int64 `json:"cacheSize"`
Complete bool `json:"complete"`
UsedAt int64 `json:"usedAt" boltholdIndex:"UsedAt"`
CreatedAt int64 `json:"createdAt" boltholdIndex:"CreatedAt"`
}
func (c *Cache) FillKeyVersionHash() {
c.KeyVersionHash = fmt.Sprintf("%x", sha256.Sum256([]byte(fmt.Sprintf("%s:%s", c.Key, c.Version))))
}
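
`FillKeyVersionHash` simply hashes `key:version`; the standalone snippet below (illustrative values) reproduces the string that ends up in the `boltholdUnique` index.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	key, version := "linux-primes", "c19da02a" // example values, not from the diff
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%s", key, version)))
	fmt.Printf("%x\n", sum) // same string Cache.KeyVersionHash would hold
}
```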

View File

@@ -1,126 +0,0 @@
package artifactcache
import (
"fmt"
"io"
"net/http"
"os"
"path/filepath"
)
type Storage struct {
rootDir string
}
func NewStorage(rootDir string) (*Storage, error) {
if err := os.MkdirAll(rootDir, 0o755); err != nil {
return nil, err
}
return &Storage{
rootDir: rootDir,
}, nil
}
func (s *Storage) Exist(id uint64) (bool, error) {
name := s.filename(id)
if _, err := os.Stat(name); os.IsNotExist(err) {
return false, nil
} else if err != nil {
return false, err
}
return true, nil
}
func (s *Storage) Write(id uint64, offset int64, reader io.Reader) error {
name := s.tempName(id, offset)
if err := os.MkdirAll(filepath.Dir(name), 0o755); err != nil {
return err
}
file, err := os.Create(name)
if err != nil {
return err
}
defer file.Close()
_, err = io.Copy(file, reader)
return err
}
func (s *Storage) Commit(id uint64, size int64) error {
defer func() {
_ = os.RemoveAll(s.tempDir(id))
}()
name := s.filename(id)
tempNames, err := s.tempNames(id)
if err != nil {
return err
}
if err := os.MkdirAll(filepath.Dir(name), 0o755); err != nil {
return err
}
file, err := os.Create(name)
if err != nil {
return err
}
defer file.Close()
var written int64
for _, v := range tempNames {
f, err := os.Open(v)
if err != nil {
return err
}
n, err := io.Copy(file, f)
_ = f.Close()
if err != nil {
return err
}
written += n
}
if written != size {
_ = file.Close()
_ = os.Remove(name)
return fmt.Errorf("broken file: %v != %v", written, size)
}
return nil
}
func (s *Storage) Serve(w http.ResponseWriter, r *http.Request, id uint64) {
name := s.filename(id)
http.ServeFile(w, r, name)
}
func (s *Storage) Remove(id uint64) {
_ = os.Remove(s.filename(id))
_ = os.RemoveAll(s.tempDir(id))
}
func (s *Storage) filename(id uint64) string {
return filepath.Join(s.rootDir, fmt.Sprintf("%02x", id%0xff), fmt.Sprint(id))
}
func (s *Storage) tempDir(id uint64) string {
return filepath.Join(s.rootDir, "tmp", fmt.Sprint(id))
}
func (s *Storage) tempName(id uint64, offset int64) string {
return filepath.Join(s.tempDir(id), fmt.Sprintf("%016x", offset))
}
func (s *Storage) tempNames(id uint64) ([]string, error) {
dir := s.tempDir(id)
files, err := os.ReadDir(dir)
if err != nil {
return nil, err
}
var names []string
for _, v := range files {
if !v.IsDir() {
names = append(names, filepath.Join(dir, v.Name()))
}
}
return names, nil
}
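
For reference, a standalone sketch of the on-disk layout these helpers produce (the root path is illustrative). Because temp chunks are named by a zero-padded hex offset, `os.ReadDir` returns them in byte order, which is what `Commit` relies on when concatenating them.

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	root := "/tmp/actcache/cache" // illustrative root directory
	id := uint64(300)

	final := filepath.Join(root, fmt.Sprintf("%02x", id%0xff), fmt.Sprint(id))
	chunk := filepath.Join(root, "tmp", fmt.Sprint(id), fmt.Sprintf("%016x", int64(0)))

	fmt.Println(final) // /tmp/actcache/cache/2d/300
	fmt.Println(chunk) // /tmp/actcache/cache/tmp/300/0000000000000000
}
```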

View File

@@ -1,30 +0,0 @@
# Copied from https://github.com/actions/cache#example-cache-workflow
name: Caching Primes
on: push
jobs:
build:
runs-on: ubuntu-latest
steps:
- run: env
- uses: actions/checkout@v3
- name: Cache Primes
id: cache-primes
uses: actions/cache@v3
with:
path: prime-numbers
key: ${{ runner.os }}-primes-${{ github.run_id }}
restore-keys: |
${{ runner.os }}-primes
${{ runner.os }}
- name: Generate Prime Numbers
if: steps.cache-primes.outputs.cache-hit != 'true'
run: cat /proc/sys/kernel/random/uuid > prime-numbers
- name: Use Prime Numbers
run: cat prime-numbers

View File

@@ -25,3 +25,24 @@ func Logger(ctx context.Context) logrus.FieldLogger {
func WithLogger(ctx context.Context, logger logrus.FieldLogger) context.Context {
return context.WithValue(ctx, loggerContextKeyVal, logger)
}
type loggerHookKey string
const loggerHookKeyVal = loggerHookKey("logrus.Hook")
// LoggerHook returns the appropriate logger hook for current context
// the hook affects job logger, not global logger
func LoggerHook(ctx context.Context) logrus.Hook {
val := ctx.Value(loggerHookKeyVal)
if val != nil {
if hook, ok := val.(logrus.Hook); ok {
return hook
}
}
return nil
}
// WithLoggerHook adds a value to the context for the logger hook
func WithLoggerHook(ctx context.Context, hook logrus.Hook) context.Context {
return context.WithValue(ctx, loggerHookKeyVal, hook)
}
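
The two helpers above only stash a `logrus.Hook` in the context. A hedged sketch of how job-level code might use them follows; `replayHook` is a made-up hook for illustration, a real runner would forward entries to its own log sink.

```go
package main

import (
	"context"

	"github.com/nektos/act/pkg/common"
	"github.com/sirupsen/logrus"
)

// replayHook is a hypothetical hook that just collects entries.
type replayHook struct{ entries []*logrus.Entry }

func (h *replayHook) Levels() []logrus.Level { return logrus.AllLevels }
func (h *replayHook) Fire(e *logrus.Entry) error {
	h.entries = append(h.entries, e)
	return nil
}

func main() {
	ctx := common.WithLoggerHook(context.Background(), &replayHook{})

	// Job code retrieves the hook and attaches it to its per-job logger,
	// leaving the global logger untouched.
	if hook := common.LoggerHook(ctx); hook != nil {
		jobLogger := logrus.New()
		jobLogger.AddHook(hook)
		jobLogger.Info("hello from a job logger")
	}
}
```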

View File

@@ -2,74 +2,20 @@ package common
import (
"net"
"sort"
"strings"
log "github.com/sirupsen/logrus"
)
// GetOutboundIP returns an outbound IP address of this machine.
// It tries to access the internet and returns the local IP address of the connection.
// If the machine cannot access the internet, it returns a preferred IP address from network interfaces.
// It returns nil if no IP address is found.
// https://stackoverflow.com/a/37382208
// Get preferred outbound ip of this machine
func GetOutboundIP() net.IP {
// See https://stackoverflow.com/a/37382208
conn, err := net.Dial("udp", "8.8.8.8:80")
if err == nil {
defer conn.Close()
return conn.LocalAddr().(*net.UDPAddr).IP
if err != nil {
log.Fatal(err)
}
defer conn.Close()
// So the machine cannot access the internet. Pick an IP address from network interfaces.
if ifs, err := net.Interfaces(); err == nil {
type IP struct {
net.IP
net.Interface
}
var ips []IP
for _, i := range ifs {
if addrs, err := i.Addrs(); err == nil {
for _, addr := range addrs {
var ip net.IP
switch v := addr.(type) {
case *net.IPNet:
ip = v.IP
case *net.IPAddr:
ip = v.IP
}
if ip.IsGlobalUnicast() {
ips = append(ips, IP{ip, i})
}
}
}
}
if len(ips) > 1 {
sort.Slice(ips, func(i, j int) bool {
ifi := ips[i].Interface
ifj := ips[j].Interface
localAddr := conn.LocalAddr().(*net.UDPAddr)
// ethernet is preferred
if vi, vj := strings.HasPrefix(ifi.Name, "e"), strings.HasPrefix(ifj.Name, "e"); vi != vj {
return vi
}
ipi := ips[i].IP
ipj := ips[j].IP
// IPv4 is preferred
if vi, vj := ipi.To4() != nil, ipj.To4() != nil; vi != vj {
return vi
}
// en0 is preferred to en1
if ifi.Name != ifj.Name {
return ifi.Name < ifj.Name
}
// fallback
return ipi.String() < ipj.String()
})
return ips[0].IP
}
}
return nil
return localAddr.IP
}
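
The hunk above interleaves the removed and added bodies of `GetOutboundIP` without +/- markers, so it reads oddly. For reference, the shorter variant that the `+2,20` side of the hunk header appears to describe boils down to the classic UDP-dial trick; a minimal standalone sketch:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

// Dialing UDP never sends a packet, but it makes the kernel pick the
// preferred outbound interface, whose local address can then be read back.
func main() {
	conn, err := net.Dial("udp", "8.8.8.8:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	localAddr := conn.LocalAddr().(*net.UDPAddr)
	fmt.Println(localAddr.IP)
}
```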

View File

@@ -26,6 +26,9 @@ type NewContainerInput struct {
UsernsMode string
Platform string
Options string
// Gitea specific
AutoRemove bool
}
// FileEntry is a file to copy to a container

View File

@@ -189,6 +189,9 @@ type containerReference struct {
}
func GetDockerClient(ctx context.Context) (cli client.APIClient, err error) {
// TODO: this should maybe be a global option, not hidden in here?
// though i'm not sure how that works out when there's another Executor :D
// I really would like something that works on OSX native for eg
dockerHost := os.Getenv("DOCKER_HOST")
if strings.HasPrefix(dockerHost, "ssh://") {
@@ -345,12 +348,6 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config
return nil, nil, fmt.Errorf("Cannot parse container options: '%s': '%w'", input.Options, err)
}
if len(copts.netMode.Value()) == 0 {
if err = copts.netMode.Set("host"); err != nil {
return nil, nil, fmt.Errorf("Cannot parse networkmode=host. This is an internal error and should not happen: '%w'", err)
}
}
containerConfig, err := parse(flags, copts, "")
if err != nil {
return nil, nil, fmt.Errorf("Cannot process container options: '%s': '%w'", input.Options, err)
@@ -437,6 +434,7 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
NetworkMode: container.NetworkMode(input.NetworkMode),
Privileged: input.Privileged,
UsernsMode: container.UsernsMode(input.UsernsMode),
AutoRemove: input.AutoRemove,
}
logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)

View File

@@ -10,6 +10,4 @@ type ExecutionsEnvironment interface {
DefaultPathVariable() string
JoinPathVariable(...string) string
GetRunnerContext(ctx context.Context) map[string]interface{}
// On windows PATH and Path are the same key
IsEnvironmentCaseInsensitive() bool
}

View File

@@ -419,7 +419,3 @@ func (e *HostEnvironment) ReplaceLogWriter(stdout io.Writer, stderr io.Writer) (
e.StdOut = stdout
return org, org
}
func (*HostEnvironment) IsEnvironmentCaseInsensitive() bool {
return runtime.GOOS == "windows"
}

View File

@@ -71,7 +71,3 @@ func (*LinuxContainerEnvironmentExtensions) GetRunnerContext(ctx context.Context
"tool_cache": "/opt/hostedtoolcache",
}
}
func (*LinuxContainerEnvironmentExtensions) IsEnvironmentCaseInsensitive() bool {
return false
}

pkg/jobparser/evaluator.go (new file, 185 lines)
View File

@@ -0,0 +1,185 @@
package jobparser
import (
"fmt"
"regexp"
"strings"
"github.com/nektos/act/pkg/exprparser"
"gopkg.in/yaml.v3"
)
// ExpressionEvaluator is copied from runner.expressionEvaluator,
// to avoid unnecessary dependencies
type ExpressionEvaluator struct {
interpreter exprparser.Interpreter
}
func NewExpressionEvaluator(interpreter exprparser.Interpreter) *ExpressionEvaluator {
return &ExpressionEvaluator{interpreter: interpreter}
}
func (ee ExpressionEvaluator) evaluate(in string, defaultStatusCheck exprparser.DefaultStatusCheck) (interface{}, error) {
evaluated, err := ee.interpreter.Evaluate(in, defaultStatusCheck)
return evaluated, err
}
func (ee ExpressionEvaluator) evaluateScalarYamlNode(node *yaml.Node) error {
var in string
if err := node.Decode(&in); err != nil {
return err
}
if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
return nil
}
expr, _ := rewriteSubExpression(in, false)
res, err := ee.evaluate(expr, exprparser.DefaultStatusCheckNone)
if err != nil {
return err
}
return node.Encode(res)
}
func (ee ExpressionEvaluator) evaluateMappingYamlNode(node *yaml.Node) error {
// GitHub has an undocumented feature to merge maps, called the insert directive
insertDirective := regexp.MustCompile(`\${{\s*insert\s*}}`)
for i := 0; i < len(node.Content)/2; {
k := node.Content[i*2]
v := node.Content[i*2+1]
if err := ee.EvaluateYamlNode(v); err != nil {
return err
}
var sk string
// Merge the nested map of the insert directive
if k.Decode(&sk) == nil && insertDirective.MatchString(sk) {
node.Content = append(append(node.Content[:i*2], v.Content...), node.Content[(i+1)*2:]...)
i += len(v.Content) / 2
} else {
if err := ee.EvaluateYamlNode(k); err != nil {
return err
}
i++
}
}
return nil
}
func (ee ExpressionEvaluator) evaluateSequenceYamlNode(node *yaml.Node) error {
for i := 0; i < len(node.Content); {
v := node.Content[i]
// Preserve nested sequences
wasseq := v.Kind == yaml.SequenceNode
if err := ee.EvaluateYamlNode(v); err != nil {
return err
}
// GitHub has an undocumented feature to merge sequences / arrays:
// if evaluation produced a nested sequence, merge the arrays
if v.Kind == yaml.SequenceNode && !wasseq {
node.Content = append(append(node.Content[:i], v.Content...), node.Content[i+1:]...)
i += len(v.Content)
} else {
i++
}
}
return nil
}
func (ee ExpressionEvaluator) EvaluateYamlNode(node *yaml.Node) error {
switch node.Kind {
case yaml.ScalarNode:
return ee.evaluateScalarYamlNode(node)
case yaml.MappingNode:
return ee.evaluateMappingYamlNode(node)
case yaml.SequenceNode:
return ee.evaluateSequenceYamlNode(node)
default:
return nil
}
}
func (ee ExpressionEvaluator) Interpolate(in string) string {
if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
return in
}
expr, _ := rewriteSubExpression(in, true)
evaluated, err := ee.evaluate(expr, exprparser.DefaultStatusCheckNone)
if err != nil {
return ""
}
value, ok := evaluated.(string)
if !ok {
panic(fmt.Sprintf("Expression %s did not evaluate to a string", expr))
}
return value
}
func escapeFormatString(in string) string {
return strings.ReplaceAll(strings.ReplaceAll(in, "{", "{{"), "}", "}}")
}
func rewriteSubExpression(in string, forceFormat bool) (string, error) {
if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
return in, nil
}
strPattern := regexp.MustCompile("(?:''|[^'])*'")
pos := 0
exprStart := -1
strStart := -1
var results []string
formatOut := ""
for pos < len(in) {
if strStart > -1 {
matches := strPattern.FindStringIndex(in[pos:])
if matches == nil {
panic("unclosed string.")
}
strStart = -1
pos += matches[1]
} else if exprStart > -1 {
exprEnd := strings.Index(in[pos:], "}}")
strStart = strings.Index(in[pos:], "'")
if exprEnd > -1 && strStart > -1 {
if exprEnd < strStart {
strStart = -1
} else {
exprEnd = -1
}
}
if exprEnd > -1 {
formatOut += fmt.Sprintf("{%d}", len(results))
results = append(results, strings.TrimSpace(in[exprStart:pos+exprEnd]))
pos += exprEnd + 2
exprStart = -1
} else if strStart > -1 {
pos += strStart + 1
} else {
panic("unclosed expression.")
}
} else {
exprStart = strings.Index(in[pos:], "${{")
if exprStart != -1 {
formatOut += escapeFormatString(in[pos : pos+exprStart])
exprStart = pos + exprStart + 3
pos = exprStart
} else {
formatOut += escapeFormatString(in[pos:])
pos = len(in)
}
}
}
if len(results) == 1 && formatOut == "{0}" && !forceFormat {
return in, nil
}
out := fmt.Sprintf("format('%s', %s)", strings.ReplaceAll(formatOut, "'", "''"), strings.Join(results, ", "))
return out, nil
}
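
To make the rewriting above concrete, a sketch test (in-package, not part of the original suite): with `forceFormat=true` a mixed string becomes a single `format()` expression that the exprparser interpreter can evaluate in one go.

```go
package jobparser

import "testing"

// Sketch test: illustrates the format() rewrite of a mixed string.
func TestRewriteSubExpressionSketch(t *testing.T) {
	out, err := rewriteSubExpression("Hello ${{ github.actor }}!", true)
	if err != nil {
		t.Fatal(err)
	}
	if want := "format('Hello {0}!', github.actor)"; out != want {
		t.Fatalf("got %q, want %q", out, want)
	}
}
```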

View File

@@ -0,0 +1,81 @@
package jobparser
import (
"github.com/nektos/act/pkg/exprparser"
"github.com/nektos/act/pkg/model"
"gopkg.in/yaml.v3"
)
// NewInterpeter returns an interpreter used in the server.
// It needs only the github, needs, strategy, matrix and inputs contexts;
// see https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability
func NewInterpeter(
jobID string,
job *model.Job,
matrix map[string]interface{},
gitCtx *model.GithubContext,
results map[string]*JobResult,
) exprparser.Interpreter {
strategy := make(map[string]interface{})
if job.Strategy != nil {
strategy["fail-fast"] = job.Strategy.FailFast
strategy["max-parallel"] = job.Strategy.MaxParallel
}
run := &model.Run{
Workflow: &model.Workflow{
Jobs: map[string]*model.Job{},
},
JobID: jobID,
}
for id, result := range results {
need := yaml.Node{}
_ = need.Encode(result.Needs)
run.Workflow.Jobs[id] = &model.Job{
RawNeeds: need,
Result: result.Result,
Outputs: result.Outputs,
}
}
jobs := run.Workflow.Jobs
jobNeeds := run.Job().Needs()
using := map[string]exprparser.Needs{}
for _, need := range jobNeeds {
if v, ok := jobs[need]; ok {
using[need] = exprparser.Needs{
Outputs: v.Outputs,
Result: v.Result,
}
}
}
ee := &exprparser.EvaluationEnvironment{
Github: gitCtx,
Env: nil, // no need
Job: nil, // no need
Steps: nil, // no need
Runner: nil, // no need
Secrets: nil, // no need
Strategy: strategy,
Matrix: matrix,
Needs: using,
Inputs: nil, // not supported yet
}
config := exprparser.Config{
Run: run,
WorkingDir: "", // WorkingDir is used for the function hashFiles, but it's not needed in the server
Context: "job",
}
return exprparser.NewInterpeter(ee, config)
}
// JobResult is the minimal job result information the interpreter needs
type JobResult struct {
Needs []string
Result string
Outputs map[string]string
}

pkg/jobparser/jobparser.go (new file, 153 lines)
View File

@@ -0,0 +1,153 @@
package jobparser
import (
"bytes"
"fmt"
"sort"
"strings"
"gopkg.in/yaml.v3"
"github.com/nektos/act/pkg/model"
)
func Parse(content []byte, options ...ParseOption) ([]*SingleWorkflow, error) {
origin, err := model.ReadWorkflow(bytes.NewReader(content))
if err != nil {
return nil, fmt.Errorf("model.ReadWorkflow: %w", err)
}
workflow := &SingleWorkflow{}
if err := yaml.Unmarshal(content, workflow); err != nil {
return nil, fmt.Errorf("yaml.Unmarshal: %w", err)
}
pc := &parseContext{}
for _, o := range options {
o(pc)
}
results := map[string]*JobResult{}
for id, job := range origin.Jobs {
results[id] = &JobResult{
Needs: job.Needs(),
Result: pc.jobResults[id],
Outputs: nil, // not supported yet
}
}
var ret []*SingleWorkflow
for id, job := range workflow.Jobs {
for _, matrix := range getMatrixes(origin.GetJob(id)) {
job := job.Clone()
if job.Name == "" {
job.Name = id
}
job.Name = nameWithMatrix(job.Name, matrix)
job.Strategy.RawMatrix = encodeMatrix(matrix)
evaluator := NewExpressionEvaluator(NewInterpeter(id, origin.GetJob(id), matrix, pc.gitContext, results))
runsOn := origin.GetJob(id).RunsOn()
for i, v := range runsOn {
runsOn[i] = evaluator.Interpolate(v)
}
job.RawRunsOn = encodeRunsOn(runsOn)
job.EraseNeeds() // there will be only one job in SingleWorkflow, it cannot have needs
ret = append(ret, &SingleWorkflow{
Name: workflow.Name,
RawOn: workflow.RawOn,
Env: workflow.Env,
Jobs: map[string]*Job{id: job},
Defaults: workflow.Defaults,
})
}
}
sortWorkflows(ret)
return ret, nil
}
func WithJobResults(results map[string]string) ParseOption {
return func(c *parseContext) {
c.jobResults = results
}
}
func WithGitContext(context *model.GithubContext) ParseOption {
return func(c *parseContext) {
c.gitContext = context
}
}
type parseContext struct {
jobResults map[string]string
gitContext *model.GithubContext
}
type ParseOption func(c *parseContext)
func getMatrixes(job *model.Job) []map[string]interface{} {
ret := job.GetMatrixes()
sort.Slice(ret, func(i, j int) bool {
return matrixName(ret[i]) < matrixName(ret[j])
})
return ret
}
func encodeMatrix(matrix map[string]interface{}) yaml.Node {
if len(matrix) == 0 {
return yaml.Node{}
}
value := map[string][]interface{}{}
for k, v := range matrix {
value[k] = []interface{}{v}
}
node := yaml.Node{}
_ = node.Encode(value)
return node
}
func encodeRunsOn(runsOn []string) yaml.Node {
node := yaml.Node{}
if len(runsOn) == 1 {
_ = node.Encode(runsOn[0])
} else {
_ = node.Encode(runsOn)
}
return node
}
func nameWithMatrix(name string, m map[string]interface{}) string {
if len(m) == 0 {
return name
}
return name + " " + matrixName(m)
}
func matrixName(m map[string]interface{}) string {
ks := make([]string, 0, len(m))
for k := range m {
ks = append(ks, k)
}
sort.Strings(ks)
vs := make([]string, 0, len(m))
for _, v := range ks {
vs = append(vs, fmt.Sprint(m[v]))
}
return fmt.Sprintf("(%s)", strings.Join(vs, ", "))
}
func sortWorkflows(wfs []*SingleWorkflow) {
sort.Slice(wfs, func(i, j int) bool {
ki := ""
for k := range wfs[i].Jobs {
ki = k
break
}
kj := ""
for k := range wfs[j].Jobs {
kj = k
break
}
return ki < kj
})
}
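
A hedged usage sketch of `Parse`: a matrix workflow goes in, one `SingleWorkflow` per job/matrix combination comes out, with `runs-on` already interpolated and the job name suffixed via `nameWithMatrix`. The YAML content and printed output are illustrative; when expressions reference the github context, the `WithGitContext` option would also be needed.

```go
package main

import (
	"fmt"
	"log"

	"github.com/nektos/act/pkg/jobparser"
)

func main() {
	content := []byte(`
name: test
on: push
jobs:
  job1:
    strategy:
      matrix:
        os: [ubuntu-22.04, ubuntu-20.04]
    runs-on: ${{ matrix.os }}
    steps:
      - run: uname -a
`)
	workflows, err := jobparser.Parse(content)
	if err != nil {
		log.Fatal(err)
	}
	for _, wf := range workflows {
		id, job := wf.Job()
		fmt.Printf("%s -> %s\n", id, job.Name) // e.g. job1 -> job1 (ubuntu-20.04)
	}
}
```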

View File

@@ -0,0 +1,65 @@
package jobparser
import (
"embed"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
)
//go:embed testdata
var f embed.FS
func TestParse(t *testing.T) {
tests := []struct {
name string
options []ParseOption
wantErr bool
}{
{
name: "multiple_jobs",
options: nil,
wantErr: false,
},
{
name: "multiple_matrix",
options: nil,
wantErr: false,
},
{
name: "has_needs",
options: nil,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
content, err := f.ReadFile(filepath.Join("testdata", tt.name+".in.yaml"))
require.NoError(t, err)
want, err := f.ReadFile(filepath.Join("testdata", tt.name+".out.yaml"))
require.NoError(t, err)
got, err := Parse(content, tt.options...)
if tt.wantErr {
require.Error(t, err)
}
require.NoError(t, err)
builder := &strings.Builder{}
for _, v := range got {
if builder.Len() > 0 {
builder.WriteString("---\n")
}
encoder := yaml.NewEncoder(builder)
encoder.SetIndent(2)
_ = encoder.Encode(v)
}
assert.Equal(t, string(want), builder.String())
})
}
}

pkg/jobparser/model.go (new file, 207 lines)
View File

@@ -0,0 +1,207 @@
package jobparser
import (
"fmt"
"github.com/nektos/act/pkg/model"
"gopkg.in/yaml.v3"
)
// SingleWorkflow is a workflow with a single job and a single matrix
type SingleWorkflow struct {
Name string `yaml:"name,omitempty"`
RawOn yaml.Node `yaml:"on,omitempty"`
Env map[string]string `yaml:"env,omitempty"`
Jobs map[string]*Job `yaml:"jobs,omitempty"`
Defaults Defaults `yaml:"defaults,omitempty"`
}
func (w *SingleWorkflow) Job() (string, *Job) {
for k, v := range w.Jobs {
return k, v
}
return "", nil
}
func (w *SingleWorkflow) Marshal() ([]byte, error) {
return yaml.Marshal(w)
}
type Job struct {
Name string `yaml:"name,omitempty"`
RawNeeds yaml.Node `yaml:"needs,omitempty"`
RawRunsOn yaml.Node `yaml:"runs-on,omitempty"`
Env yaml.Node `yaml:"env,omitempty"`
If yaml.Node `yaml:"if,omitempty"`
Steps []*Step `yaml:"steps,omitempty"`
TimeoutMinutes string `yaml:"timeout-minutes,omitempty"`
Services map[string]*ContainerSpec `yaml:"services,omitempty"`
Strategy Strategy `yaml:"strategy,omitempty"`
RawContainer yaml.Node `yaml:"container,omitempty"`
Defaults Defaults `yaml:"defaults,omitempty"`
Outputs map[string]string `yaml:"outputs,omitempty"`
Uses string `yaml:"uses,omitempty"`
}
func (j *Job) Clone() *Job {
if j == nil {
return nil
}
return &Job{
Name: j.Name,
RawNeeds: j.RawNeeds,
RawRunsOn: j.RawRunsOn,
Env: j.Env,
If: j.If,
Steps: j.Steps,
TimeoutMinutes: j.TimeoutMinutes,
Services: j.Services,
Strategy: j.Strategy,
RawContainer: j.RawContainer,
Defaults: j.Defaults,
Outputs: j.Outputs,
Uses: j.Uses,
}
}
func (j *Job) Needs() []string {
return (&model.Job{RawNeeds: j.RawNeeds}).Needs()
}
func (j *Job) EraseNeeds() {
j.RawNeeds = yaml.Node{}
}
func (j *Job) RunsOn() []string {
return (&model.Job{RawRunsOn: j.RawRunsOn}).RunsOn()
}
type Step struct {
ID string `yaml:"id,omitempty"`
If yaml.Node `yaml:"if,omitempty"`
Name string `yaml:"name,omitempty"`
Uses string `yaml:"uses,omitempty"`
Run string `yaml:"run,omitempty"`
WorkingDirectory string `yaml:"working-directory,omitempty"`
Shell string `yaml:"shell,omitempty"`
Env yaml.Node `yaml:"env,omitempty"`
With map[string]string `yaml:"with,omitempty"`
ContinueOnError bool `yaml:"continue-on-error,omitempty"`
TimeoutMinutes string `yaml:"timeout-minutes,omitempty"`
}
// String gets the name of the step
func (s *Step) String() string {
return (&model.Step{
ID: s.ID,
Name: s.Name,
Uses: s.Uses,
Run: s.Run,
}).String()
}
type ContainerSpec struct {
Image string `yaml:"image,omitempty"`
Env map[string]string `yaml:"env,omitempty"`
Ports []string `yaml:"ports,omitempty"`
Volumes []string `yaml:"volumes,omitempty"`
Options string `yaml:"options,omitempty"`
Credentials map[string]string `yaml:"credentials,omitempty"`
}
type Strategy struct {
FailFastString string `yaml:"fail-fast,omitempty"`
MaxParallelString string `yaml:"max-parallel,omitempty"`
RawMatrix yaml.Node `yaml:"matrix,omitempty"`
}
type Defaults struct {
Run RunDefaults `yaml:"run,omitempty"`
}
type RunDefaults struct {
Shell string `yaml:"shell,omitempty"`
WorkingDirectory string `yaml:"working-directory,omitempty"`
}
type Event struct {
Name string
Acts map[string][]string
}
func ParseRawOn(rawOn *yaml.Node) ([]*Event, error) {
switch rawOn.Kind {
case yaml.ScalarNode:
var val string
err := rawOn.Decode(&val)
if err != nil {
return nil, err
}
return []*Event{
{Name: val},
}, nil
case yaml.SequenceNode:
var val []interface{}
err := rawOn.Decode(&val)
if err != nil {
return nil, err
}
res := make([]*Event, 0, len(val))
for _, v := range val {
switch t := v.(type) {
case string:
res = append(res, &Event{Name: t})
default:
return nil, fmt.Errorf("invalid type %T", t)
}
}
return res, nil
case yaml.MappingNode:
var val map[string]interface{}
err := rawOn.Decode(&val)
if err != nil {
return nil, err
}
res := make([]*Event, 0, len(val))
for k, v := range val {
switch t := v.(type) {
case string:
res = append(res, &Event{
Name: k,
Acts: map[string][]string{},
})
case []string:
res = append(res, &Event{
Name: k,
Acts: map[string][]string{},
})
case map[string]interface{}:
acts := make(map[string][]string, len(t))
for act, branches := range t {
switch b := branches.(type) {
case string:
acts[act] = []string{b}
case []string:
acts[act] = b
case []interface{}:
acts[act] = make([]string, len(b))
for i, v := range b {
acts[act][i] = v.(string)
}
default:
return nil, fmt.Errorf("unknown on type: %#v", branches)
}
}
res = append(res, &Event{
Name: k,
Acts: acts,
})
default:
return nil, fmt.Errorf("unknown on type: %#v", v)
}
}
return res, nil
default:
return nil, fmt.Errorf("unknown on type: %v", rawOn.Kind)
}
}
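
And a sketch of feeding a workflow's raw `on:` node to `ParseRawOn`, which is how a caller can learn the triggering events and their activity filters without running the workflow. The inline YAML is an illustrative input, mirroring the test cases in the next file.

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/nektos/act/pkg/jobparser"
	"github.com/nektos/act/pkg/model"
)

func main() {
	wf, err := model.ReadWorkflow(strings.NewReader("on:\n  push:\n    branches: [main]"))
	if err != nil {
		log.Fatal(err)
	}
	events, err := jobparser.ParseRawOn(&wf.RawOn)
	if err != nil {
		log.Fatal(err)
	}
	for _, ev := range events {
		fmt.Println(ev.Name, ev.Acts) // push map[branches:[main]]
	}
}
```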

pkg/jobparser/model_test.go (new file, 184 lines)
View File

@@ -0,0 +1,184 @@
package jobparser
import (
"fmt"
"strings"
"testing"
"github.com/nektos/act/pkg/model"
"github.com/stretchr/testify/assert"
)
func TestParseRawOn(t *testing.T) {
kases := []struct {
input string
result []*Event
}{
{
input: "on: issue_comment",
result: []*Event{
{
Name: "issue_comment",
},
},
},
{
input: "on:\n push",
result: []*Event{
{
Name: "push",
},
},
},
{
input: "on:\n - push\n - pull_request",
result: []*Event{
{
Name: "push",
},
{
Name: "pull_request",
},
},
},
{
input: "on:\n push:\n branches:\n - master",
result: []*Event{
{
Name: "push",
Acts: map[string][]string{
"branches": {
"master",
},
},
},
},
},
{
input: "on:\n branch_protection_rule:\n types: [created, deleted]",
result: []*Event{
{
Name: "branch_protection_rule",
Acts: map[string][]string{
"types": {
"created",
"deleted",
},
},
},
},
},
{
input: "on:\n project:\n types: [created, deleted]\n milestone:\n types: [opened, deleted]",
result: []*Event{
{
Name: "project",
Acts: map[string][]string{
"types": {
"created",
"deleted",
},
},
},
{
Name: "milestone",
Acts: map[string][]string{
"types": {
"opened",
"deleted",
},
},
},
},
},
{
input: "on:\n pull_request:\n types:\n - opened\n branches:\n - 'releases/**'",
result: []*Event{
{
Name: "pull_request",
Acts: map[string][]string{
"types": {
"opened",
},
"branches": {
"releases/**",
},
},
},
},
},
{
input: "on:\n push:\n branches:\n - main\n pull_request:\n types:\n - opened\n branches:\n - '**'",
result: []*Event{
{
Name: "push",
Acts: map[string][]string{
"branches": {
"main",
},
},
},
{
Name: "pull_request",
Acts: map[string][]string{
"types": {
"opened",
},
"branches": {
"**",
},
},
},
},
},
{
input: "on:\n push:\n branches:\n - 'main'\n - 'releases/**'",
result: []*Event{
{
Name: "push",
Acts: map[string][]string{
"branches": {
"main",
"releases/**",
},
},
},
},
},
{
input: "on:\n push:\n tags:\n - v1.**",
result: []*Event{
{
Name: "push",
Acts: map[string][]string{
"tags": {
"v1.**",
},
},
},
},
},
{
input: "on: [pull_request, workflow_dispatch]",
result: []*Event{
{
Name: "pull_request",
},
{
Name: "workflow_dispatch",
},
},
},
}
for _, kase := range kases {
t.Run(kase.input, func(t *testing.T) {
origin, err := model.ReadWorkflow(strings.NewReader(kase.input))
assert.NoError(t, err)
events, err := ParseRawOn(&origin.RawOn)
assert.NoError(t, err)
assert.EqualValues(t, kase.result, events, fmt.Sprintf("%#v", events))
})
}
}

View File

@@ -0,0 +1,16 @@
name: test
jobs:
job1:
runs-on: linux
steps:
- run: uname -a
job2:
runs-on: linux
steps:
- run: uname -a
needs: job1
job3:
runs-on: linux
steps:
- run: uname -a
needs: [job1, job2]

View File

@@ -0,0 +1,23 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
steps:
- run: uname -a
---
name: test
jobs:
job2:
name: job2
runs-on: linux
steps:
- run: uname -a
---
name: test
jobs:
job3:
name: job3
runs-on: linux
steps:
- run: uname -a

View File

@@ -0,0 +1,14 @@
name: test
jobs:
job1:
runs-on: linux
steps:
- run: uname -a && go version
job2:
runs-on: linux
steps:
- run: uname -a && go version
job3:
runs-on: linux
steps:
- run: uname -a && go version

View File

@@ -0,0 +1,23 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
steps:
- run: uname -a && go version
---
name: test
jobs:
job2:
name: job2
runs-on: linux
steps:
- run: uname -a && go version
---
name: test
jobs:
job3:
name: job3
runs-on: linux
steps:
- run: uname -a && go version

View File

@@ -0,0 +1,13 @@
name: test
jobs:
job1:
strategy:
matrix:
os: [ubuntu-22.04, ubuntu-20.04]
version: [1.17, 1.18, 1.19]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version

View File

@@ -0,0 +1,101 @@
name: test
jobs:
job1:
name: job1 (ubuntu-20.04, 1.17)
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.17
---
name: test
jobs:
job1:
name: job1 (ubuntu-20.04, 1.18)
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.18
---
name: test
jobs:
job1:
name: job1 (ubuntu-20.04, 1.19)
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.19
---
name: test
jobs:
job1:
name: job1 (ubuntu-22.04, 1.17)
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.17
---
name: test
jobs:
job1:
name: job1 (ubuntu-22.04, 1.18)
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.18
---
name: test
jobs:
job1:
name: job1 (ubuntu-22.04, 1.19)
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.19

View File

@@ -20,7 +20,7 @@ func (a *ActionRunsUsing) UnmarshalYAML(unmarshal func(interface{}) error) error
// Force input to lowercase for case insensitive comparison
format := ActionRunsUsing(strings.ToLower(using))
switch format {
case ActionRunsUsingNode16, ActionRunsUsingNode12, ActionRunsUsingDocker, ActionRunsUsingComposite:
case ActionRunsUsingNode16, ActionRunsUsingNode12, ActionRunsUsingDocker, ActionRunsUsingComposite, ActionRunsUsingGo:
*a = format
default:
return fmt.Errorf(fmt.Sprintf("The runs.using key in action.yml must be one of: %v, got %s", []string{
@@ -28,6 +28,7 @@ func (a *ActionRunsUsing) UnmarshalYAML(unmarshal func(interface{}) error) error
ActionRunsUsingDocker,
ActionRunsUsingNode12,
ActionRunsUsingNode16,
ActionRunsUsingGo,
}, format))
}
return nil
@@ -42,6 +43,8 @@ const (
ActionRunsUsingDocker = "docker"
// ActionRunsUsingComposite for running composite
ActionRunsUsingComposite = "composite"
// ActionRunsUsingGo for running with go
ActionRunsUsingGo = "go"
)
// ActionRuns are a field in Action
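
With the constant in place, an action manifest declaring `runs.using: go` unmarshals without error. A minimal sketch, assuming the manifest is decoded with `gopkg.in/yaml.v3` (as the rest of the model package does) and the fork's `github.com/nektos/act` module path; the manifest text is illustrative:

```go
package main

import (
	"fmt"

	"github.com/nektos/act/pkg/model"
	"gopkg.in/yaml.v3"
)

func main() {
	manifest := []byte("runs:\n  using: go\n  main: main.go\n")
	var action model.Action
	if err := yaml.Unmarshal(manifest, &action); err != nil {
		panic(err) // "go" would have been rejected here before this change
	}
	fmt.Println(action.Runs.Using == model.ActionRunsUsingGo) // true
}
```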

View File

@@ -36,9 +36,6 @@ type GithubContext struct {
RetentionDays string `json:"retention_days"`
RunnerPerflog string `json:"runner_perflog"`
RunnerTrackingID string `json:"runner_tracking_id"`
ServerURL string `json:"server_url"`
APIURL string `json:"api_url"`
GraphQLURL string `json:"graphql_url"`
}
func asString(v interface{}) string {

View File

@@ -164,6 +164,13 @@ func NewWorkflowPlanner(path string, noWorkflowRecurse bool) (WorkflowPlanner, e
return wp, nil
}
// CombineWorkflowPlanner combines workflows to a WorkflowPlanner
func CombineWorkflowPlanner(workflows ...*Workflow) WorkflowPlanner {
return &workflowPlanner{
workflows: workflows,
}
}
type workflowPlanner struct {
workflows []*Workflow
}

View File

@@ -58,14 +58,39 @@ func (w *Workflow) On() []string {
func (w *Workflow) OnEvent(event string) interface{} {
if w.RawOn.Kind == yaml.MappingNode {
var val map[string]interface{}
if !decodeNode(w.RawOn, &val) {
return nil
err := w.RawOn.Decode(&val)
if err != nil {
log.Fatal(err)
}
return val[event]
}
return nil
}
func (w *Workflow) OnSchedule() []string {
schedules := w.OnEvent("schedule")
if schedules == nil {
return []string{}
}
switch val := schedules.(type) {
case []interface{}:
allSchedules := []string{}
for _, v := range val {
for k, cron := range v.(map[string]interface{}) {
if k != "cron" {
continue
}
allSchedules = append(allSchedules, cron.(string))
}
}
return allSchedules
default:
}
return []string{}
}
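
OnSchedule only recognizes the list form of `schedule:`, i.e. a YAML sequence whose entries each carry a `cron` key; any other shape yields an empty slice. A small sketch of the decoded Go value it walks for a two-entry schedule (values illustrative):

```go
package main

import "fmt"

func main() {
	// What OnEvent("schedule") returns for a workflow with two cron entries;
	// OnSchedule iterates exactly this []interface{} of map[string]interface{}.
	schedules := []interface{}{
		map[string]interface{}{"cron": "30 5 * * 1,3"},
		map[string]interface{}{"cron": "30 5 * * 2,4"},
	}
	crons := []string{}
	for _, v := range schedules {
		if m, ok := v.(map[string]interface{}); ok {
			if cron, ok := m["cron"].(string); ok {
				crons = append(crons, cron)
			}
		}
	}
	fmt.Println(crons) // [30 5 * * 1,3 30 5 * * 2,4]
}
```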
type WorkflowDispatchInput struct {
Description string `yaml:"description"`
Required bool `yaml:"required"`
@@ -84,14 +109,16 @@ func (w *Workflow) WorkflowDispatchConfig() *WorkflowDispatch {
}
var val map[string]yaml.Node
if !decodeNode(w.RawOn, &val) {
return nil
err := w.RawOn.Decode(&val)
if err != nil {
log.Fatal(err)
}
var config WorkflowDispatch
node := val["workflow_dispatch"]
if !decodeNode(node, &config) {
return nil
err = node.Decode(&config)
if err != nil {
log.Fatal(err)
}
return &config
@@ -120,19 +147,20 @@ type WorkflowCallResult struct {
func (w *Workflow) WorkflowCallConfig() *WorkflowCall {
if w.RawOn.Kind != yaml.MappingNode {
// The callers expect for "on: workflow_call" and "on: [ workflow_call ]" a non nil return value
return &WorkflowCall{}
return nil
}
var val map[string]yaml.Node
if !decodeNode(w.RawOn, &val) {
return &WorkflowCall{}
err := w.RawOn.Decode(&val)
if err != nil {
log.Fatal(err)
}
var config WorkflowCall
node := val["workflow_call"]
if !decodeNode(node, &config) {
return &WorkflowCall{}
err = node.Decode(&config)
if err != nil {
log.Fatal(err)
}
return &config
@@ -215,8 +243,9 @@ func (j *Job) InheritSecrets() bool {
}
var val string
if !decodeNode(j.RawSecrets, &val) {
return false
err := j.RawSecrets.Decode(&val)
if err != nil {
log.Fatal(err)
}
return val == "inherit"
@@ -228,8 +257,9 @@ func (j *Job) Secrets() map[string]string {
}
var val map[string]string
if !decodeNode(j.RawSecrets, &val) {
return nil
err := j.RawSecrets.Decode(&val)
if err != nil {
log.Fatal(err)
}
return val
@@ -241,13 +271,15 @@ func (j *Job) Container() *ContainerSpec {
switch j.RawContainer.Kind {
case yaml.ScalarNode:
val = new(ContainerSpec)
if !decodeNode(j.RawContainer, &val.Image) {
return nil
err := j.RawContainer.Decode(&val.Image)
if err != nil {
log.Fatal(err)
}
case yaml.MappingNode:
val = new(ContainerSpec)
if !decodeNode(j.RawContainer, val) {
return nil
err := j.RawContainer.Decode(val)
if err != nil {
log.Fatal(err)
}
}
return val
@@ -258,14 +290,16 @@ func (j *Job) Needs() []string {
switch j.RawNeeds.Kind {
case yaml.ScalarNode:
var val string
if !decodeNode(j.RawNeeds, &val) {
return nil
err := j.RawNeeds.Decode(&val)
if err != nil {
log.Fatal(err)
}
return []string{val}
case yaml.SequenceNode:
var val []string
if !decodeNode(j.RawNeeds, &val) {
return nil
err := j.RawNeeds.Decode(&val)
if err != nil {
log.Fatal(err)
}
return val
}
@@ -277,14 +311,16 @@ func (j *Job) RunsOn() []string {
switch j.RawRunsOn.Kind {
case yaml.ScalarNode:
var val string
if !decodeNode(j.RawRunsOn, &val) {
return nil
err := j.RawRunsOn.Decode(&val)
if err != nil {
log.Fatal(err)
}
return []string{val}
case yaml.SequenceNode:
var val []string
if !decodeNode(j.RawRunsOn, &val) {
return nil
err := j.RawRunsOn.Decode(&val)
if err != nil {
log.Fatal(err)
}
return val
}
@@ -294,8 +330,8 @@ func (j *Job) RunsOn() []string {
func environment(yml yaml.Node) map[string]string {
env := make(map[string]string)
if yml.Kind == yaml.MappingNode {
if !decodeNode(yml, &env) {
return nil
if err := yml.Decode(&env); err != nil {
log.Fatal(err)
}
}
return env
@@ -310,8 +346,8 @@ func (j *Job) Environment() map[string]string {
func (j *Job) Matrix() map[string][]interface{} {
if j.Strategy.RawMatrix.Kind == yaml.MappingNode {
var val map[string][]interface{}
if !decodeNode(j.Strategy.RawMatrix, &val) {
return nil
if err := j.Strategy.RawMatrix.Decode(&val); err != nil {
log.Fatal(err)
}
return val
}
@@ -322,7 +358,7 @@ func (j *Job) Matrix() map[string][]interface{} {
// It skips includes and hard fails excludes for non-existing keys
//
//nolint:gocyclo
func (j *Job) GetMatrixes() ([]map[string]interface{}, error) {
func (j *Job) GetMatrixes() []map[string]interface{} {
matrixes := make([]map[string]interface{}, 0)
if j.Strategy != nil {
j.Strategy.FailFast = j.Strategy.GetFailFast()
@@ -373,7 +409,7 @@ func (j *Job) GetMatrixes() ([]map[string]interface{}, error) {
excludes = append(excludes, e)
} else {
// We fail completely here because that's what GitHub does for non-existing matrix keys, fail on exclude, silent skip on include
return nil, fmt.Errorf("the workflow is not valid. Matrix exclude key %q does not match any key within the matrix", k)
log.Fatalf("The workflow is not valid. Matrix exclude key '%s' does not match any key within the matrix", k)
}
}
}
@@ -418,7 +454,7 @@ func (j *Job) GetMatrixes() ([]map[string]interface{}, error) {
} else {
matrixes = append(matrixes, make(map[string]interface{}))
}
return matrixes, nil
return matrixes
}
func commonKeysMatch(a map[string]interface{}, b map[string]interface{}) bool {
@@ -492,6 +528,7 @@ type ContainerSpec struct {
// Step is the structure of one step in a job
type Step struct {
Number int `yaml:"-"`
ID string `yaml:"id"`
If yaml.Node `yaml:"if"`
Name string `yaml:"name"`
@@ -538,7 +575,7 @@ func (s *Step) GetEnv() map[string]string {
func (s *Step) ShellCommand() string {
shellCommand := ""
//Reference: https://github.com/actions/runner/blob/8109c962f09d9acc473d92c595ff43afceddb347/src/Runner.Worker/Handlers/ScriptHandlerHelpers.cs#L9-L17
// Reference: https://github.com/actions/runner/blob/8109c962f09d9acc473d92c595ff43afceddb347/src/Runner.Worker/Handlers/ScriptHandlerHelpers.cs#L9-L17
switch s.Shell {
case "", "bash":
shellCommand = "bash --noprofile --norc -e -o pipefail {0}"
@@ -658,17 +695,3 @@ func (w *Workflow) GetJobIDs() []string {
}
return ids
}
var OnDecodeNodeError = func(node yaml.Node, out interface{}, err error) {
log.Fatalf("Failed to decode node %v into %T: %v", node, out, err)
}
func decodeNode(node yaml.Node, out interface{}) bool {
if err := node.Decode(out); err != nil {
if OnDecodeNodeError != nil {
OnDecodeNodeError(node, out, err)
}
return false
}
return true
}

View File

@@ -7,6 +7,88 @@ import (
"github.com/stretchr/testify/assert"
)
func TestReadWorkflow_ScheduleEvent(t *testing.T) {
yaml := `
name: local-action-docker-url
on:
schedule:
- cron: '30 5 * * 1,3'
- cron: '30 5 * * 2,4'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: ./actions/docker-url
`
workflow, err := ReadWorkflow(strings.NewReader(yaml))
assert.NoError(t, err, "read workflow should succeed")
schedules := workflow.OnEvent("schedule")
assert.Len(t, schedules, 2)
newSchedules := workflow.OnSchedule()
assert.Len(t, newSchedules, 2)
assert.Equal(t, "30 5 * * 1,3", newSchedules[0])
assert.Equal(t, "30 5 * * 2,4", newSchedules[1])
yaml = `
name: local-action-docker-url
on:
schedule:
test: '30 5 * * 1,3'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: ./actions/docker-url
`
workflow, err = ReadWorkflow(strings.NewReader(yaml))
assert.NoError(t, err, "read workflow should succeed")
newSchedules = workflow.OnSchedule()
assert.Len(t, newSchedules, 0)
yaml = `
name: local-action-docker-url
on:
schedule:
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: ./actions/docker-url
`
workflow, err = ReadWorkflow(strings.NewReader(yaml))
assert.NoError(t, err, "read workflow should succeed")
newSchedules = workflow.OnSchedule()
assert.Len(t, newSchedules, 0)
yaml = `
name: local-action-docker-url
on: [push, tag]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: ./actions/docker-url
`
workflow, err = ReadWorkflow(strings.NewReader(yaml))
assert.NoError(t, err, "read workflow should succeed")
newSchedules = workflow.OnSchedule()
assert.Len(t, newSchedules, 0)
}
func TestReadWorkflow_StringEvent(t *testing.T) {
yaml := `
name: local-action-docker-url
@@ -250,33 +332,25 @@ func TestReadWorkflow_Strategy(t *testing.T) {
wf := p.Stages[0].Runs[0].Workflow
job := wf.Jobs["strategy-only-max-parallel"]
matrixes, err := job.GetMatrixes()
assert.NoError(t, err)
assert.Equal(t, matrixes, []map[string]interface{}{{}})
assert.Equal(t, job.GetMatrixes(), []map[string]interface{}{{}})
assert.Equal(t, job.Matrix(), map[string][]interface{}(nil))
assert.Equal(t, job.Strategy.MaxParallel, 2)
assert.Equal(t, job.Strategy.FailFast, true)
job = wf.Jobs["strategy-only-fail-fast"]
matrixes, err = job.GetMatrixes()
assert.NoError(t, err)
assert.Equal(t, matrixes, []map[string]interface{}{{}})
assert.Equal(t, job.GetMatrixes(), []map[string]interface{}{{}})
assert.Equal(t, job.Matrix(), map[string][]interface{}(nil))
assert.Equal(t, job.Strategy.MaxParallel, 4)
assert.Equal(t, job.Strategy.FailFast, false)
job = wf.Jobs["strategy-no-matrix"]
matrixes, err = job.GetMatrixes()
assert.NoError(t, err)
assert.Equal(t, matrixes, []map[string]interface{}{{}})
assert.Equal(t, job.GetMatrixes(), []map[string]interface{}{{}})
assert.Equal(t, job.Matrix(), map[string][]interface{}(nil))
assert.Equal(t, job.Strategy.MaxParallel, 2)
assert.Equal(t, job.Strategy.FailFast, false)
job = wf.Jobs["strategy-all"]
matrixes, err = job.GetMatrixes()
assert.NoError(t, err)
assert.Equal(t, matrixes,
assert.Equal(t, job.GetMatrixes(),
[]map[string]interface{}{
{"datacenter": "site-c", "node-version": "14.x", "site": "staging"},
{"datacenter": "site-c", "node-version": "16.x", "site": "staging"},

View File

@@ -171,12 +171,26 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction
}
return execAsComposite(step)(ctx)
case model.ActionRunsUsingGo:
if err := maybeCopyToActionDir(ctx, step, actionDir, actionPath, containerActionDir); err != nil {
return err
}
execFileName := fmt.Sprintf("%s.out", action.Runs.Main)
buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Main}
execArgs := []string{filepath.Join(containerActionDir, execFileName)}
return common.NewPipelineExecutor(
rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
)(ctx)
default:
return fmt.Errorf(fmt.Sprintf("The runs.using key must be one of: %v, got %s", []string{
model.ActionRunsUsingDocker,
model.ActionRunsUsingNode12,
model.ActionRunsUsingNode16,
model.ActionRunsUsingComposite,
model.ActionRunsUsingGo,
}, action.Runs.Using))
}
}
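
For `using: go`, the runner compiles the file named by `runs.main` with `go build` inside the job container and then executes the produced binary, so the `go` command has to be available in the job environment. A minimal sketch of what such an action's `main.go` could look like; the input name and output handling are illustrative and rely only on the usual Actions environment conventions:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Inputs arrive as INPUT_* environment variables, like for other action types.
	who := os.Getenv("INPUT_WHO") // "who" is a hypothetical input name
	if who == "" {
		who = "world"
	}
	fmt.Printf("Hello, %s!\n", who)

	// Outputs can be appended to the file named by GITHUB_OUTPUT when it is set.
	if out := os.Getenv("GITHUB_OUTPUT"); out != "" {
		f, err := os.OpenFile(out, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		fmt.Fprintf(f, "greeting=Hello, %s!\n", who)
	}
}
```

Given `main: main.go`, the pipeline above effectively runs `go build -o main.go.out main.go` in the action directory and then executes `<containerActionDir>/main.go.out`.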
@@ -319,13 +333,13 @@ func evalDockerArgs(ctx context.Context, step step, action *model.Action, cmd *[
inputs[k] = eval.Interpolate(ctx, v)
}
}
mergeIntoMap(step, step.getEnv(), inputs)
mergeIntoMap(step.getEnv(), inputs)
stepEE := rc.NewStepExpressionEvaluator(ctx, step)
for i, v := range *cmd {
(*cmd)[i] = stepEE.Interpolate(ctx, v)
}
mergeIntoMap(step, step.getEnv(), action.Runs.Env)
mergeIntoMap(step.getEnv(), action.Runs.Env)
ee := rc.NewStepExpressionEvaluator(ctx, step)
for k, v := range *step.getEnv() {
@@ -367,7 +381,7 @@ func newStepContainer(ctx context.Context, step step, image string, cmd []string
Image: image,
Username: rc.Config.Secrets["DOCKER_USERNAME"],
Password: rc.Config.Secrets["DOCKER_PASSWORD"],
Name: createContainerName(rc.jobContainerName(), stepModel.ID),
Name: createSimpleContainerName(rc.jobContainerName(), "STEP-"+stepModel.ID),
Env: envList,
Mounts: mounts,
NetworkMode: networkMode,
@@ -378,6 +392,7 @@ func newStepContainer(ctx context.Context, step step, image string, cmd []string
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Options: rc.Config.ContainerOptions,
AutoRemove: rc.Config.AutoRemove,
})
return stepContainer
}
@@ -452,7 +467,8 @@ func hasPreStep(step actionStep) common.Conditional {
action := step.getActionModel()
return action.Runs.Using == model.ActionRunsUsingComposite ||
((action.Runs.Using == model.ActionRunsUsingNode12 ||
action.Runs.Using == model.ActionRunsUsingNode16) &&
action.Runs.Using == model.ActionRunsUsingNode16 ||
action.Runs.Using == model.ActionRunsUsingGo) &&
action.Runs.Pre != "")
}
}
@@ -511,6 +527,41 @@ func runPreStep(step actionStep) common.Executor {
}
return fmt.Errorf("missing steps in composite action")
case model.ActionRunsUsingGo:
// defaults in pre steps were missing; however, provided inputs are available
populateEnvsFromInput(ctx, step.getEnv(), action, rc)
// todo: refactor into step
var actionDir string
var actionPath string
if _, ok := step.(*stepActionRemote); ok {
actionPath = newRemoteAction(stepModel.Uses).Path
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(stepModel.Uses))
} else {
actionDir = filepath.Join(rc.Config.Workdir, stepModel.Uses)
actionPath = ""
}
actionLocation := ""
if actionPath != "" {
actionLocation = path.Join(actionDir, actionPath)
} else {
actionLocation = actionDir
}
_, containerActionDir := getContainerActionPaths(stepModel, actionLocation, rc)
if err := maybeCopyToActionDir(ctx, step, actionDir, actionPath, containerActionDir); err != nil {
return err
}
execFileName := fmt.Sprintf("%s.out", action.Runs.Pre)
buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Pre}
execArgs := []string{filepath.Join(containerActionDir, execFileName)}
return common.NewPipelineExecutor(
rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
)(ctx)
default:
return nil
}
@@ -547,7 +598,8 @@ func hasPostStep(step actionStep) common.Conditional {
action := step.getActionModel()
return action.Runs.Using == model.ActionRunsUsingComposite ||
((action.Runs.Using == model.ActionRunsUsingNode12 ||
action.Runs.Using == model.ActionRunsUsingNode16) &&
action.Runs.Using == model.ActionRunsUsingNode16 ||
action.Runs.Using == model.ActionRunsUsingGo) &&
action.Runs.Post != "")
}
}
@@ -603,6 +655,18 @@ func runPostStep(step actionStep) common.Executor {
}
return fmt.Errorf("missing steps in composite action")
case model.ActionRunsUsingGo:
populateEnvsFromSavedState(step.getEnv(), step, rc)
execFileName := fmt.Sprintf("%s.out", action.Runs.Post)
buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Post}
execArgs := []string{filepath.Join(containerActionDir, execFileName)}
return common.NewPipelineExecutor(
rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
)(ctx)
default:
return nil
}

View File

@@ -105,15 +105,13 @@ func execAsComposite(step actionStep) common.Executor {
rc.Masks = append(rc.Masks, compositeRC.Masks...)
rc.ExtraPath = compositeRC.ExtraPath
// compositeRC.Env is dirty, contains INPUT_ and merged step env, only rely on compositeRC.GlobalEnv
mergeIntoMap := mergeIntoMapCaseSensitive
if rc.JobContainer.IsEnvironmentCaseInsensitive() {
mergeIntoMap = mergeIntoMapCaseInsensitive
for k, v := range compositeRC.GlobalEnv {
rc.Env[k] = v
if rc.GlobalEnv == nil {
rc.GlobalEnv = map[string]string{}
}
rc.GlobalEnv[k] = v
}
if rc.GlobalEnv == nil {
rc.GlobalEnv = map[string]string{}
}
mergeIntoMap(rc.GlobalEnv, compositeRC.GlobalEnv)
mergeIntoMap(rc.Env, compositeRC.GlobalEnv)
return err
}
@@ -137,6 +135,7 @@ func (rc *RunContext) compositeExecutor(action *model.Action) *compositeSteps {
if step.ID == "" {
step.ID = fmt.Sprintf("%d", i)
}
step.Number = i
// create a copy of the step, since this composite action could
// run multiple times and we might modify the instance

View File

@@ -87,18 +87,12 @@ func (rc *RunContext) setEnv(ctx context.Context, kvPairs map[string]string, arg
if rc.Env == nil {
rc.Env = make(map[string]string)
}
rc.Env[name] = arg
// for composite action GITHUB_ENV and set-env passing
if rc.GlobalEnv == nil {
rc.GlobalEnv = map[string]string{}
}
newenv := map[string]string{
name: arg,
}
mergeIntoMap := mergeIntoMapCaseSensitive
if rc.JobContainer != nil && rc.JobContainer.IsEnvironmentCaseInsensitive() {
mergeIntoMap = mergeIntoMapCaseInsensitive
}
mergeIntoMap(rc.Env, newenv)
mergeIntoMap(rc.GlobalEnv, newenv)
rc.GlobalEnv[name] = arg
}
func (rc *RunContext) setOutput(ctx context.Context, kvPairs map[string]string, arg string) {
logger := common.Logger(ctx)

View File

@@ -62,6 +62,7 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
if stepModel.ID == "" {
stepModel.ID = fmt.Sprintf("%d", i)
}
stepModel.Number = i
step, err := sf.newStep(stepModel, rc)
@@ -173,7 +174,7 @@ func setJobOutputs(ctx context.Context, rc *RunContext) {
func useStepLogger(rc *RunContext, stepModel *model.Step, stage stepStage, executor common.Executor) common.Executor {
return func(ctx context.Context) error {
ctx = withStepLogger(ctx, stepModel.ID, rc.ExprEval.Interpolate(ctx, stepModel.String()), stage.String())
ctx = withStepLogger(ctx, stepModel.Number, stepModel.ID, rc.ExprEval.Interpolate(ctx, stepModel.String()), stage.String())
rawLogger := common.Logger(ctx).WithField("raw_output", true)
logWriter := common.NewLineWriter(rc.commandHandler(ctx), func(s string) bool {

View File

@@ -95,6 +95,17 @@ func WithJobLogger(ctx context.Context, jobID string, jobName string, config *Co
logger.SetFormatter(formatter)
}
{ // Adapt to Gitea
if hook := common.LoggerHook(ctx); hook != nil {
logger.AddHook(hook)
}
if config.JobLoggerLevel != nil {
logger.SetLevel(*config.JobLoggerLevel)
} else {
logger.SetLevel(logrus.TraceLevel)
}
}
logger.SetFormatter(&maskedFormatter{
Formatter: logger.Formatter,
masker: valueMasker(config.InsecureSecrets, config.Secrets),
@@ -131,11 +142,12 @@ func WithCompositeStepLogger(ctx context.Context, stepID string) context.Context
}).WithContext(ctx))
}
func withStepLogger(ctx context.Context, stepID string, stepName string, stageName string) context.Context {
func withStepLogger(ctx context.Context, stepNumber int, stepID, stepName, stageName string) context.Context {
rtn := common.Logger(ctx).WithFields(logrus.Fields{
"step": stepName,
"stepID": []string{stepID},
"stage": stageName,
"stepNumber": stepNumber,
"step": stepName,
"stepID": []string{stepID},
"stage": stageName,
})
return common.WithLogger(ctx, rtn)
}

View File

@@ -16,8 +16,11 @@ import (
"regexp"
"runtime"
"strings"
"time"
"github.com/mitchellh/go-homedir"
"github.com/opencontainers/selinux/go-selinux"
log "github.com/sirupsen/logrus"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/container"
@@ -84,25 +87,7 @@ func (rc *RunContext) GetEnv() map[string]string {
}
func (rc *RunContext) jobContainerName() string {
return createContainerName("act", rc.String())
}
func getDockerDaemonSocketMountPath(daemonPath string) string {
if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
scheme := daemonPath[:protoIndex]
if strings.EqualFold(scheme, "npipe") {
// linux container mount on windows, use the default socket path of the VM / wsl2
return "/var/run/docker.sock"
} else if strings.EqualFold(scheme, "unix") {
return daemonPath[protoIndex+3:]
} else if strings.IndexFunc(scheme, func(r rune) bool {
return (r < 'a' || r > 'z') && (r < 'A' || r > 'Z')
}) == -1 {
// unknown protocol use default
return "/var/run/docker.sock"
}
}
return daemonPath
return createSimpleContainerName(rc.Config.ContainerNamePrefix, "WORKFLOW-"+rc.Run.Workflow.Name, "JOB-"+rc.Name)
}
// Returns the binds and mounts for the container, resolving paths as appropriate
@@ -113,10 +98,8 @@ func (rc *RunContext) GetBindsAndMounts() ([]string, map[string]string) {
rc.Config.ContainerDaemonSocket = "/var/run/docker.sock"
}
binds := []string{}
if rc.Config.ContainerDaemonSocket != "-" {
daemonPath := getDockerDaemonSocketMountPath(rc.Config.ContainerDaemonSocket)
binds = append(binds, fmt.Sprintf("%s:%s", daemonPath, "/var/run/docker.sock"))
binds := []string{
fmt.Sprintf("%s:%s", rc.Config.ContainerDaemonSocket, "/var/run/docker.sock"),
}
ext := container.LinuxContainerEnvironmentExtensions{}
@@ -270,7 +253,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
rc.JobContainer = container.NewContainer(&container.NewContainerInput{
Cmd: nil,
Entrypoint: []string{"tail", "-f", "/dev/null"},
Entrypoint: []string{"/bin/sleep", fmt.Sprint(rc.Config.ContainerMaxLifetime.Round(time.Second).Seconds())},
WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: username,
@@ -278,7 +261,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
Name: name,
Env: envList,
Mounts: mounts,
NetworkMode: "host",
NetworkMode: rc.Config.ContainerNetworkMode,
Binds: binds,
Stdout: logWriter,
Stderr: logWriter,
@@ -286,6 +269,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Options: rc.options(ctx),
AutoRemove: rc.Config.AutoRemove,
})
if rc.JobContainer == nil {
return errors.New("Failed to create job container")
@@ -318,15 +302,6 @@ func (rc *RunContext) execJobContainer(cmd []string, env map[string]string, user
func (rc *RunContext) ApplyExtraPath(ctx context.Context, env *map[string]string) {
if rc.ExtraPath != nil && len(rc.ExtraPath) > 0 {
path := rc.JobContainer.GetPathVariableName()
if rc.JobContainer.IsEnvironmentCaseInsensitive() {
// On windows system Path and PATH could also be in the map
for k := range *env {
if strings.EqualFold(path, k) {
path = k
break
}
}
}
if (*env)[path] == "" {
cenv := map[string]string{}
var cpath string
@@ -386,11 +361,10 @@ func (rc *RunContext) ActionCacheDir() string {
var xdgCache string
var ok bool
if xdgCache, ok = os.LookupEnv("XDG_CACHE_HOME"); !ok || xdgCache == "" {
if home, err := os.UserHomeDir(); err == nil {
if home, err := homedir.Dir(); err == nil {
xdgCache = filepath.Join(home, ".cache")
} else if xdgCache, err = filepath.Abs("."); err != nil {
// It's almost impossible to get here, so the temp dir is a good fallback
xdgCache = os.TempDir()
log.Fatal(err)
}
}
return filepath.Join(xdgCache, "act")
@@ -486,9 +460,19 @@ func (rc *RunContext) platformImage(ctx context.Context) string {
common.Logger(ctx).Errorf("'runs-on' key not defined in %s", rc.String())
}
for _, runnerLabel := range job.RunsOn() {
platformName := rc.ExprEval.Interpolate(ctx, runnerLabel)
image := rc.Config.Platforms[strings.ToLower(platformName)]
runsOn := job.RunsOn()
for i, v := range runsOn {
runsOn[i] = rc.ExprEval.Interpolate(ctx, v)
}
if pick := rc.Config.PlatformPicker; pick != nil {
if image := pick(runsOn); image != "" {
return image
}
}
for _, runnerLabel := range runsOn {
image := rc.Config.Platforms[strings.ToLower(runnerLabel)]
if image != "" {
return image
}
@@ -548,6 +532,7 @@ func mergeMaps(maps ...map[string]string) map[string]string {
return rtnMap
}
// Deprecated: use createSimpleContainerName
func createContainerName(parts ...string) string {
name := strings.Join(parts, "-")
pattern := regexp.MustCompile("[^a-zA-Z0-9]")
@@ -561,6 +546,22 @@ func createContainerName(parts ...string) string {
return fmt.Sprintf("%s-%x", trimmedName, hash)
}
func createSimpleContainerName(parts ...string) string {
pattern := regexp.MustCompile("[^a-zA-Z0-9-]")
name := make([]string, 0, len(parts))
for _, v := range parts {
v = pattern.ReplaceAllString(v, "-")
v = strings.Trim(v, "-")
for strings.Contains(v, "--") {
v = strings.ReplaceAll(v, "--", "-")
}
if v != "" {
name = append(name, v)
}
}
return strings.Join(name, "_")
}
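
createSimpleContainerName keeps only `[a-zA-Z0-9-]` inside each part, collapses runs of dashes, drops parts that end up empty, and joins the rest with underscores. A couple of worked inputs as a sketch; it is written as if it sat in `pkg/runner` next to the function, and the names are illustrative:

```go
package runner

import "fmt"

// printSampleContainerNames is a hypothetical helper showing the sanitization.
func printSampleContainerNames() {
	// "_", " " and "&" are outside [a-zA-Z0-9-], so they become "-" and collapse.
	fmt.Println(createSimpleContainerName("act_runner", "WORKFLOW-Build & Test", "JOB-unit tests"))
	// act-runner_WORKFLOW-Build-Test_JOB-unit-tests
	fmt.Println(createSimpleContainerName("GITEA-ACTIONS-TASK-42", "WORKFLOW-test", "JOB-job1"))
	// GITEA-ACTIONS-TASK-42_WORKFLOW-test_JOB-job1
}
```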
func trimToLen(s string, l int) string {
if l < 0 {
l = 0
@@ -641,6 +642,27 @@ func (rc *RunContext) getGithubContext(ctx context.Context) *model.GithubContext
ghc.Actor = "nektos/act"
}
{ // Adapt to Gitea
if preset := rc.Config.PresetGitHubContext; preset != nil {
ghc.Event = preset.Event
ghc.RunID = preset.RunID
ghc.RunNumber = preset.RunNumber
ghc.Actor = preset.Actor
ghc.Repository = preset.Repository
ghc.EventName = preset.EventName
ghc.Sha = preset.Sha
ghc.Ref = preset.Ref
ghc.RefName = preset.RefName
ghc.RefType = preset.RefType
ghc.HeadRef = preset.HeadRef
ghc.BaseRef = preset.BaseRef
ghc.Token = preset.Token
ghc.RepositoryOwner = preset.RepositoryOwner
ghc.RetentionDays = preset.RetentionDays
return ghc
}
}
if rc.EventJSON != "" {
err := json.Unmarshal([]byte(rc.EventJSON), &ghc.Event)
if err != nil {
@@ -660,27 +682,6 @@ func (rc *RunContext) getGithubContext(ctx context.Context) *model.GithubContext
ghc.SetRefTypeAndName()
// defaults
ghc.ServerURL = "https://github.com"
ghc.APIURL = "https://api.github.com"
ghc.GraphQLURL = "https://api.github.com/graphql"
// per GHES
if rc.Config.GitHubInstance != "github.com" {
ghc.ServerURL = fmt.Sprintf("https://%s", rc.Config.GitHubInstance)
ghc.APIURL = fmt.Sprintf("https://%s/api/v3", rc.Config.GitHubInstance)
ghc.GraphQLURL = fmt.Sprintf("https://%s/api/graphql", rc.Config.GitHubInstance)
}
// allow to be overridden by user
if rc.Config.Env["GITHUB_SERVER_URL"] != "" {
ghc.ServerURL = rc.Config.Env["GITHUB_SERVER_URL"]
}
if rc.Config.Env["GITHUB_API_URL"] != "" {
ghc.APIURL = rc.Config.Env["GITHUB_API_URL"]
}
if rc.Config.Env["GITHUB_GRAPHQL_URL"] != "" {
ghc.GraphQLURL = rc.Config.Env["GITHUB_GRAPHQL_URL"]
}
return ghc
}
@@ -754,9 +755,39 @@ func (rc *RunContext) withGithubEnv(ctx context.Context, github *model.GithubCon
env["RUNNER_TRACKING_ID"] = github.RunnerTrackingID
env["GITHUB_BASE_REF"] = github.BaseRef
env["GITHUB_HEAD_REF"] = github.HeadRef
env["GITHUB_SERVER_URL"] = github.ServerURL
env["GITHUB_API_URL"] = github.APIURL
env["GITHUB_GRAPHQL_URL"] = github.GraphQLURL
defaultServerURL := "https://github.com"
defaultAPIURL := "https://api.github.com"
defaultGraphqlURL := "https://api.github.com/graphql"
if rc.Config.GitHubInstance != "github.com" {
defaultServerURL = fmt.Sprintf("https://%s", rc.Config.GitHubInstance)
defaultAPIURL = fmt.Sprintf("https://%s/api/v3", rc.Config.GitHubInstance)
defaultGraphqlURL = fmt.Sprintf("https://%s/api/graphql", rc.Config.GitHubInstance)
}
{ // Adapt to Gitea
instance := rc.Config.GitHubInstance
if !strings.HasPrefix(instance, "http://") &&
!strings.HasPrefix(instance, "https://") {
instance = "https://" + instance
}
defaultServerURL = instance
defaultAPIURL = instance + "/api/v1" // the version of Gitea is v1
defaultGraphqlURL = "" // Gitea doesn't support graphql
}
if env["GITHUB_SERVER_URL"] == "" {
env["GITHUB_SERVER_URL"] = defaultServerURL
}
if env["GITHUB_API_URL"] == "" {
env["GITHUB_API_URL"] = defaultAPIURL
}
if env["GITHUB_GRAPHQL_URL"] == "" {
env["GITHUB_GRAPHQL_URL"] = defaultGraphqlURL
}
if rc.Config.ArtifactServerPath != "" {
setActionRuntimeVars(rc, env)
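
Unless the workflow or runner environment overrides them, the block above points the GitHub-compatible URL variables at the configured instance. A sketch of the resulting defaults for a hypothetical instance name:

```go
package main

import "fmt"

func main() {
	// For rc.Config.GitHubInstance == "gitea.example.com" (hypothetical),
	// the "Adapt to Gitea" block yields these defaults:
	defaults := map[string]string{
		"GITHUB_SERVER_URL":  "https://gitea.example.com",
		"GITHUB_API_URL":     "https://gitea.example.com/api/v1", // Gitea's REST API is v1
		"GITHUB_GRAPHQL_URL": "",                                 // Gitea has no GraphQL endpoint
	}
	fmt.Println(defaults)
}
```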

View File

@@ -624,3 +624,24 @@ func TestRunContextGetEnv(t *testing.T) {
})
}
}
func Test_createSimpleContainerName(t *testing.T) {
tests := []struct {
parts []string
want string
}{
{
parts: []string{"a--a", "BB正", "c-C"},
want: "a-a_BB_c-C",
},
{
parts: []string{"a-a", "", "-"},
want: "a-a",
},
}
for _, tt := range tests {
t.Run(strings.Join(tt.parts, " "), func(t *testing.T) {
assert.Equalf(t, tt.want, createSimpleContainerName(tt.parts...), "createSimpleContainerName(%v)", tt.parts)
})
}
}

View File

@@ -5,6 +5,7 @@ import (
"encoding/json"
"fmt"
"os"
"time"
log "github.com/sirupsen/logrus"
@@ -20,41 +21,49 @@ type Runner interface {
// Config contains the config for a new runner
type Config struct {
Actor string // the user that triggered the event
Workdir string // path to working directory
BindWorkdir bool // bind the workdir to the job container
EventName string // name of event to run
EventPath string // path to JSON file to use for event.json in containers
DefaultBranch string // name of the main branch for this repository
ReuseContainers bool // reuse containers to maintain state
ForcePull bool // force pulling of the image, even if already present
ForceRebuild bool // force rebuilding local docker image action
LogOutput bool // log the output from docker run
JSONLogger bool // use json or text logger
Env map[string]string // env for containers
Inputs map[string]string // manually passed action inputs
Secrets map[string]string // list of secrets
Token string // GitHub token
InsecureSecrets bool // switch hiding output when printing to terminal
Platforms map[string]string // list of platforms
Privileged bool // use privileged mode
UsernsMode string // user namespace to use
ContainerArchitecture string // Desired OS/architecture platform for running containers
ContainerDaemonSocket string // Path to Docker daemon socket
ContainerOptions string // Options for the job container
UseGitIgnore bool // controls if paths in .gitignore should not be copied into container, default true
GitHubInstance string // GitHub instance to use, default "github.com"
ContainerCapAdd []string // list of kernel capabilities to add to the containers
ContainerCapDrop []string // list of kernel capabilities to remove from the containers
AutoRemove bool // controls if the container is automatically removed upon workflow completion
ArtifactServerPath string // the path where the artifact server stores uploads
ArtifactServerAddr string // the address the artifact server binds to
ArtifactServerPort string // the port the artifact server binds to
NoSkipCheckout bool // do not skip actions/checkout
RemoteName string // remote name in local git repo config
ReplaceGheActionWithGithubCom []string // Use actions from GitHub Enterprise instance to GitHub
ReplaceGheActionTokenWithGithubCom string // Token of private action repo on GitHub.
Matrix map[string]map[string]bool // Matrix config to run
Actor string // the user that triggered the event
Workdir string // path to working directory
BindWorkdir bool // bind the workdir to the job container
EventName string // name of event to run
EventPath string // path to JSON file to use for event.json in containers
DefaultBranch string // name of the main branch for this repository
ReuseContainers bool // reuse containers to maintain state
ForcePull bool // force pulling of the image, even if already present
ForceRebuild bool // force rebuilding local docker image action
LogOutput bool // log the output from docker run
JSONLogger bool // use json or text logger
Env map[string]string // env for containers
Inputs map[string]string // manually passed action inputs
Secrets map[string]string // list of secrets
Token string // GitHub token
InsecureSecrets bool // switch hiding output when printing to terminal
Platforms map[string]string // list of platforms
Privileged bool // use privileged mode
UsernsMode string // user namespace to use
ContainerArchitecture string // Desired OS/architecture platform for running containers
ContainerDaemonSocket string // Path to Docker daemon socket
ContainerOptions string // Options for the job container
UseGitIgnore bool // controls if paths in .gitignore should not be copied into container, default true
GitHubInstance string // GitHub instance to use, default "github.com"
ContainerCapAdd []string // list of kernel capabilities to add to the containers
ContainerCapDrop []string // list of kernel capabilities to remove from the containers
AutoRemove bool // controls if the container is automatically removed upon workflow completion
ArtifactServerPath string // the path where the artifact server stores uploads
ArtifactServerAddr string // the address the artifact server binds to
ArtifactServerPort string // the port the artifact server binds to
NoSkipCheckout bool // do not skip actions/checkout
RemoteName string // remote name in local git repo config
ReplaceGheActionWithGithubCom []string // Use actions from GitHub Enterprise instance to GitHub
ReplaceGheActionTokenWithGithubCom string // Token of private action repo on GitHub.
PresetGitHubContext *model.GithubContext // the preset github context, overrides some fields like DefaultBranch, Env, Secrets etc.
EventJSON string // the content of JSON file to use for event.json in containers, overrides EventPath
ContainerNamePrefix string // the prefix of container name
ContainerMaxLifetime time.Duration // the max lifetime of job containers
ContainerNetworkMode string // the network mode of job containers
DefaultActionInstance string // the default actions web site
PlatformPicker func(labels []string) string // platform picker; it takes precedence over Platforms if it isn't nil
JobLoggerLevel *log.Level // the level of job logger
}
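
The added fields are what an embedding runner needs when driving act as a library. A minimal sketch of wiring them up; every value, and the program around it, is illustrative only:

```go
package main

import (
	"time"

	"github.com/nektos/act/pkg/runner"
	log "github.com/sirupsen/logrus"
)

func main() {
	level := log.InfoLevel
	cfg := &runner.Config{
		// Pre-existing options (Workdir, Platforms, Secrets, ...) still apply.
		ContainerNamePrefix:   "GITEA-ACTIONS-TASK-42",     // prefix for job container names
		ContainerMaxLifetime:  2 * time.Hour,               // job containers sleep for at most this long
		ContainerNetworkMode:  "bridge",                    // network mode for job containers
		DefaultActionInstance: "gitea.com",                 // where bare "owner/repo@ref" actions are cloned from
		EventJSON:             `{"ref":"refs/heads/main"}`, // overrides EventPath
		JobLoggerLevel:        &level,
		PlatformPicker: func(labels []string) string {
			// Takes precedence over Platforms when it returns a non-empty image.
			for _, label := range labels {
				if label == "ubuntu-latest" {
					return "node:16-bullseye"
				}
			}
			return ""
		},
	}

	r, err := runner.New(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = r // r.NewPlanExecutor(plan) would then run a parsed plan
}
```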
type caller struct {
@@ -78,7 +87,9 @@ func New(runnerConfig *Config) (Runner, error) {
func (runner *runnerImpl) configure() (Runner, error) {
runner.eventJSON = "{}"
if runner.config.EventPath != "" {
if runner.config.EventJSON != "" {
runner.eventJSON = runner.config.EventJSON
} else if runner.config.EventPath != "" {
log.Debugf("Reading event.json from %s", runner.config.EventPath)
eventJSONBytes, err := os.ReadFile(runner.config.EventPath)
if err != nil {
@@ -117,15 +128,7 @@ func (runner *runnerImpl) NewPlanExecutor(plan *model.Plan) common.Executor {
log.Errorf("Error while evaluating matrix: %v", err)
}
}
var matrixes []map[string]interface{}
if m, err := job.GetMatrixes(); err != nil {
log.Errorf("Error while get job's matrix: %v", err)
} else {
matrixes = selectMatrixes(m, runner.config.Matrix)
}
log.Debugf("Final matrix after applying user inclusions '%v'", matrixes)
matrixes := job.GetMatrixes()
maxParallel := 4
if job.Strategy != nil {
maxParallel = job.Strategy.MaxParallel
@@ -180,25 +183,6 @@ func handleFailure(plan *model.Plan) common.Executor {
}
}
func selectMatrixes(originalMatrixes []map[string]interface{}, targetMatrixValues map[string]map[string]bool) []map[string]interface{} {
matrixes := make([]map[string]interface{}, 0)
for _, original := range originalMatrixes {
flag := true
for key, val := range original {
if allowedVals, ok := targetMatrixValues[key]; ok {
valToString := fmt.Sprintf("%v", val)
if _, ok := allowedVals[valToString]; !ok {
flag = false
}
}
}
if flag {
matrixes = append(matrixes, original)
}
}
return matrixes
}
func (runner *runnerImpl) newRunContext(ctx context.Context, run *model.Run, matrix map[string]interface{}) *RunContext {
rc := &RunContext{
Config: runner.config,

View File

@@ -186,7 +186,6 @@ func (j *TestJobFileInfo) runTest(ctx context.Context, t *testing.T, cfg *Config
Inputs: cfg.Inputs,
GitHubInstance: "github.com",
ContainerArchitecture: cfg.ContainerArchitecture,
Matrix: cfg.Matrix,
}
runner, err := New(runnerConfig)
@@ -294,6 +293,7 @@ func TestRunEvent(t *testing.T) {
{workdir, "workflow_dispatch-scalar-composite-action", "workflow_dispatch", "", platforms, secrets},
{workdir, "job-needs-context-contains-result", "push", "", platforms, secrets},
{"../model/testdata", "strategy", "push", "", platforms, secrets}, // TODO: move all testdata into pkg so we can validate it with planner and runner
// {"testdata", "issue-228", "push", "", platforms, }, // TODO [igni]: Remove this once everything passes
{"../model/testdata", "container-volumes", "push", "", platforms, secrets},
{workdir, "path-handling", "push", "", platforms, secrets},
{workdir, "do-not-leak-step-env-in-composite", "push", "", platforms, secrets},
@@ -584,30 +584,3 @@ func TestRunEventPullRequest(t *testing.T) {
tjfi.runTest(context.Background(), t, &Config{EventPath: filepath.Join(workdir, workflowPath, "event.json")})
}
func TestRunMatrixWithUserDefinedInclusions(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test")
}
workflowPath := "matrix-with-user-inclusions"
tjfi := TestJobFileInfo{
workdir: workdir,
workflowPath: workflowPath,
eventName: "push",
errorMessage: "",
platforms: platforms,
}
matrix := map[string]map[string]bool{
"node": {
"8": true,
"8.x": true,
},
"os": {
"ubuntu-18.04": true,
},
}
tjfi.runTest(context.Background(), t, &Config{Matrix: matrix})
}

View File

@@ -187,7 +187,7 @@ func setupEnv(ctx context.Context, step step) error {
mergeEnv(ctx, step)
// merge step env last, since it should not be overwritten
mergeIntoMap(step, step.getEnv(), step.getStepModel().GetEnv())
mergeIntoMap(step.getEnv(), step.getStepModel().GetEnv())
exprEval := rc.NewExpressionEvaluator(ctx)
for k, v := range *step.getEnv() {
@@ -216,9 +216,9 @@ func mergeEnv(ctx context.Context, step step) {
c := job.Container()
if c != nil {
mergeIntoMap(step, env, rc.GetEnv(), c.Env)
mergeIntoMap(env, rc.GetEnv(), c.Env)
} else {
mergeIntoMap(step, env, rc.GetEnv())
mergeIntoMap(env, rc.GetEnv())
}
rc.withGithubEnv(ctx, step.getGithubContext(ctx), *env)
@@ -258,38 +258,10 @@ func isContinueOnError(ctx context.Context, expr string, step step, stage stepSt
return continueOnError, nil
}
func mergeIntoMap(step step, target *map[string]string, maps ...map[string]string) {
if rc := step.getRunContext(); rc != nil && rc.JobContainer != nil && rc.JobContainer.IsEnvironmentCaseInsensitive() {
mergeIntoMapCaseInsensitive(*target, maps...)
} else {
mergeIntoMapCaseSensitive(*target, maps...)
}
}
func mergeIntoMapCaseSensitive(target map[string]string, maps ...map[string]string) {
func mergeIntoMap(target *map[string]string, maps ...map[string]string) {
for _, m := range maps {
for k, v := range m {
target[k] = v
}
}
}
func mergeIntoMapCaseInsensitive(target map[string]string, maps ...map[string]string) {
foldKeys := make(map[string]string, len(target))
for k := range target {
foldKeys[strings.ToLower(k)] = k
}
toKey := func(s string) string {
foldKey := strings.ToLower(s)
if k, ok := foldKeys[foldKey]; ok {
return k
}
foldKeys[strings.ToLower(foldKey)] = s
return s
}
for _, m := range maps {
for k, v := range m {
target[toKey(k)] = v
(*target)[k] = v
}
}
}

View File

@@ -30,9 +30,7 @@ type stepActionRemote struct {
remoteAction *remoteAction
}
var (
stepActionRemoteNewCloneExecutor = git.NewGitCloneExecutor
)
var stepActionRemoteNewCloneExecutor = git.NewGitCloneExecutor
func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
return func(ctx context.Context) error {
@@ -46,15 +44,12 @@ func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
return fmt.Errorf("Expected format {org}/{repo}[/path]@ref. Actual '%s' Input string was not in a correct format", sar.Step.Uses)
}
sar.remoteAction.URL = sar.RunContext.Config.GitHubInstance
github := sar.getGithubContext(ctx)
if sar.remoteAction.IsCheckout() && isLocalCheckout(github, sar.Step) && !sar.RunContext.Config.NoSkipCheckout {
common.Logger(ctx).Debugf("Skipping local actions/checkout because workdir was already copied")
return nil
}
sar.remoteAction.URL = sar.RunContext.Config.GitHubInstance
for _, action := range sar.RunContext.Config.ReplaceGheActionWithGithubCom {
if strings.EqualFold(fmt.Sprintf("%s/%s", sar.remoteAction.Org, sar.remoteAction.Repo), action) {
sar.remoteAction.URL = "github.com"
@@ -64,10 +59,16 @@ func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), safeFilename(sar.Step.Uses))
gitClone := stepActionRemoteNewCloneExecutor(git.NewGitCloneExecutorInput{
URL: sar.remoteAction.CloneURL(),
URL: sar.remoteAction.CloneURL(sar.RunContext.Config.DefaultActionInstance),
Ref: sar.remoteAction.Ref,
Dir: actionDir,
Token: github.Token,
Token: "", /*
Shouldn't provide token when cloning actions,
the token comes from the instance which triggered the task,
however, it might not be the same instance that provides the actions.
For GitHub, they are the same, always github.com.
But for Gitea, tasks triggered by a.com can clone actions from b.com.
*/
})
var ntErr common.Executor
if err := gitClone(ctx); err != nil {
@@ -213,8 +214,15 @@ type remoteAction struct {
Ref string
}
func (ra *remoteAction) CloneURL() string {
return fmt.Sprintf("https://%s/%s/%s", ra.URL, ra.Org, ra.Repo)
func (ra *remoteAction) CloneURL(defaultURL string) string {
u := ra.URL
if u == "" {
u = defaultURL
}
if !strings.HasPrefix(u, "http://") && !strings.HasPrefix(u, "https://") {
u = "https://" + u
}
return fmt.Sprintf("%s/%s/%s", u, ra.Org, ra.Repo)
}
func (ra *remoteAction) IsCheckout() bool {
@@ -225,6 +233,26 @@ func (ra *remoteAction) IsCheckout() bool {
}
func newRemoteAction(action string) *remoteAction {
// support http(s)://host/owner/repo@v3
for _, schema := range []string{"https://", "http://"} {
if strings.HasPrefix(action, schema) {
splits := strings.SplitN(strings.TrimPrefix(action, schema), "/", 2)
if len(splits) != 2 {
return nil
}
ret := parseAction(splits[1])
if ret == nil {
return nil
}
ret.URL = schema + splits[0]
return ret
}
}
return parseAction(action)
}
func parseAction(action string) *remoteAction {
// GitHub's document[^] describes:
// > We strongly recommend that you include the version of
// > the action you are using by specifying a Git ref, SHA, or Docker tag number.
@@ -240,7 +268,7 @@ func newRemoteAction(action string) *remoteAction {
Repo: matches[2],
Path: matches[4],
Ref: matches[6],
URL: "github.com",
URL: "",
}
}

View File

@@ -616,6 +616,100 @@ func TestStepActionRemotePost(t *testing.T) {
}
}
func Test_newRemoteAction(t *testing.T) {
tests := []struct {
action string
want *remoteAction
wantCloneURL string
}{
{
action: "actions/heroku@main",
want: &remoteAction{
URL: "",
Org: "actions",
Repo: "heroku",
Path: "",
Ref: "main",
},
wantCloneURL: "https://github.com/actions/heroku",
},
{
action: "actions/aws/ec2@main",
want: &remoteAction{
URL: "",
Org: "actions",
Repo: "aws",
Path: "ec2",
Ref: "main",
},
wantCloneURL: "https://github.com/actions/aws",
},
{
action: "./.github/actions/my-action", // it's valid for GitHub, but act don't support it
want: nil,
},
{
action: "docker://alpine:3.8", // it's valid for GitHub, but act don't support it
want: nil,
},
{
action: "https://gitea.com/actions/heroku@main", // it's invalid for GitHub, but gitea supports it
want: &remoteAction{
URL: "https://gitea.com",
Org: "actions",
Repo: "heroku",
Path: "",
Ref: "main",
},
wantCloneURL: "https://gitea.com/actions/heroku",
},
{
action: "https://gitea.com/actions/aws/ec2@main", // it's invalid for GitHub, but gitea supports it
want: &remoteAction{
URL: "https://gitea.com",
Org: "actions",
Repo: "aws",
Path: "ec2",
Ref: "main",
},
wantCloneURL: "https://gitea.com/actions/aws",
},
{
action: "http://gitea.com/actions/heroku@main", // it's invalid for GitHub, but gitea supports it
want: &remoteAction{
URL: "http://gitea.com",
Org: "actions",
Repo: "heroku",
Path: "",
Ref: "main",
},
wantCloneURL: "http://gitea.com/actions/heroku",
},
{
action: "http://gitea.com/actions/aws/ec2@main", // it's invalid for GitHub, but gitea supports it
want: &remoteAction{
URL: "http://gitea.com",
Org: "actions",
Repo: "aws",
Path: "ec2",
Ref: "main",
},
wantCloneURL: "http://gitea.com/actions/aws",
},
}
for _, tt := range tests {
t.Run(tt.action, func(t *testing.T) {
got := newRemoteAction(tt.action)
assert.Equalf(t, tt.want, got, "newRemoteAction(%v)", tt.action)
cloneURL := ""
if got != nil {
cloneURL = got.CloneURL("github.com")
}
assert.Equalf(t, tt.wantCloneURL, cloneURL, "newRemoteAction(%v).CloneURL()", tt.action)
})
}
}
func Test_safeFilename(t *testing.T) {
tests := []struct {
s string

View File

@@ -120,7 +120,7 @@ func (sd *stepDocker) newStepContainer(ctx context.Context, image string, cmd []
Image: image,
Username: rc.Config.Secrets["DOCKER_USERNAME"],
Password: rc.Config.Secrets["DOCKER_PASSWORD"],
Name: createContainerName(rc.jobContainerName(), step.ID),
Name: createSimpleContainerName(rc.jobContainerName(), "STEP-"+step.ID),
Env: envList,
Mounts: mounts,
NetworkMode: fmt.Sprintf("container:%s", rc.jobContainerName()),
@@ -130,6 +130,7 @@ func (sd *stepDocker) newStepContainer(ctx context.Context, image string, cmd []
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
AutoRemove: rc.Config.AutoRemove,
})
return stepContainer
}

View File

@@ -13,11 +13,10 @@ import (
)
type stepRun struct {
Step *model.Step
RunContext *RunContext
cmd []string
env map[string]string
WorkingDirectory string
Step *model.Step
RunContext *RunContext
cmd []string
env map[string]string
}
func (sr *stepRun) pre() common.Executor {
@@ -28,11 +27,12 @@ func (sr *stepRun) pre() common.Executor {
func (sr *stepRun) main() common.Executor {
sr.env = map[string]string{}
return runStepExecutor(sr, stepStageMain, common.NewPipelineExecutor(
sr.setupShellCommandExecutor(),
func(ctx context.Context) error {
sr.getRunContext().ApplyExtraPath(ctx, &sr.env)
return sr.getRunContext().JobContainer.Exec(sr.cmd, sr.env, "", sr.WorkingDirectory)(ctx)
return sr.getRunContext().JobContainer.Exec(sr.cmd, sr.env, "", sr.Step.WorkingDirectory)(ctx)
},
))
}
@@ -167,20 +167,16 @@ func (sr *stepRun) setupShell(ctx context.Context) {
func (sr *stepRun) setupWorkingDirectory(ctx context.Context) {
rc := sr.RunContext
step := sr.Step
workingdirectory := ""
if step.WorkingDirectory == "" {
workingdirectory = rc.Run.Job().Defaults.Run.WorkingDirectory
} else {
workingdirectory = step.WorkingDirectory
step.WorkingDirectory = rc.Run.Job().Defaults.Run.WorkingDirectory
}
// jobs can receive context values, so we interpolate
workingdirectory = rc.NewExpressionEvaluator(ctx).Interpolate(ctx, workingdirectory)
step.WorkingDirectory = rc.NewExpressionEvaluator(ctx).Interpolate(ctx, step.WorkingDirectory)
// but top level keys in workflow file like `defaults` or `env` can't
if workingdirectory == "" {
workingdirectory = rc.Run.Workflow.Defaults.Run.WorkingDirectory
if step.WorkingDirectory == "" {
step.WorkingDirectory = rc.Run.Workflow.Defaults.Run.WorkingDirectory
}
sr.WorkingDirectory = workingdirectory
}

View File

@@ -63,9 +63,7 @@ func TestMergeIntoMap(t *testing.T) {
for _, tt := range table {
t.Run(tt.name, func(t *testing.T) {
mergeIntoMapCaseSensitive(tt.target, tt.maps...)
assert.Equal(t, tt.expected, tt.target)
mergeIntoMapCaseInsensitive(tt.target, tt.maps...)
mergeIntoMap(&tt.target, tt.maps...)
assert.Equal(t, tt.expected, tt.target)
})
}

View File

@@ -1,10 +0,0 @@
name: reusable
on:
- workflow_call
jobs:
reusable_workflow_job:
runs-on: ubuntu-latest
steps:
- run: echo Test

View File

@@ -1,9 +0,0 @@
name: reusable
on: workflow_call
jobs:
reusable_workflow_job:
runs-on: ubuntu-latest
steps:
- run: echo Test

14
pkg/runner/testdata/issue-228/main.yaml vendored Normal file
View File

@@ -0,0 +1,14 @@
name: issue-228
on:
- push
jobs:
kind:
runs-on: ubuntu-latest
steps:
- run: apt-get update -y && apt-get install git -y # setup git credentials will fail otherwise
- name: Setup git credentials
uses: fusion-engineering/setup-git-credentials@v2
with:
credentials: https://test@github.com/

View File

@@ -1,34 +0,0 @@
name: matrix-with-user-inclusions
on: push
jobs:
build:
name: PHP ${{ matrix.os }} ${{ matrix.node}}
runs-on: ubuntu-latest
steps:
- run: |
echo ${NODE_VERSION} | grep 8
echo ${OS_VERSION} | grep ubuntu-18.04
env:
NODE_VERSION: ${{ matrix.node }}
OS_VERSION: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-18.04, macos-latest]
node: [4, 6, 8, 10]
exclude:
- os: macos-latest
node: 4
include:
- os: ubuntu-16.04
node: 10
test:
runs-on: ubuntu-latest
strategy:
matrix:
node: [8.x, 10.x, 12.x, 13.x]
steps:
- run: echo ${NODE_VERSION} | grep 8.x
env:
NODE_VERSION: ${{ matrix.node }}

View File

@@ -19,12 +19,6 @@ jobs:
number_required: 1
secrets: inherit
reusable-workflow-with-on-string-notation:
uses: ./.github/workflows/local-reusable-workflow-no-inputs-string.yml
reusable-workflow-with-on-array-notation:
uses: ./.github/workflows/local-reusable-workflow-no-inputs-array.yml
output-test:
runs-on: ubuntu-latest
needs:

View File

@@ -1,7 +0,0 @@
runs:
using: composite
steps:
- run: |
echo $env:GITHUB_ENV
echo "kEy=n/a" > $env:GITHUB_ENV
shell: pwsh

View File

@@ -25,20 +25,3 @@ jobs:
echo "Unexpected value for `$env:key2: $env:key2"
exit 1
}
- run: |
echo $env:GITHUB_ENV
echo "KEY=test" > $env:GITHUB_ENV
echo "Key=expected" > $env:GITHUB_ENV
- name: Assert GITHUB_ENV is merged case insensitive
run: exit 1
if: env.KEY != 'expected' || env.Key != 'expected' || env.key != 'expected'
- name: Assert step env is merged case insensitive
run: exit 1
if: env.KEY != 'n/a' || env.Key != 'n/a' || env.key != 'n/a'
env:
KeY: 'n/a'
- uses: actions/checkout@v3
- uses: ./windows-add-env
- name: Assert composite env is merged case insensitive
run: exit 1
if: env.KEY != 'n/a' || env.Key != 'n/a' || env.key != 'n/a'

View File

@@ -11,9 +11,6 @@ jobs:
mkdir build
echo '@echo off' > build/test.cmd
echo 'echo Hi' >> build/test.cmd
mkdir build2
echo '@echo off' > build2/test2.cmd
echo 'echo test2' >> build2/test2.cmd
- run: |
echo '${{ tojson(runner) }}'
ls
@@ -26,9 +23,3 @@ jobs:
- run: |
echo $env:PATH
test
- run: |
echo "PATH=$env:PATH;${{ github.workspace }}\build2" > $env:GITHUB_ENV
- run: |
echo $env:PATH
test
test2

View File

@@ -22,13 +22,3 @@ jobs:
runs-on: ubuntu-latest
steps:
- run: '[[ "$(pwd)" == "/tmp" ]]'
workdir-from-matrix:
runs-on: ubuntu-latest
strategy:
max-parallel: 1
matrix:
work_dir: ["/tmp", "/root"]
steps:
- run: '[[ "$(pwd)" == "${{ matrix.work_dir }}" ]]'
working-directory: ${{ matrix.work_dir }}