pss314 4 minutes ago [-]
John Howard (one of the maintainers of Istio and currently with Solo.io) blogged about "Fast GitHub Actions with Blacksmith" [1]. The blog also contains a link to "GitHub Action Runner Alternatives" [2].
Back when GitHub Actions first came out, I used commit hashes rather than tags in all my `uses:` lines. Some of my colleagues disagreed, saying that tags were secure enough. I eventually said, "Well, for well-known actions like actions/checkout, sure; if that one gets compromised it'll be all over the news within minutes." But for all the third-party actions, I kept commit hashes.
I feel rather vindicated now. There's still a small possibility of getting supply-chain attacked via a SHA collision, or a relatively much larger (though still small in absolute terms) possibility of getting supply-chain attacked via NPM dependencies of the action you're relying on.
But if you're not using a commit hash in your `uses:` lines, go switch to it now. And if you're just using major-version-only tags like `v5` then do it RIGHT now, before that action gets a compromised version uploaded with a `v5.2.3` tag.
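For anyone making the switch, the change is one line per action; the action name and SHA below are illustrative, not real values:

```yaml
# Before: a mutable tag that the action's owner (or an attacker with push
# access) can re-point at a different commit at any time
- uses: some-org/some-action@v5

# After: a full-length commit SHA; the trailing comment records which tag
# the SHA corresponded to when it was pinned (illustrative values)
- uses: some-org/some-action@2f3b0bd4c19aa1a9cee8b126d1e4b80b9978d640 # v5.2.3
```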
maxloh 6 hours ago [-]
GitHub Actions doesn't have a lock file, so your repo is still prone to transitive attacks if the SHA-locked actions you use also happen to use other composite actions by tags, which could be compromised in the future.
What an absolute joke that it has taken GitHub this long to clean up its act when it comes to supply chain security.
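To make the transitive hole concrete: even a SHA-pinned composite action can reference further actions by tag inside its own `action.yml`, and those references resolve fresh on every run. A hypothetical sketch (all names invented):

```yaml
# action.yml of a composite action that *you* pinned by SHA
name: release-helper
runs:
  using: composite
  steps:
    # Your pin on the outer action does nothing to freeze this nested
    # reference: @v2 is re-resolved at run time and can be re-pointed.
    - uses: third-party/upload-tool@v2
    - run: ./scripts/release.sh
      shell: bash
```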
Munksgaard 4 hours ago [-]
Even with a lock file, the action can download and execute arbitrary code from the internet.
shykes 3 hours ago [-]
It would be cool if CI could inject a platform-wide lockfile into every remote download or lookup made by your scripts. So if you pull a container or git tag, the CI platform would automatically ensure that the exact digest downloaded is controlled by a lock file that you can inspect, check in, etc.
xenophonf 1 hour ago [-]
"Require actions to be pinned to a full-length commit SHA" applies to composite actions, too. I had to replace pre-commit/action as a result.
arionmiles 6 hours ago [-]
I'm pretty happy we use Renovator (EDIT: it's Renovate) at my current workplace, which by default raises PRs to replace any action tags with the corresponding SHA. Then, even when it bumps the version in future PRs, it bumps the SHA (with a comment noting which tag version it represents).
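The PRs it raises look roughly like the diff below (action name and hashes illustrative): the SHA is what Git actually resolves, and Renovate rewrites the trailing comment in lockstep so reviewers can see the semantic jump:

```yaml
# before the Renovate PR
- uses: example/setup-tool@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # v4.1.1
# after: the SHA moves, and the comment is updated to match the new tag
- uses: example/setup-tool@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4
```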
jamietanna 5 hours ago [-]
Glad to hear you're enjoying Renovate - I'm biased, but I agree that the SHA pinning PR updates are a very nice feature
We recently found (in Renovate) some edge cases with how tags work in GitHub Actions which was fun (https://news.ycombinator.com/item?id=47892740) and there's a few things in there Dependabot doesn't seem to support too
mmarian 4 hours ago [-]
If you auto-merge those PRs you're back to square one, as you're not vetting your dependency updates. And if you don't, you incur operational overhead unless you put a fair amount of effort into centralizing. I wrote a couple of posts that touched on this: https://developerwithacat.com/blog/202604/github-actions-sup...
tecleandor 5 hours ago [-]
Is it Renovator or Renovate? I'm trying to find it to check it out...
arionmiles 5 hours ago [-]
Oops, my bad. We keep calling it Renovator internally but the name is RenovateBot or Renovate.
Just noting that pinning within your own actions is not enough; you also need to ensure any composite actions don't use mutable references (for actions, Docker images, etc.)
AlecBG 5 hours ago [-]
You can enforce at the org level to only allow actions pinned to hashes. You can also choose a small whitelist of actions to allow.
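As a sketch, the org-wide policy can also be set through the REST API (`PUT /orgs/{org}/actions/permissions/selected-actions`); the field names are from the GitHub docs as I recall them, so verify before relying on this, and the patterns are only examples:

```yaml
# Request body, shown as YAML for readability (the API takes the JSON
# equivalent). Anything not matching the policy is blocked from running.
github_owned_allowed: true        # allow actions/* and github/*
verified_allowed: false           # don't blanket-trust "verified creator" badges
patterns_allowed:
  - "docker/login-action@*"
  - "aws-actions/configure-aws-credentials@*"
```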
mmarian 3 hours ago [-]
I used to think a whitelist could be a partial solution. But after Checkmarx KICS got compromised, I can't see this working. I would've expected a well-established brand, in the security industry of all places, to belong in the whitelist.
samuelknight 7 hours ago [-]
There is no realistic risk of a SHA collision attack. Getting supply chain attacked via NPM dependencies is much more likely. Hopefully the actions creators are also pinning their hashes.
kbolino 3 hours ago [-]
> There is no realistic risk of a SHA collision attack.
Indeed. To illustrate why:
1. It is not possible to "retroactively" find a SHA-1 collision for an already known hash. If somebody has produced a SHA-1 hash non-maliciously at any point in the past, it is safe from collisions. This is due to second-preimage resistance, which hasn't been broken for SHA-1 and doesn't seem likely to be broken any time soon.
2. The only way to obtain a SHA-1 collision is to do so knowingly when producing the original hash: you generate a pair of inputs at the same time that both hash to the same value. Certainly, this is an imaginable scenario; e.g. a trusted committer could push one half of the pair wittingly, or a reviewer could be fooled into accepting one half unwittingly. Either way this creates a timebomb where the malicious actor later swaps the commit for the second half of the pair (which presumably carries a malicious payload). However, there are two blockers to this approach: Git (not just GitHub) will not accept a commit with a duplicate hash, always sticking with the original one, and GitHub specifically has implemented signature detection for the known SHA-1 collision-generating methods and will reject both halves of such a pair.
In short, there's just no practical way to exploit this weakness of SHA-1 with Git.
mmarian 4 hours ago [-]
There are downsides to it though. You...
- lose vulnerability alerts
- increase maintenance overhead
- take on all that for value that will go to 0 once Immutable Releases gets widely adopted
You lose vulnerability alerts, on GitHub. This is a (ridiculous, IMO) platform limitation that GitHub could lift by applying more engineering time to Dependabot and Dependabot's integrated security alerts feature.
zizmor (and other tools) correctly recovers vulnerability information for SHA-pinned actions[1].
On zizmor, there's no mention of commit-SHA coverage in the section you've linked, nor in the entire page when I do Ctrl+F. Is there anything I'm missing?
woodruffw 3 hours ago [-]
Oh, I guess I didn't document it explicitly. My bad!
Oh, nice, will look into it, thanks! Let me know if you're aware of any other tools that do this. I had a look before and couldn't find any.
baby_souffle 3 hours ago [-]
The maintenance aspect is relatively straightforward to automate.
Renovate handles this well. Ratchet and pinact can also be used
mmarian 2 hours ago [-]
I mention in the posts the problem with the likes of Renovate. Auto-merging its PRs is equivalent to just trusting semantic version tags again. You have to properly vet the influx of updates, and that unfortunately won't happen in practice.
recursivedoubts 7 hours ago [-]
Programming in YAML has always seemed crazy to me. Actions seem like a great place to create a simple mixed imperative/declarative scripting language (js extension or whatever) with a solid instrumented/observable/debuggable runtime and an OO API that can be run locally against mock infrastructure.
dnnddidiej 25 minutes ago [-]
Having tried Pulumi for IaC, I am not a fan. Pulumi is excellent, but it's the concept I am not keen on: it's a rabbit hole for devs, and it allows complexity where in YAML you are forced to KISS.
bastardoperator 6 hours ago [-]
No thanks, Jenkins has three DSL languages and none of them is good. You don't have to inline code in YAML; you can call a script and call it a day, and write that script in any language you want.
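That pattern keeps the YAML to a thin trigger layer; assuming a `ci/build.sh` checked into the repo (path illustrative), the workflow reduces to:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # pin to a full SHA in real use
      - name: Build and test
        # All real logic lives in a version-controlled script you can run
        # locally with exactly the same behavior as CI.
        run: ./ci/build.sh
```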
cenamus 5 hours ago [-]
You can do the same in Jenkins, but a bit of scripting is probably more readable in Groovy than whatever the YAML DSL offers.
But I totally agree that the Jenkins languages are terrible, and the errors are even worse; somehow they managed to make JVM backtraces even more unreadable.
LelouBil 2 hours ago [-]
I don't know why they don't pivot to Kotlin.
Gradle did it successfully and it's great now.
lstodd 4 hours ago [-]
idk I always just wrote shell to be called by jenkins. none of this idiocy of programming with html comboboxes. DSL for the domain is shell, no need to invent hyperwheels here.
fuzzy2 3 hours ago [-]
YAML isn't the problem. It's that every single action is basically curl-to-sudo-bash. Even disregarding the security implications, the ergonomics are truly horrendous. They were with Azure DevOps and they certainly are with GitHub Actions. Bad interfaces, surprising behavior, it's got it all.
CI must only consist of shell commands. No abstractions, no surprises. (Except maybe with PowerShell, where the principle of most surprise rules.)
pydry 3 hours ago [-]
The YAML is way less concerning than the lack of any decent tooling to test and debug the code.
renegade-otter 3 hours ago [-]
When GHA was dead simple, there were projects simulating it locally. That's not possible anymore, and one has to burn tens of hours just to develop a pipeline.
I apologize in advance for the plug. I've spent the last 5 years warning of the importance of not leaving CI locked in a black box platform and proprietary DSL. All the while going on a quest to reinvent CI as an open, programmable platform. Honestly it's still a work-in-progress: it turns out that reinvention is hard! But, if you want a glimpse of what CI can be when you shed 30 years of legacy, consider checking out Dagger (https://dagger.io).
Or, if you just want to talk about the future of CI with like-minded systems engineers, without committing to using a particular product, consider joining our Discord: https://discord.com/invite/dagger-io
cataflutter 4 hours ago [-]
A while ago I checked this out and the homepage looked like it had fallen to the 'AI hype' trend, you know like how everything was 'AI-native XYZ for Autonomous Agents' at the time. I'm not seeing that now though.
Am I thinking of someone else or did you reverse on that?
shykes 4 hours ago [-]
Yes, that was us. And yes, we reversed on that. The feedback from our community was quite clear :)
mayhemducks 4 hours ago [-]
No apology necessary - I appreciate the straightforward offer of solutions to difficult problems.
sureglymop 5 hours ago [-]
Looks cool. Can it be self hosted? I.e. can I self host it next to my self hosted forgejo instance?
shykes 4 hours ago [-]
Yes, the Dagger engine is open source. Note that the engine on its own is not a CI replacement: it provides a runtime for your pipelines, but you still need an external system to trigger pipelines from git events. This decoupling is intentional, because CI should not be tightly coupled to git events. Sometimes you want to run a pipeline after pushing; but sometimes you need it before pushing, or even before committing. The pipeline runtime therefore should operate at a different layer than git events.
In practice this means you can combine Dagger with, say, Github Actions or another "legacy" CI platform. And use it as runner & event infrastructure for your portable Dagger pipelines.
We also offer a complete Dagger-native CI platform, which combines hosted Dagger engines, git triggers, and all the infrastructure necessary to run your CI end-to-end. That is in early access as part of Dagger Cloud, our commercial offering.
sureglymop 3 hours ago [-]
Well, I'm sold! Trying out your offering this weekend :)
peterldowns 5 hours ago [-]
Yup! Still haven't switched off of Github, but considering it at this point. If you're in my shoes, here's some tools we use that help:
- https://www.warpbuild.com/ for much faster runners (also: runs-on/namespace/buildjet/blacksmith/depot/... take your pick)
- soon moving to Buildkite for orchestration of our CI jobs
I still just need a reasonable alternative for the "store our git repo, allow us to make and merge prs" part of things. Hopefully someone takes all the pieces that the Pierre team is publishing and makes this available soon. The Github UI and the `gh` cli are actually really nice and the existing alternative code storage tools are not great IMO.
a_t48 5 hours ago [-]
Why warpbuild over the alternatives? I've seen depot before and am tempted, but open to other platforms.
suryao 3 hours ago [-]
Founder of WarpBuild here.
We have faster compute: baremetal for amd64 workloads, AWS for arm64 etc.
We optimize for overall performance in real world jobs and have a broad selection of regions/OSes/arch available.
There aren't any fixed subscription fees either.
Great writeup. Though combined with the lack of lockfiles for transitive actions, relying purely on static analysis is tough. Linters like zizmor are great, but they struggle with deep composite-action trees and runtime template injection.
I got frustrated with the lack of security, so I started working on an open-source runtime sandbox for GHA: https://github.com/electricapp/hasp
The first check was inspired by the trivy attack: hasp enforces SHA pinning AND checks that a comment (# v4.1.2) actually resolves to its preceding SHA. That grew into a larger suite of checks.
Instead of just statically parsing YAML it hooks into the runner env itself. Some of its runtime checks mirror what zizmor already does including resolving upstream SHAs to canonical branches (no impostor commits) and traversing the transitive dependency tree. I have a PR up with a comparison document here (hasp vs. zizmor): https://github.com/electricapp/hasp/pull/13/changes#diff-aab...
Furthermore, it sandboxes itself to prevent sensitive exfiltration by acting as a token broker which injects the secret at runtime -- the GH token can only ever be used to call the GH API. It uses landlock, seccomp, and eBPF via Rust, so no docker. The token broker sandbox can also be used to wrap a generic executable giving hasp generic applicability beyond GHA context (i.e. agentic or other contexts, where token runtime injection seems quite in vogue)
I'm using this as a stopgap until GH rolls out some of the features on its roadmap. I'm moving toward treating the runner as a zero-trust, actively malicious environment, so this was my small contribution on that front.
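The comment-consistency check guards against a subtle trick: reviewers skim the human-readable comment, but only the SHA is authoritative. Illustrative values:

```yaml
# Reads as v4.1.2 to a reviewer, but the SHA is what actually runs. If the
# SHA is not the commit that v4.1.2 pointed to, the comment is lying --
# exactly the mismatch this kind of check is designed to flag.
- uses: some-org/build-action@1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b # v4.1.2
```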
octorian 3 hours ago [-]
I'm personally not a fan of GitHub Actions, because of those dependencies outside your control, and more because they're a pain to debug. A lot of the time it feels like I'm tinkering with this huge script, then holding my breath and hoping I got it right.
The reason I use them, however, is that it's more trouble than it's worth to maintain build servers for the three platforms I care about (Windows, macOS, Linux) myself, especially for projects that get built sporadically. I think one reason for this pain is that while you can easily run VMs for Windows and Linux on the same host, macOS is kinda its own special unicorn and might need a dedicated box. (But even that aside, maintaining machines you don't use every day can get annoying.)
pimeys 1 hours ago [-]
We just use GHA as a simple caller, and everything is coded in nix scripts. The best part of this is how you can call the CI run directly from your own machine and it works the same.
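A sketch of that caller pattern (the `nix run` target and the pinned action versions are illustrative): the workflow only checks out, installs Nix, and delegates, so the same command runs unchanged on a laptop:

```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4              # pin to a full SHA in real use
      - uses: cachix/install-nix-action@v27    # illustrative version; pin likewise
      # Everything below this line is identical to what you'd run locally.
      - run: nix run .#ci
```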
KolmogorovComp 7 hours ago [-]
This is really what LLMs ought to bring in terms of security: the ability to break things faster, given that it's now easier for maintainers to fix them.
This has downsides of course, moving further into the "everything rots so fast these days" trope, but we live in an adversarial world where the threat is constantly evolving.
Tomorrow (today), servers and repos won't be scanned by scripts anymore but by increasingly capable models, with knowledge of more security issues than many researchers.
tomaytotomato 7 hours ago [-]
<tangent>
GitHub Actions is running like treacle now, even when our company pays lots of money for cloud and private GitHub runners.
I know it's the go-to punchbag, but I think enabling Copilot reviews globally for a large proportion of GitHub was a bit hasty.
Security problems aside, if it continues this way, people won't be able to ship and deploy code from GitHub Actions.
We might, dare I say it, have to go back to self-hosted Jenkins or Travis CI.
tagraves 5 hours ago [-]
shameless self plug, but please check out RWX! (rwx.com)
kfarr 4 hours ago [-]
I still don't understand why the official github pages action is on an account called "peaceiris" ?? peaceiris/actions-gh-pages@v3
pull_request_target is criminally negligent -- GitHub should simply disable it.
Running unvalidated code from any random PR with access to account secrets has no legitimate use case that outweighs its unbounded risk.
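For anyone auditing their workflows, the dangerous shape looks like this: `pull_request_target` runs with the base repository's permissions and secrets, and explicitly checking out the PR head hands that privileged context to untrusted code:

```yaml
on: pull_request_target   # job runs with base-repo secrets and token

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # pin to a full SHA in real use
        with:
          # DANGER: checks out the untrusted PR head while secrets and a
          # privileged GITHUB_TOKEN are available to the job.
          ref: ${{ github.event.pull_request.head.sha }}
      - run: npm install && npm test   # attacker-controlled scripts run here
```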
faangguyindia 7 hours ago [-]
I just have a Spot instance we use for our builds. It's turned on via serverless, runs its job with a timeout, and exits.
Lately I don't use any managed services, and life couldn't be simpler.
kevin_nisbet 7 hours ago [-]
My team has been using https://runs-on.com/ for AWS instance runners; we've had a few glitches, but it's largely been great.
indigodaddy 7 hours ago [-]
This aligns nicely with today's/current GitHub Actions outage
iso1631 7 hours ago [-]
Github outage? Must be a Y in the day
globular-toast 4 hours ago [-]
I thought GitHub was great back in the day. My account goes back to 2009. It was so much better than what came before, e.g. Sourceforge. Admittedly, the centralised nature was a problem.
I was heartbroken when Microsoft bought it. There should be a way for citizens to rebel against such things. It feels like it's been on a downward trajectory ever since.
ossianericson 4 hours ago [-]
The OIDC federation between the runner and the cloud resources it touches: that credential gets created once, permissive enough not to block the first deploy, and it is not what gets reviewed when a pinning incident happens. Everyone is looking at the action; the identity it runs as just sits there.
nulltrace 3 hours ago [-]
A common mistake is trusting the repo instead of the workflow; then any workflow inherits the same cloud access.
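One mitigation is to scope trust on both sides. In the workflow, request the OIDC token with minimal permissions; on the cloud side, condition the role's trust policy on the token's full `sub` claim (format per GitHub's OIDC documentation) rather than a wildcard over the whole repo. A sketch, with repo and environment names invented:

```yaml
permissions:
  id-token: write   # mint an OIDC token for this job only
  contents: read    # and nothing else

# The issued token carries a subject claim along the lines of:
#   sub: repo:my-org/my-repo:environment:prod
# Matching the cloud role's trust condition against that exact value (not
# repo:my-org/my-repo:*) means a random workflow in the same repo cannot
# assume the deploy identity.
```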
[1]: https://blog.howardjohn.info/posts/blacksmith-gha/
[2]: https://binhong.me/blog/github-action-runner-alternatives/
https://docs.renovatebot.com/
I wrote a couple of blog posts on it, and a makeshift way of tackling that https://developerwithacat.com/blog/202604/github-actions-sup...
[1]: https://docs.zizmor.sh/audits/#known-vulnerable-actions
You can see it in the source here[1].
[1]: https://github.com/zizmorcore/zizmor/blob/db5ed6b3bb445848a8...
- https://github.com/sethvargo/ratchet for pinning external Actions/Workflows to specific commit hashes
[1]: https://github.com/actions/deploy-pages