The available options for running your own private Docker registry aren’t great. The official distribution image is insecure unless you set up an auth proxy in front of it, which feels like a weird hack. The other solutions (Quay, GitLab, etc.) feel like overkill. AWS Elastic Container Registry, on the other hand, is cheap and simple. The problem? They’re following security best practices and don’t allow you to get a static shared key for auth, making connecting your cluster to it nontrivial. Fortunately, there’s a solution: the ECR Credential Provider. However, setting this up isn’t fully documented anywhere that I could find, so I’ve decided to do that here.
I’m writing this on November 24, 2025, with K3s v1.33.5 and ECR cred provider 1.31.9. It should work with future versions, and I’ll try to remember to update this if I need to make any changes in the future, but keep that in mind if you run into any issues. Also, this assumes basic familiarity with Docker, Kubernetes, and AWS. I’m targeting someone with the level of knowledge I had yesterday, not writing an exhaustive tutorial which covers everything involved.
First, clone the cred provider’s repo, since they don’t provide prebuilt binaries:
```shell
git clone https://github.com/kubernetes/cloud-provider-aws.git
```

> **Warning**
>
> They publish release tarballs on GitHub, but the build attempts to grab a version string from git. If you build from a tarball, that string will be missing and the kubelet will fail to load the plugin later, with the error:
>
> ```
> plugin.go: Failed getting credential from external registry credential provider: error execing credential provider plugin ecr-credential-provider for image example.com/imagename:latest: exit status 2: panic: version string "" doesn't match expected regular expression: "^v(\d+\.\d+\.\d+)"
> ```
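Before building, you can sanity-check that your checkout actually has the version information the build wants (the version string comes from git, which is why tarball builds hit the empty-version panic above):

```shell
# Run from the directory you cloned into. If this prints a tag like
# v1.31.9, the build gets a usable version string; if it errors out,
# you'd end up with the kubelet-breaking empty-version binary.
git -C cloud-provider-aws describe --tags
```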
Next, build the plugin:
```shell
cd cloud-provider-aws
make ecr-credential-provider
```

Copy the resulting binary to your kubelet’s plugin directory. For K3s, that looks like this:

```shell
mv ecr-credential-provider /var/lib/rancher/credentialprovider/bin/
```

> **Note**
>
> If you’re using upstream Kubernetes, minikube, microk8s, etc., the paths will be different. This guide should otherwise still work (though I haven’t tested with anything other than K3s). You could also put the binary somewhere else and pass in an argument to the kubelet to tell it to look there instead.
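For a non-K3s kubelet, that wiring is done with two kubelet flags. The paths below are illustrative, not mandated; pick whatever you like, as long as the flags and the file locations agree:

```shell
# Illustrative locations for an upstream kubelet (not the K3s defaults).
mkdir -p /etc/kubernetes/credential-providers
install -m 0755 ecr-credential-provider /etc/kubernetes/credential-providers/

# Then start the kubelet with:
#   --image-credential-provider-bin-dir=/etc/kubernetes/credential-providers
#   --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml
```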
Then, create a configuration file to tell kubelet to load the plugin. For K3s, this file should be placed at /var/lib/rancher/credentialprovider/config.yaml. The same location caveats as before apply here as well. The configuration file should contain the following content:
```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider
    matchImages:
      - '*.dkr.ecr.*.amazonaws.com'
      - '*.dkr.ecr.*.amazonaws.com.cn'
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    defaultCacheDuration: '0'
```

Finally, create an AWS IAM user with ECR pull permissions. How to accomplish this is left as an exercise to the reader. Grab its access key and secret key and place them in /root/.aws/credentials:
```ini
[default]
aws_access_key_id = AKIxxxxxxxxxxxxxxx
aws_secret_access_key = blahblahblah
```

> **Tip**
>
> You could instead specify your AWS credentials directly in your `config.yaml` file, under the `env` key, as documented here. Personally, I find the ability to pull images directly over a root SSH session convenient for debugging, so I took the credentials-file approach.
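For reference, if you did go the config-file route, the provider entry would gain an `env` list. This is a sketch, not something I've run: the variable names are the standard AWS SDK ones, and the values are placeholders.

```yaml
  - name: ecr-credential-provider
    matchImages:
      - '*.dkr.ecr.*.amazonaws.com'
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    defaultCacheDuration: '0'
    env:
      - name: AWS_ACCESS_KEY_ID
        value: AKIxxxxxxxxxxxxxxx
      - name: AWS_SECRET_ACCESS_KEY
        value: blahblahblah
```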
Restart the kubelet:
```shell
systemctl restart k3s
```

You’re done. You should be able to pull images from your ECR now. If it’s not working, check your kubelet logs for errors.
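A few debugging moves I found useful (the registry hostname and image below are placeholders; substitute your own). K3s runs the kubelet in-process, so kubelet errors land in the k3s unit's journal, and since the kubelet talks to the plugin by writing a CredentialProviderRequest JSON to its stdin, you can also exercise the plugin by hand:

```shell
# 1. Look for credential-provider errors in the kubelet (k3s) logs.
journalctl -u k3s --since '10 min ago' | grep -i credential

# 2. Try a pull by hand through the node's containerd (K3s bundles crictl).
k3s crictl pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest

# 3. Invoke the plugin directly, the way the kubelet does. On success it
#    should print a CredentialProviderResponse with a short-lived token.
echo '{"apiVersion":"credentialprovider.kubelet.k8s.io/v1","kind":"CredentialProviderRequest","image":"123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest"}' \
  | /var/lib/rancher/credentialprovider/bin/ecr-credential-provider
```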
Further reading: