This provides an option for #223 without fully resolving it. (I think.)
Essentially, it acts very similarly to nginx's `gzip_static` and related options: it checks for the existence of pre-compressed files and serves those instead if the client allows it. I couldn't find a pre-existing way to properly parse the `Accept-Encoding` header (admittedly, I didn't look very hard), so I implemented one on my own that should be fine.
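For reference, here's a minimal sketch in Go of the kind of parsing this involves (not the PR's actual code; `acceptedEncodings` is a hypothetical helper): split the header on commas, honour q-values, and keep only the encodings we can serve pre-compressed.
```go
package server

import (
	"sort"
	"strconv"
	"strings"
)

// acceptedEncodings returns the encodings from an Accept-Encoding header value
// that we have pre-compressed variants for (gzip, br, zstd), ordered by the
// client's q-values, highest first.
func acceptedEncodings(header string) []string {
	supported := map[string]bool{"gzip": true, "br": true, "zstd": true}
	type choice struct {
		name string
		q    float64
	}
	var choices []choice
	for _, part := range strings.Split(header, ",") {
		fields := strings.Split(strings.TrimSpace(part), ";")
		name := strings.ToLower(strings.TrimSpace(fields[0]))
		if !supported[name] {
			continue // identity, *, and unsupported codings are ignored
		}
		q := 1.0 // a missing q-value means "fully acceptable"
		for _, param := range fields[1:] {
			param = strings.TrimSpace(param)
			if v, ok := strings.CutPrefix(param, "q="); ok {
				if f, err := strconv.ParseFloat(v, 64); err == nil {
					q = f
				}
			}
		}
		if q > 0 { // q=0 means "not acceptable"
			choices = append(choices, choice{name, q})
		}
	}
	sort.SliceStable(choices, func(i, j int) bool { return choices[i].q > choices[j].q })
	out := make([]string, 0, len(choices))
	for _, c := range choices {
		out = append(out, c.name)
	}
	return out
}
```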
This should hopefully not have the same DoS vulnerabilities as #302, since it relies on the existing caching system. Compressed versions of files are cached just like any other files, and that includes the cache for missing files as well.
The compressed files are also accessible directly, and this won't automatically decompress them. So, if you access a `.tar.gz` file directly, it will still be downloaded as the gzipped version, although you now gain the option to download the `.tar` directly and decompress it in transit. (Which doesn't affect the server at all, just the client's way of interpreting it.)
----
One key thing this change also adds is a short-circuit when accessing directories: these always return 404 via the API, but previously we'd try the cache anyway and go through that route, which was kind of slow. With the additional encodings, the worst case would now also probe for `.gz`, `.br`, and `.zst` files, which feels wrong. So instead, it always falls back to the index-check behaviour if the path ends in a slash or is empty (which is implicitly just a slash).
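Roughly, the lookup logic works like this (a hypothetical illustration of the short-circuit, not the actual code; candidate order is illustrative):
```go
package server

import "strings"

// candidatePaths illustrates the short-circuit: directory-style requests skip
// straight to the index-check behaviour, while file requests may additionally
// probe for pre-compressed variants.
func candidatePaths(urlPath string, encodings []string) []string {
	if urlPath == "" || strings.HasSuffix(urlPath, "/") {
		// Directory request: the API would 404 on the raw path anyway, so
		// don't probe it (or its .gz/.br/.zst variants) at all.
		return []string{urlPath + "index.html"}
	}
	// File request: prefer a pre-compressed variant the client accepts,
	// falling back to the raw path.
	var candidates []string
	for _, enc := range encodings {
		switch enc {
		case "gzip":
			candidates = append(candidates, urlPath+".gz")
		case "br":
			candidates = append(candidates, urlPath+".br")
		case "zstd":
			candidates = append(candidates, urlPath+".zst")
		}
	}
	return append(candidates, urlPath)
}
```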
----
For testing, I set up this repo: https://codeberg.org/clarfonthey/testrepo
I ended up realising that LFS isn't supported by default with `just dev`, so it only ended up working once I made sure the files in the repo *didn't* use LFS.
Assuming you've run `just dev`, you can go directly to this page in the browser: https://clarfonthey.localhost.mock.directory:4430/testrepo/
You can also try a few cURL commands:
```shell
curl https://clarfonthey.localhost.mock.directory:4430/testrepo/ --verbose --insecure
curl -H 'Accept-Encoding: gzip' https://clarfonthey.localhost.mock.directory:4430/testrepo/ --verbose --insecure | gunzip -
curl -H 'Accept-Encoding: br' https://clarfonthey.localhost.mock.directory:4430/testrepo/ --verbose --insecure | brotli --decompress -
curl -H 'Accept-Encoding: zstd' https://clarfonthey.localhost.mock.directory:4430/testrepo/ --verbose --insecure | zstd --decompress -
```
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/387
Reviewed-by: Gusted <gusted@noreply.codeberg.org>
Co-authored-by: ltdk <usr@ltdk.xyz>
Co-committed-by: ltdk <usr@ltdk.xyz>
This PR adds the `$NO_DNS_01` option (disabled by default), which removes the DNS ACME provider and replaces the wildcard certificate with individual certificates obtained using the TLS ACME provider.
This option allows an instance to work without having to manage access tokens for the DNS provider. On the flip side, it means that a certificate may be requested for each subdomain. To limit the risk of DoS, the existence of the user/org corresponding to a subdomain is checked before requesting a cert; however, this mitigation is not enough for a forge with a high number of users/orgs.
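A rough sketch of that check (the names here are hypothetical, not the PR's actual functions):
```go
package server

import (
	"context"
	"strings"
)

// shouldObtainCert sketches the DoS mitigation: before the TLS ACME provider
// is asked for a per-subdomain certificate, make sure the subdomain actually
// maps to an existing user/org on the forge.
func shouldObtainCert(ctx context.Context, host, mainDomain string, ownerExists func(context.Context, string) (bool, error)) (bool, error) {
	// e.g. host "alice.codeberg.page" with mainDomain ".codeberg.page" -> owner "alice"
	owner, ok := strings.CutSuffix(host, mainDomain)
	if !ok {
		return false, nil // not a subdomain of the pages domain
	}
	owner = strings.TrimSuffix(owner, ".")
	exists, err := ownerExists(ctx, owner) // e.g. a Gitea/Forgejo API lookup
	if err != nil {
		return false, err // forge unreachable: don't request a cert blindly
	}
	return exists, nil
}
```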
Co-authored-by: 6543 <6543@obermui.de>
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/290
Reviewed-by: Moritz Marquardt <momar@noreply.codeberg.org>
Co-authored-by: Jean-Marie 'Histausse' Mineau <histausse@protonmail.com>
Co-committed-by: Jean-Marie 'Histausse' Mineau <histausse@protonmail.com>
- Currently, if the canonical domain validation fails (either for legitimate reasons or because of bugs, such as the request to Gitea/Forgejo failing), the main domain certificate is used, which for custom domains triggers a security error since the certificate isn't issued to the custom domain.
- This patch handles the situation more gracefully: obtaining a certificate is only disallowed when the domain validation fails, so if a certificate already exists it can still be used even if the canonical domain validation fails. There's a small side effect: legitimate users who remove domains from `.domains` will still be able to use the removed domain (as long as the DNS records exist) until the certificate currently held by pages-server expires.
- Given the increased usage of custom domains that is resulting in errors, I think the fix outweighs that side effect.
- In order to future-proof against future slowdowns of instances, add a retry mechanism to the domain validation function, so that it's more likely to succeed even if the instance is slow to respond (a sketch of the idea follows this list).
- Refactor the code a bit and add some comments.
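The retry mentioned above is essentially this pattern (a sketch under assumed names, not the exact code):
```go
package server

import "time"

// withRetry runs the canonical-domain validation a few times with a short
// pause between attempts, so a slow or briefly unresponsive Gitea/Forgejo
// instance doesn't immediately cause the validation to be treated as failed.
func withRetry(validate func() (bool, error), attempts int, pause time.Duration) bool {
	for i := 0; i < attempts; i++ {
		ok, err := validate()
		if err == nil {
			return ok // got a definitive answer from the instance
		}
		time.Sleep(pause) // transient error: wait and try again
	}
	return false // never got an answer; treat the validation as failed
}
```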
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-authored-by: 6543 <6543@obermui.de>
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/160
Reviewed-by: 6543 <6543@obermui.de>
Co-authored-by: Gusted <gusted@noreply.codeberg.org>
Co-committed-by: Gusted <gusted@noreply.codeberg.org>
Added a new token bucket named `acmeClientFailLimit` to avoid being banned because of Let's Encrypt's [failed validation limit](https://letsencrypt.org/docs/failed-validation-limit/).
The behaviour is similar to the other limiters: it blocks the `obtainCert` func to keep the rate under the limit.
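As a sketch, using `golang.org/x/time/rate` as a stand-in token bucket (the actual limiter type and the numbers may differ):
```go
package server

import (
	"context"
	"time"

	"golang.org/x/time/rate"
)

// acmeClientFailLimit stands in for the new token bucket: Let's Encrypt only
// allows a handful of failed validations per hostname per hour, so keep cert
// requests comfortably below that. The numbers here are illustrative.
var acmeClientFailLimit = rate.NewLimiter(rate.Every(time.Hour/5), 5)

// obtainCert blocks on the limiter before talking to the ACME provider, just
// like it already does for the other rate limiters.
func obtainCert(ctx context.Context, domain string) error {
	if err := acmeClientFailLimit.Wait(ctx); err != nil {
		return err // context cancelled while waiting for a token
	}
	// ... request/renew the certificate for domain via the ACME client ...
	return nil
}
```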
Co-authored-by: fsologureng <sologuren@estudiohum.cl>
Co-authored-by: 6543 <6543@obermui.de>
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/151
Reviewed-by: 6543 <6543@obermui.de>
Co-authored-by: Felipe Leopoldo Sologuren Gutiérrez <fsologureng@noreply.codeberg.org>
Co-committed-by: Felipe Leopoldo Sologuren Gutiérrez <fsologureng@noreply.codeberg.org>
- It's not guaranteed that `tls.X509KeyPair` will set `c.Leaf`.
- This patch fixes this by using a wrapper that parses the leaf certificate (from its raw bytes) if `c.Leaf` wasn't set; see the sketch below.
- Resolves #149
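The wrapper boils down to something like this (a sketch, not necessarily the exact code):
```go
package server

import (
	"crypto/tls"
	"crypto/x509"
)

// keyPairWithLeaf wraps tls.X509KeyPair and guarantees that c.Leaf is set by
// parsing the first (leaf) certificate from its DER bytes when necessary.
func keyPairWithLeaf(certPEM, keyPEM []byte) (tls.Certificate, error) {
	c, err := tls.X509KeyPair(certPEM, keyPEM)
	if err != nil {
		return tls.Certificate{}, err
	}
	if c.Leaf == nil && len(c.Certificate) > 0 {
		leaf, err := x509.ParseCertificate(c.Certificate[0])
		if err != nil {
			return tls.Certificate{}, err
		}
		c.Leaf = leaf
	}
	return c, nil
}
```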
Co-authored-by: Gusted <postmaster@gusted.xyz>
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/150
Reviewed-by: 6543 <6543@obermui.de>
Co-authored-by: Gusted <gusted@noreply.codeberg.org>
Co-committed-by: Gusted <gusted@noreply.codeberg.org>
As per [the documentation](https://pkg.go.dev/net/http#Serve), it doesn't enable HTTP/2 by default unless we advertise it via the `NextProtos` option.
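Concretely, that means making sure the TLS config handed to the listener advertises `h2` via ALPN; a minimal sketch (the helper name is hypothetical):
```go
package server

import "crypto/tls"

// newTLSConfig shows the idea: with our own TLS listener, HTTP/2 is only
// negotiated if "h2" is advertised via ALPN, so list it in NextProtos
// explicitly (with HTTP/1.1 as the fallback).
func newTLSConfig(getCert func(*tls.ClientHelloInfo) (*tls.Certificate, error)) *tls.Config {
	return &tls.Config{
		GetCertificate: getCert,
		NextProtos:     []string{"h2", "http/1.1"},
	}
}
```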
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/137
Reviewed-by: 6543 <6543@obermui.de>
Co-authored-by: Gusted <gusted@noreply.codeberg.org>
Co-committed-by: Gusted <gusted@noreply.codeberg.org>
we have big functions that handle all the stuff ... we should split this into smaller chunks so we can test them separately and make clear cuts in what happens where
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/135
- For production (*cough* Codeberg *cough*), it's important not to use mock certs, so fail right from the start if that would happen instead of trying to "handle it gracefully", as it would break production (see the sketch below).
- Resolves #131
CC @6543
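Something along these lines (a minimal sketch; the names and the environment check are hypothetical):
```go
package server

import "log"

// failOnMockCerts sketches the startup check: if we're running in production
// but would end up serving mock certificates, abort immediately instead of
// limping along with a broken setup.
func failOnMockCerts(envName string, usingMockCerts bool) {
	if envName == "production" && usingMockCerts {
		log.Fatalln("refusing to start: mock certificates must not be used in production")
	}
}
```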
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/133
Reviewed-by: 6543 <6543@obermui.de>
Co-authored-by: Gusted <gusted@noreply.codeberg.org>
Co-committed-by: Gusted <gusted@noreply.codeberg.org>
- Actually log useful information at their respective log level.
- Add logs in hot-paths to be able to deep-dive and debug specific requests (see server/handler.go)
- Add more information to existing fields (e.g. the host the user is visiting; this was noted by @fnetX). A short sketch of the logging follows.
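For illustration, the hot-path logging looks roughly like this (assuming a zerolog-style logger; the function and message are illustrative):
```go
package server

import (
	"net/http"

	"github.com/rs/zerolog/log"
)

// logRequest sketches the per-request debug logging added to the hot path:
// structured fields for the host the user is visiting and the requested path
// make it possible to trace individual requests.
func logRequest(req *http.Request) {
	log.Debug().
		Str("host", req.Host).
		Str("path", req.URL.Path).
		Msg("handling request")
}
```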
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Reviewed-on: https://codeberg.org/Codeberg/pages-server/pulls/116
Reviewed-by: 6543 <6543@noreply.codeberg.org>
Co-authored-by: Gusted <gusted@noreply.codeberg.org>
Co-committed-by: Gusted <gusted@noreply.codeberg.org>