gotwebd: openat(2) and less params for scraping repo info
On Tue, Nov 22, 2022 at 07:43:17PM +0100, Omar Polo wrote:
> On 2022/11/22 19:23:17 +0100, Stefan Sperling <stsp@stsp.name> wrote:
> > On Tue, Nov 22, 2022 at 12:07:49PM +0100, Omar Polo wrote:
> > > -        if (asprintf(&d_file, "%s/description", dir) == -1)
> > > -                return got_error_from_errno("asprintf");
> > > -
> > > -        f = fopen(d_file, "r");
> > > -        if (f == NULL) {
> > > +        fd = openat(dir, "description", O_RDONLY);
> > > +        if (fd == -1) {
> > >                  if (errno != ENOENT && errno != EACCES)
> > > -                        error = got_error_from_errno2("fopen", d_file);
> > > +                        error = got_error_from_errno2("openat", "description");
> >
> > The error message previously contained the whole path.
> > It might be more useful to see an absolute path in error logs.
> > If you want to retain this you could pass both the path to the
> > directory and the open fd, and then use got_error_from_errno_fmt()
> > here.
>
> ah, right.  for some errors it's useful to log the full path!
>
> > Same applies to gotweb_get_clone_url().
> >
> > >                  goto done;
> > >          }
> > >
> > > -        if (fseek(f, 0, SEEK_END) == -1) {
> > > -                error = got_ferror(f, GOT_ERR_IO);
> > > -                goto done;
> > > -        }
> > > -        len = ftell(f);
> > > +        len = lseek(fd, 0, SEEK_END);
> > >          if (len == -1) {
> > > -                error = got_ferror(f, GOT_ERR_IO);
> > > +                error = got_error_from_errno("lseek");
> > >                  goto done;
> >
> > This could instead use fstat() and read sb.st_size. That is what
> > we usually do and avoids having to seek back to the beginning below.
>
> agreed.  I kept the seeks because it was already there, but fstat is
> cleaner.
>
> Thanks!

Looks nice. ok stsp@

> P.S.: the check len < SIZE_MAX - 1 is there just to avoid the implicit
> cast.  It's silly to try to calloc a number like that, but here i'd
> like to focus on just switching to openat and reducing the seeks, a
> sensible upper limit can be discussed and applied later.

Yeah. We should probably be asking getrlimit() for RLIMIT_DATA more
often.
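
For reference, a minimal stand-alone sketch of the pattern the thread
settles on: openat(2) against an already-open directory fd, fstat(2)
for the size instead of the SEEK_END/ftell dance, and a guard on the
off_t to size_t conversion.  The got_error_* helpers are not available
outside got, so plain err(3)/warn(3) reporting stands in for them;
read_description() and main() below are illustrative, not got code.

#include <sys/stat.h>

#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static char *
read_description(int dir)
{
        struct stat sb;
        char *buf = NULL;
        ssize_t r;
        int fd;

        fd = openat(dir, "description", O_RDONLY);
        if (fd == -1) {
                /* a missing or unreadable description is not an error */
                if (errno != ENOENT && errno != EACCES)
                        warn("openat: description");
                return NULL;
        }

        /* fstat() yields the size without an lseek() round trip */
        if (fstat(fd, &sb) == -1) {
                warn("fstat");
                goto done;
        }

        /* guard the off_t -> size_t conversion, as in the patch */
        if (sb.st_size < 0 || (uintmax_t)sb.st_size >= SIZE_MAX - 1)
                goto done;

        buf = calloc(1, (size_t)sb.st_size + 1);
        if (buf == NULL) {
                warn("calloc");
                goto done;
        }

        /* a single read suffices for small description files; calloc
         * already NUL-terminated the buffer */
        r = read(fd, buf, (size_t)sb.st_size);
        if (r == -1) {
                warn("read");
                free(buf);
                buf = NULL;
        }
done:
        close(fd);
        return buf;
}

int
main(void)
{
        char *desc;
        int dir;

        dir = open(".", O_RDONLY | O_DIRECTORY);
        if (dir == -1)
                err(1, "open");
        desc = read_description(dir);
        if (desc != NULL) {
                printf("%s", desc);
                free(desc);
        }
        close(dir);
        return 0;
}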
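The closing remark about getrlimit(2) could look something like the
following: derive an upper bound for a single allocation from
RLIMIT_DATA rather than accepting anything up to SIZE_MAX.  The
max_alloc_size() name and the /4 headroom divisor are hypothetical
choices for illustration, not got code.

#include <sys/resource.h>

#include <stdint.h>
#include <stdio.h>

static size_t
max_alloc_size(void)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_DATA, &rl) == -1 ||
            rl.rlim_cur == RLIM_INFINITY)
                return SIZE_MAX - 1;    /* fall back to the old check */

        /* leave headroom for the rest of the process */
        if (rl.rlim_cur / 4 > SIZE_MAX - 1)
                return SIZE_MAX - 1;
        return (size_t)(rl.rlim_cur / 4);
}

int
main(void)
{
        printf("cap single allocations at %zu bytes\n", max_alloc_size());
        return 0;
}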