GNU bug report logs - #68455
[PATCH] gnu: llama-cpp: Update to 1873.
Reported by: David Pflug <david <at> pflug.io>
Date: Sun, 14 Jan 2024 20:34:01 UTC
Severity: normal
Tags: patch
Done: André Batista <nandre <at> riseup.net>
Message #8 received at 68455 <at> debbugs.gnu.org:
Hello David,
> +(define-public python-gguf
> +  (package
> +    (name "python-gguf")
> +    (version "0.6.0")
> +    (source
> +     (origin
> +       (method url-fetch)
> +       (uri (pypi-uri "gguf" version))
> +       (sha256
> +        (base32 "0rbyc2h3kpqnrvbyjvv8a69l577jv55a31l12jnw21m1lamjxqmj"))))
> +    (build-system pyproject-build-system)
> +    (arguments
> +     `(#:phases
> +       (modify-phases %standard-phases
> +         (delete 'check))))
> +    (inputs (list poetry python-pytest))
> +    (propagated-inputs (list python-numpy))
> +    (home-page "https://ggml.ai")
> +    (synopsis "Read and write ML models in GGUF for GGML")
> +    (description "Read and write ML models in GGUF for GGML")
> +    (license license:expat)))
This should be part of a separate patch. Can you send a v2?
Thanks,
Mathieu
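For reference, a v2 split along these lines could be prepared with the usual Guix
patch workflow. The commands below are only an illustrative sketch, assuming the
llama-cpp update and the new python-gguf package end up as two separate commits on
top of origin/master:

    # Rework local history so the python-gguf addition is its own commit:
    git rebase -i origin/master
    # Produce a two-patch v2 series from the topmost two commits:
    git format-patch -v2 -2
    # Send both patches to this bug:
    git send-email --to=68455@debbugs.gnu.org v2-*.patch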