GNU bug report logs - #68455
[PATCH] gnu: llama-cpp: Update to 1873.


Package: guix-patches;

Reported by: David Pflug <david <at> pflug.io>

Date: Sun, 14 Jan 2024 20:34:01 UTC

Severity: normal

Tags: patch

Done: André Batista <nandre <at> riseup.net>

Bug is archived. No further changes may be made.

To add a comment to this bug, you must first unarchive it, by sending
a message to control AT debbugs.gnu.org, with unarchive 68455 in the body.
You can then email your comments to 68455 AT debbugs.gnu.org in the normal way.




Report forwarded to guix-patches <at> gnu.org:
bug#68455; Package guix-patches. (Sun, 14 Jan 2024 20:34:02 GMT) Full text and rfc822 format available.

Acknowledgement sent to David Pflug <david <at> pflug.io>:
New bug report received and forwarded. Copy sent to guix-patches <at> gnu.org. (Sun, 14 Jan 2024 20:34:02 GMT) Full text and rfc822 format available.

Message #5 received at submit <at> debbugs.gnu.org (full text, mbox):

From: David Pflug <david <at> pflug.io>
To: guix-patches <at> gnu.org
Cc: David Pflug <david <at> pflug.io>
Subject: [PATCH] gnu: llama-cpp: Update to 1873.
Date: Sun, 14 Jan 2024 15:32:45 -0500
* gnu/packages/machine-learning.scm (llama-cpp): Update to 1873.

Change-Id: I091cd20192743c87b497ea3c5fd18a75ada75d9d
---
 gnu/packages/machine-learning.scm | 133 ++++++++++++++++++------------
 1 file changed, 78 insertions(+), 55 deletions(-)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 1616738399..0cdfe7bb08 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -22,6 +22,7 @@
 ;;; Copyright © 2023 Navid Afkhami <navid.afkhami <at> mdc-berlin.de>
 ;;; Copyright © 2023 Zheng Junjie <873216071 <at> qq.com>
 ;;; Copyright © 2023 Troy Figiel <troy <at> troyfigiel.com>
+;;; Copyright © 2023 David Pflug <david <at> pflug.io>
 ;;;
 ;;; This file is part of GNU Guix.
 ;;;
@@ -517,63 +518,63 @@ (define-public guile-aiscm-next
   (deprecated-package "guile-aiscm-next" guile-aiscm))
 
 (define-public llama-cpp
-  (let ((commit "f31b5397143009d682db90fd2a6cde83f1ef00eb")
-        (revision "0"))
-    (package
-      (name "llama-cpp")
-      (version (git-version "0.0.0" revision commit))
-      (source
-       (origin
-         (method git-fetch)
-         (uri (git-reference
-               (url "https://github.com/ggerganov/llama.cpp")
-               (commit (string-append "master-" (string-take commit 7)))))
-         (file-name (git-file-name name version))
-         (sha256
-          (base32 "0ys6n53n032zq1ll9f3vgxk8sw0qq7x3fi7awsyy13adzp3hn08p"))))
-      (build-system cmake-build-system)
-      (arguments
-       (list
-        #:modules '((ice-9 textual-ports)
-                    (guix build utils)
-                    ((guix build python-build-system) #:prefix python:)
-                    (guix build cmake-build-system))
-        #:imported-modules `(,@%cmake-build-system-modules
-                             (guix build python-build-system))
-        #:phases
-        #~(modify-phases %standard-phases
-            (add-before 'install 'install-python-scripts
-              (lambda _
-                (let ((bin (string-append #$output "/bin/")))
-                  (define (make-script script)
-                    (let ((suffix (if (string-suffix? ".py" script) "" ".py")))
-                      (call-with-input-file
-                          (string-append "../source/" script suffix)
-                        (lambda (input)
-                          (call-with-output-file (string-append bin script)
-                            (lambda (output)
-                              (format output "#!~a/bin/python3\n~a"
-                                      #$(this-package-input "python")
-                                      (get-string-all input))))))
-                      (chmod (string-append bin script) #o555)))
-                  (mkdir-p bin)
-                  (make-script "convert-pth-to-ggml")
-                  (make-script "convert-lora-to-ggml")
-                  (make-script "convert"))))
-            (add-after 'install-python-scripts 'wrap-python-scripts
-              (assoc-ref python:%standard-phases 'wrap))
-            (replace 'install
-              (lambda _
-                (copy-file "bin/main" (string-append #$output "/bin/llama")))))))
-      (inputs (list python))
-      (propagated-inputs
-       (list python-numpy python-pytorch python-sentencepiece))
-      (home-page "https://github.com/ggerganov/llama.cpp")
-      (synopsis "Port of Facebook's LLaMA model in C/C++")
-      (description "This package provides a port to Facebook's LLaMA collection
+  (package
+    (name "llama-cpp")
+    (version "1873")
+    (source
+     (origin
+       (method git-fetch)
+       (uri (git-reference
+             (url "https://github.com/ggerganov/llama.cpp")
+             (commit (string-append "b" version))))
+       (file-name (git-file-name name version))
+       (sha256
+        (base32 "11may9gkafg5bfma5incijvkypjgx9778gmygxp3x2dz1140809d"))))
+    (build-system cmake-build-system)
+    (arguments
+     (list
+      #:modules '((ice-9 textual-ports)
+                  (guix build utils)
+                  ((guix build python-build-system) #:prefix python:)
+                  (guix build cmake-build-system))
+      #:imported-modules `(,@%cmake-build-system-modules
+                           (guix build python-build-system))
+      #:phases
+      #~(modify-phases %standard-phases
+          (add-before 'install 'install-python-scripts
+            (lambda _
+              (let ((bin (string-append #$output "/bin/")))
+                (define (make-script script)
+                  (let ((suffix (if (string-suffix? ".py" script) "" ".py")))
+                    (call-with-input-file
+                        (string-append "../source/" script suffix)
+                      (lambda (input)
+                        (call-with-output-file (string-append bin script)
+                          (lambda (output)
+                            (format output "#!~a/bin/python3\n~a"
+                                    #$(this-package-input "python")
+                                    (get-string-all input))))))
+                    (chmod (string-append bin script) #o555)))
+                (mkdir-p bin)
+                (make-script "convert-hf-to-gguf")
+                (make-script "convert-llama-ggml-to-gguf")
+                (make-script "convert-lora-to-ggml")
+                (make-script "convert-persimmon-to-gguf")
+                (make-script "convert"))))
+          (add-after 'install-python-scripts 'wrap-python-scripts
+            (assoc-ref python:%standard-phases 'wrap))
+          (replace 'install
+            (lambda _
+              (copy-file "bin/main" (string-append #$output "/bin/llama")))))))
+    (inputs (list python))
+    (propagated-inputs
+     (list python-numpy python-pytorch python-sentencepiece python-gguf))
+    (home-page "https://github.com/ggerganov/llama.cpp")
+    (synopsis "Port of Facebook's LLaMA model in C/C++")
+    (description "This package provides a port to Facebook's LLaMA collection
 of foundation language models.  It requires models parameters to be downloaded
 independently to be able to run a LLaMA model.")
-      (license license:expat))))
+    (license license:expat)))
 
 (define-public mcl
   (package
@@ -5257,3 +5258,25 @@ (define-public oneapi-dnnl
      "OneAPI Deep Neural Network Library (oneDNN) is a cross-platform
 performance library of basic building blocks for deep learning applications.")
     (license license:asl2.0)))
+
+(define-public python-gguf
+  (package
+    (name "python-gguf")
+    (version "0.6.0")
+    (source
+     (origin
+       (method url-fetch)
+       (uri (pypi-uri "gguf" version))
+       (sha256
+        (base32 "0rbyc2h3kpqnrvbyjvv8a69l577jv55a31l12jnw21m1lamjxqmj"))))
+    (build-system pyproject-build-system)
+    (arguments
+      `(#:phases
+        (modify-phases %standard-phases
+                       (delete 'check))))
+    (inputs (list poetry python-pytest))
+    (propagated-inputs (list python-numpy))
+    (home-page "https://ggml.ai")
+    (synopsis "Read and write ML models in GGUF for GGML")
+    (description "Read and write ML models in GGUF for GGML")
+    (license license:expat)))

base-commit: 18393fcdddf5c3d834fa89ebf5f3925fc5b166ed
-- 
2.41.0
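A note on the versioning change in the patch above: upstream llama.cpp tags its builds as "b" followed by a build number, which is why the updated origin derives the commit with (string-append "b" version) instead of the old master-<short-hash> scheme. A minimal sketch of that mapping (the tag name and URL come from the patch; the verification commands are illustrative):

```shell
# The package version "1873" maps to the upstream git tag "b1873",
# mirroring (string-append "b" version) in the origin field above.
version=1873
tag="b${version}"
echo "$tag"   # prints b1873
```

To confirm the tag exists upstream, a reviewer could run `git ls-remote https://github.com/ggerganov/llama.cpp "refs/tags/b1873"` (requires network access), then rebuild with `./pre-inst-env guix build llama-cpp` from a Guix checkout with the patch applied.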





Information forwarded to guix-patches <at> gnu.org:
bug#68455; Package guix-patches. (Wed, 17 Jan 2024 17:30:02 GMT) Full text and rfc822 format available.

Message #8 received at 68455 <at> debbugs.gnu.org (full text, mbox):

From: Mathieu Othacehe <othacehe <at> gnu.org>
To: David Pflug <david <at> pflug.io>
Cc: 68455 <at> debbugs.gnu.org
Subject: Re: [bug#68455] [PATCH] gnu: llama-cpp: Update to 1873.
Date: Wed, 17 Jan 2024 18:29:35 +0100
Hello David,

> +(define-public python-gguf
> +  (package
> +    (name "python-gguf")
> +    (version "0.6.0")
> +    (source
> +     (origin
> +       (method url-fetch)
> +       (uri (pypi-uri "gguf" version))
> +       (sha256
> +        (base32 "0rbyc2h3kpqnrvbyjvv8a69l577jv55a31l12jnw21m1lamjxqmj"))))
> +    (build-system pyproject-build-system)
> +    (arguments
> +      `(#:phases
> +        (modify-phases %standard-phases
> +                       (delete 'check))))
> +    (inputs (list poetry python-pytest))
> +    (propagated-inputs (list python-numpy))
> +    (home-page "https://ggml.ai")
> +    (synopsis "Read and write ML models in GGUF for GGML")
> +    (description "Read and write ML models in GGUF for GGML")
> +    (license license:expat)))

This should be part of a separate patch. Can you send a v2?

Thanks,

Mathieu




Information forwarded to guix-patches <at> gnu.org:
bug#68455; Package guix-patches. (Fri, 26 Jan 2024 12:22:01 GMT) Full text and rfc822 format available.

Message #11 received at 68455 <at> debbugs.gnu.org (full text, mbox):

From: David Pflug <david <at> pflug.io>
To: 68455 <at> debbugs.gnu.org
Cc: David Pflug <david <at> pflug.io>
Subject: [PATCH v2] gnu: llama-cpp: Update to 1873.
Date: Fri, 26 Jan 2024 07:20:21 -0500
* gnu/packages/machine-learning.scm (llama-cpp): Update to 1873.

python-gguf added by #68735

Change-Id: I091cd20192743c87b497ea3c5fd18a75ada75d9d
---
 gnu/packages/machine-learning.scm | 110 +++++++++++++++---------------
 1 file changed, 55 insertions(+), 55 deletions(-)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 0e88f7265b..1d590d1c1b 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -519,63 +519,63 @@ (define-public guile-aiscm-next
   (deprecated-package "guile-aiscm-next" guile-aiscm))
 
 (define-public llama-cpp
-  (let ((commit "f31b5397143009d682db90fd2a6cde83f1ef00eb")
-        (revision "0"))
-    (package
-      (name "llama-cpp")
-      (version (git-version "0.0.0" revision commit))
-      (source
-       (origin
-         (method git-fetch)
-         (uri (git-reference
-               (url "https://github.com/ggerganov/llama.cpp")
-               (commit (string-append "master-" (string-take commit 7)))))
-         (file-name (git-file-name name version))
-         (sha256
-          (base32 "0ys6n53n032zq1ll9f3vgxk8sw0qq7x3fi7awsyy13adzp3hn08p"))))
-      (build-system cmake-build-system)
-      (arguments
-       (list
-        #:modules '((ice-9 textual-ports)
-                    (guix build utils)
-                    ((guix build python-build-system) #:prefix python:)
-                    (guix build cmake-build-system))
-        #:imported-modules `(,@%cmake-build-system-modules
-                             (guix build python-build-system))
-        #:phases
-        #~(modify-phases %standard-phases
-            (add-before 'install 'install-python-scripts
-              (lambda _
-                (let ((bin (string-append #$output "/bin/")))
-                  (define (make-script script)
-                    (let ((suffix (if (string-suffix? ".py" script) "" ".py")))
-                      (call-with-input-file
-                          (string-append "../source/" script suffix)
-                        (lambda (input)
-                          (call-with-output-file (string-append bin script)
-                            (lambda (output)
-                              (format output "#!~a/bin/python3\n~a"
-                                      #$(this-package-input "python")
-                                      (get-string-all input))))))
-                      (chmod (string-append bin script) #o555)))
-                  (mkdir-p bin)
-                  (make-script "convert-pth-to-ggml")
-                  (make-script "convert-lora-to-ggml")
-                  (make-script "convert"))))
-            (add-after 'install-python-scripts 'wrap-python-scripts
-              (assoc-ref python:%standard-phases 'wrap))
-            (replace 'install
-              (lambda _
-                (copy-file "bin/main" (string-append #$output "/bin/llama")))))))
-      (inputs (list python))
-      (propagated-inputs
-       (list python-numpy python-pytorch python-sentencepiece))
-      (home-page "https://github.com/ggerganov/llama.cpp")
-      (synopsis "Port of Facebook's LLaMA model in C/C++")
-      (description "This package provides a port to Facebook's LLaMA collection
+  (package
+    (name "llama-cpp")
+    (version "1873")
+    (source
+     (origin
+       (method git-fetch)
+       (uri (git-reference
+             (url "https://github.com/ggerganov/llama.cpp")
+             (commit (string-append "b" version))))
+       (file-name (git-file-name name version))
+       (sha256
+        (base32 "11may9gkafg5bfma5incijvkypjgx9778gmygxp3x2dz1140809d"))))
+    (build-system cmake-build-system)
+    (arguments
+     (list
+      #:modules '((ice-9 textual-ports)
+                  (guix build utils)
+                  ((guix build python-build-system) #:prefix python:)
+                  (guix build cmake-build-system))
+      #:imported-modules `(,@%cmake-build-system-modules
+                           (guix build python-build-system))
+      #:phases
+      #~(modify-phases %standard-phases
+          (add-before 'install 'install-python-scripts
+            (lambda _
+              (let ((bin (string-append #$output "/bin/")))
+                (define (make-script script)
+                  (let ((suffix (if (string-suffix? ".py" script) "" ".py")))
+                    (call-with-input-file
+                        (string-append "../source/" script suffix)
+                      (lambda (input)
+                        (call-with-output-file (string-append bin script)
+                          (lambda (output)
+                            (format output "#!~a/bin/python3\n~a"
+                                    #$(this-package-input "python")
+                                    (get-string-all input))))))
+                    (chmod (string-append bin script) #o555)))
+                (mkdir-p bin)
+                (make-script "convert-hf-to-gguf")
+                (make-script "convert-llama-ggml-to-gguf")
+                (make-script "convert-lora-to-ggml")
+                (make-script "convert-persimmon-to-gguf")
+                (make-script "convert"))))
+          (add-after 'install-python-scripts 'wrap-python-scripts
+            (assoc-ref python:%standard-phases 'wrap))
+          (replace 'install
+            (lambda _
+              (copy-file "bin/main" (string-append #$output "/bin/llama")))))))
+    (inputs (list python))
+    (propagated-inputs
+     (list python-numpy python-pytorch python-sentencepiece python-gguf))
+    (home-page "https://github.com/ggerganov/llama.cpp")
+    (synopsis "Port of Facebook's LLaMA model in C/C++")
+    (description "This package provides a port to Facebook's LLaMA collection
 of foundation language models.  It requires models parameters to be downloaded
 independently to be able to run a LLaMA model.")
-      (license license:expat))))
+    (license license:expat)))
 
 (define-public mcl
   (package

base-commit: c5453fbfeb0dbd19cb402199fe1e5ad51a051e56
-- 
2.41.0





Information forwarded to guix-patches <at> gnu.org:
bug#68455; Package guix-patches. (Sat, 23 Nov 2024 01:28:01 GMT) Full text and rfc822 format available.

Message #14 received at 68455 <at> debbugs.gnu.org (full text, mbox):

From: David Pflug <david <at> pflug.io>
To: 68455 <at> debbugs.gnu.org
Subject: Re: [bug#68455] [PATCH] gnu: llama-cpp: Update to 1873.
Date: Fri, 22 Nov 2024 20:26:35 -0500
This can be closed. The package has moved on well beyond this commit.
See #70883.

Thanks,




Reply sent to André Batista <nandre <at> riseup.net>:
You have taken responsibility. (Thu, 05 Dec 2024 21:15:02 GMT) Full text and rfc822 format available.

Notification sent to David Pflug <david <at> pflug.io>:
bug acknowledged by developer. (Thu, 05 Dec 2024 21:15:02 GMT) Full text and rfc822 format available.

Message #19 received at 68455-done <at> debbugs.gnu.org (full text, mbox):

From: André Batista <nandre <at> riseup.net>
To: David Pflug <david <at> pflug.io>
Cc: 68455-done <at> debbugs.gnu.org
Subject: Re: [bug#68455] Close.
Date: Thu, 5 Dec 2024 18:14:34 -0300
Hi David,

Fri, 22 Nov 2024 at 20:26:35 (1732317995), david <at> pflug.io wrote:
> This can be closed. The package has moved on well beyond this commit.
> See #70883.
> 

You can close your own bug reports by adding "-done" between the bug
number and the "@" in the email address, as I've done here.

See <https://debbugs.gnu.org/Developer.html> for more info.

Thanks




bug archived. Request was from Debbugs Internal Request <help-debbugs <at> gnu.org> to internal_control <at> debbugs.gnu.org. (Fri, 03 Jan 2025 12:24:08 GMT) Full text and rfc822 format available.

This bug report was last modified 167 days ago.



GNU bug tracking system
Copyright (C) 1999 Darren O. Benham, 1997,2003 nCipher Corporation Ltd, 1994-97 Ian Jackson.