llama.cpp: A Quick Tutorial

1. Download and build llama.cpp

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
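
If you have an NVIDIA GPU, builds from this era can also be compiled with cuBLAS offloading. A sketch, assuming the CUDA toolkit is installed (LLAMA_CUBLAS is the Makefile flag used around this build):

# optional: rebuild with cuBLAS GPU offloading
make clean
make LLAMA_CUBLAS=1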

2. Download llama-2-7b-chat

a. Download from Meta (Facebook) or Hugging Face (see the sketch after this list)
b. Use a downloader script such as llama-dl
c. Use Chinese-LLaMA-2-7B instead
d. Use another third-party source
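
For option a, a minimal sketch using git-lfs; it assumes you have a Hugging Face account, have accepted Meta's license for the gated meta-llama/Llama-2-7b-chat repo, and have git-lfs installed:

# clone the original PyTorch checkpoint (consolidated.00.pth etc.) from Hugging Face
git lfs install
git clone https://huggingface.co/meta-llama/Llama-2-7b-chat ../llama/llama-2-7b-chat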

3. Convert the model to GGUF format

convert.py reads the original PyTorch checkpoint and writes a GGUF file (GGUF is the newer container format that replaced the legacy ggml format).

python3 convert.py ../llama/llama-2-7b-chat/ 
Loading model file ../llama/llama-2-7b-chat/consolidated.00.pth
params = Params(n_vocab=32000, n_embd=4096, n_layer=32, n_ctx=2048, n_ff=11008, n_head=32, n_head_kv=32, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=None, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('../llama/llama-2-7b-chat'))
Found vocab files: {'tokenizer.model': PosixPath('../llama/tokenizer.model'), 'vocab.json': None, 'tokenizer.json': None}
Loading vocab file '../llama/tokenizer.model', type 'spm'
Vocab info: <SentencePieceVocab with 32000 base tokens and 0 added tokens>
Special vocab info: <SpecialVocab with 0 merges, special tokens unset, add special tokens unset>
tok_embeddings.weight                            -> token_embd.weight                        | BF16   | [32000, 4096]
norm.weight                                      -> output_norm.weight                       | BF16   | [4096]
output.weight                                    -> output.weight                            | BF16   | [32000, 4096]
layers.0.attention.wq.weight                     -> blk.0.attn_q.weight                      | BF16   | [4096, 4096]
...
layers.31.ffn_norm.weight                        -> blk.31.ffn_norm.weight                   | BF16   | [4096]
skipping tensor rope_freqs
Writing ../llama/llama-2-7b-chat/ggml-model-f16.gguf, format 1
Ignoring added_tokens.json since model matches vocab size without it.
gguf: This GGUF file is for Little Endian only
[  1/291] Writing tensor token_embd.weight                      | size  32000 x   4096  | type F16  | T+   3
...
[291/291] Writing tensor blk.31.ffn_norm.weight                 | size   4096           | type F32  | T+ 314
Wrote ../llama/llama-2-7b-chat/ggml-model-f16.gguf
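
Above, convert.py inferred the output precision and file name; both can be set explicitly. A sketch, assuming the --outtype and --outfile options of the convert.py shipped with this version:

# equivalent conversion with precision and destination made explicit
python3 convert.py ../llama/llama-2-7b-chat/ --outtype f16 --outfile ../llama/llama-2-7b-chat/ggml-model-f16.gguf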

4. Quantize the model to reduce resource usage

Quantizing from F16 to Q4_0 cuts this model from roughly 12.8 GB to 3.6 GB, as the log below shows.

./quantize ../llama/llama-2-7b-chat/ggml-model-f16.gguf  ../llama/llama-2-7b-chat/ggml-model-f16-q4_0.gguf q4_0 
main: build = 2060 (5ed26e1f)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing '../llama/llama-2-7b-chat/ggml-model-f16.gguf' to '../llama/llama-2-7b-chat/ggml-model-f16-q4_0.gguf' as Q4_0
llama_model_loader: loaded meta data with 15 key-value pairs and 291 tensors from ../llama/llama-2-7b-chat/ggml-model-f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = llama
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 1
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:  226 tensors
llama_model_quantize_internal: meta size = 740928 bytes
[   1/ 291]                    token_embd.weight - [ 4096, 32000,     1,     1], type =    f16, quantizing to q4_0 .. size =   250.00 MiB ->    70.31 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021 
...   
[ 291/ 291]               blk.31.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
llama_model_quantize_internal: model size  = 12853.02 MB
llama_model_quantize_internal: quant size  =  3647.87 MB
llama_model_quantize_internal: hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021 
main: quantize time = 323302.84 ms
main:    total time = 323302.84 ms
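
Q4_0 is only one of the supported quantization types; K-quants such as Q4_K_M usually give noticeably better quality at a similar size. A sketch (running ./quantize with no arguments prints the full list your build supports):

# alternative: a K-quant, often a better quality/size trade-off than Q4_0
./quantize ../llama/llama-2-7b-chat/ggml-model-f16.gguf ../llama/llama-2-7b-chat/ggml-model-q4_k_m.gguf Q4_K_M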

5. Run the model

./main -m ../llama/llama-2-7b-chat/ggml-model-f16-q4_0.gguf -n 256 --repeat_penalty 1.0 --color -ins
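
Here -n 256 caps generation at 256 tokens, --repeat_penalty 1.0 disables the repetition penalty, --color highlights your own input, and -ins runs in interactive instruction mode. The model can also be exposed over HTTP with the server binary built alongside main; a sketch (host and port values are my own choices):

# serve the quantized model with the built-in web UI / HTTP API
./server -m ../llama/llama-2-7b-chat/ggml-model-f16-q4_0.gguf -c 2048 --host 0.0.0.0 --port 8080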
