On macOS, bitsandbytes was compiled without GPU support and its CPU fallback library then fails to load:

/Users/aryasarukkai/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so.
dlopen(/Users/aryasarukkai/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so, 0x0006): tried: '/Users/aryasarukkai/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (not a mach-o file)
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so.
For bug reports, please submit your error trace to:

On Windows, the same bitsandbytes warning appears, and loading the GPTQ-quantized Vicuna model then fails because the llama_inference_offload module cannot be found:

E:\vicuna-chatgpt4\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.")
CUDA SETUP: Loading binary E:\vicuna-chatgpt4\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll.
Loading anon8231489123_vicuna-13b-GPTQ-4bit-128g.
Traceback (most recent call last):
  File "E:\vicuna-chatgpt4\oobabooga-windows\text-generation-webui\server.py", line 302, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "E:\vicuna-chatgpt4\oobabooga-windows\text-generation-webui\modules\models.py", line 100, in load_model
    from modules.GPTQ_loader import load_quantized
  File "E:\vicuna-chatgpt4\oobabooga-windows\text-generation-webui\modules\GPTQ_loader.py", line 14, in <module>
ModuleNotFoundError: No module named 'llama_inference_offload'
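The ModuleNotFoundError in the Windows trace boils down to standard Python import resolution: `llama_inference_offload` is not a pip package, and the webui's GPTQ loader can only import it if the directory containing it (commonly a GPTQ-for-LLaMa checkout) is on `sys.path`. The sketch below simulates that failure and fix with a throwaway stand-in module; the temp file and its `READY` flag are hypothetical, for illustration only.

```python
import pathlib
import sys
import tempfile

# Create a stand-in for the missing module in a temp directory that is NOT
# yet on sys.path (mimics a GPTQ-for-LLaMa checkout Python can't see).
repo_dir = pathlib.Path(tempfile.mkdtemp())
(repo_dir / "llama_inference_offload.py").write_text("READY = True\n")

try:
    import llama_inference_offload  # fails: directory not on sys.path yet
except ModuleNotFoundError as exc:
    print(exc)  # No module named 'llama_inference_offload'

# Putting the checkout directory on sys.path is what makes the import work;
# this mirrors what the webui does when the repositories/ folder is in place.
sys.path.insert(0, str(repo_dir))
import llama_inference_offload

print(llama_inference_offload.READY)
```

In other words, the traceback usually indicates a missing or misplaced GPTQ-for-LLaMa checkout rather than a bug in GPTQ_loader.py itself.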