quantizing-models-bitsandbytes
Quantizes LLMs to 8-bit or 4-bit for 50-75% memory reduction with minimal accuracy loss
Description
Quantizes LLMs to 8-bit or 4-bit for a 50-75% memory reduction with minimal accuracy loss. Use it when GPU memory is limited, when a larger model needs to fit on available hardware, or when faster inference is desired. Supports INT8, NF4, and FP4 formats, QLoRA training, and 8-bit optimizers. Works with HuggingFace Transformers.
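As a minimal sketch of the workflow this skill covers, the snippet below loads a causal LM in 4-bit NF4 via HuggingFace Transformers and bitsandbytes; the model name is illustrative only, and a recent transformers/bitsandbytes install with a CUDA GPU is assumed.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # hypothetical example model, not part of the skill

# 4-bit NF4 quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place quantized weights across available GPUs
)

Swapping load_in_4bit for load_in_8bit (and dropping the 4-bit options) gives INT8 loading instead; the same quantized model can serve as the frozen base for QLoRA fine-tuning.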