tensorrt-llm

Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency.

Category: AI & Machine Learning
Author: davila7
Updated: Jan 2026
Tags: 6

Install Command: claude skill add davila7/claude-code-templates

Description

Use for production deployment on NVIDIA GPUs (A100/H100), for cases where you need 10-100x faster inference than PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
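As a rough illustration of what the workflow above looks like in practice, here is a minimal sketch using TensorRT-LLM's high-level Python `LLM` API. This listing does not ship this code; the model name is a placeholder, and running it requires an NVIDIA GPU with the `tensorrt_llm` package installed.

```python
# Hedged sketch: serving a model with TensorRT-LLM's high-level LLM API.
# Assumes tensorrt_llm is installed and an NVIDIA GPU is available;
# the model name below is an illustrative placeholder.
from tensorrt_llm import LLM, SamplingParams

# Building the LLM compiles (or loads) an optimized TensorRT engine
# for the given checkpoint; in-flight batching is handled internally.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

params = SamplingParams(max_tokens=64, temperature=0.8)

# generate() accepts a batch of prompts and returns one result per prompt.
for result in llm.generate(["Explain in-flight batching in one sentence."], params):
    print(result.outputs[0].text)
```

Quantization (FP8/INT4) and multi-GPU scaling are configured through additional engine-build options rather than this basic call, so consult the TensorRT-LLM documentation for those settings.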

Tags: NVIDIA TensorRT, LLM, Inference, AI, Deep Learning, GPU

Information

Developer: davila7
Category: AI & Machine Learning
Created: Jan 15, 2026
Updated: Jan 15, 2026
