NVIDIA RTX 3090 — 24GB

The go-to 24GB card for local AI. Fast, widely available, huge community support.

Specifications

Brand: NVIDIA
Model: RTX 3090
VRAM: 24GB
Architecture: Ampere
CUDA Cores: 10,496
Memory Bandwidth: 936 GB/s
TDP: 350W
FP32 TFLOPS: 36
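
The FP32 figure is consistent with the core count. A quick sanity check, assuming the 3090's published boost clock of roughly 1.70 GHz (not listed in the table above):

```python
cuda_cores = 10_496
boost_clock_ghz = 1.70  # assumption: the published boost clock is ~1695 MHz

# FP32 throughput = 2 FLOPs per FMA x cores x clock.
tflops = 2 * cuda_cores * boost_clock_ghz / 1_000
print(f"{tflops:.1f} TFLOPS")  # ~35.7, matching the quoted 36
```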


For AI / LLM Use

Solid choice for 30B models and comfortable 14B inference.

What Models Can It Run?

  • 30B Q4_K_M, 14B Q8 (full FP16 fits only up to ~7B), 70B Q2 (tight); see the sizing sketch below
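
A rough way to check these fits, as the bullet above references: weight memory in GB is approximately parameters (in billions) × bits-per-weight ÷ 8. A minimal sketch; the bits-per-weight values are assumed averages for llama.cpp GGUF quants, and real file sizes vary by architecture:

```python
# Approximate bits per weight for common GGUF quants (assumed averages).
BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q6_K": 6.6, "Q8_0": 8.5, "FP16": 16.0}

def weights_gb(params_billion: float, quant: str) -> float:
    # 1B parameters at 8 bits is 1 GB, so GB = B-params * bpw / 8.
    return params_billion * BITS_PER_WEIGHT[quant] / 8

for params, quant in [(30, "Q4_K_M"), (14, "Q8_0"), (70, "Q2_K")]:
    print(f"{params}B {quant}: ~{weights_gb(params, quant):.1f} GB of 24 GB")

# Prints ~18.0, ~14.9 and ~22.8 GB. KV cache and CUDA buffers add a
# further 1-3 GB on top, which is what makes the 70B Q2 fit "tight".
```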

Estimated Performance

Generation: ~70 tokens/sec

Prefill: ~643 tokens/sec
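
These numbers follow from memory bandwidth: single-stream decode re-reads roughly the whole weight set for every generated token, so a first-order ceiling is bandwidth ÷ model size. A back-of-envelope sketch; the 13 GB example weight size is an assumption chosen to match the quoted figure:

```python
bandwidth_gbps = 936  # RTX 3090 memory bandwidth from the spec table
model_gb = 13         # assumed weight size, e.g. a ~14B model at Q6/Q8

# Upper bound: one full pass over the weights per generated token.
# Real kernels typically reach 60-80% of this.
print(f"~{bandwidth_gbps / model_gb:.0f} tokens/sec ceiling")  # ~72
```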

Recommended Quantisations

  • Q4_K_M recommended for 30B models (runnable example after this list)
  • Q6_K or Q8 for 14B and below
  • Full precision for 7B
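
Putting the 30B recommendation into practice, a minimal sketch using llama-cpp-python (assumes a CUDA-enabled build of the package; the model path is a placeholder, not a real file):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/30b-q4_k_m.gguf",  # hypothetical file name
    n_gpu_layers=-1,  # offload all layers; a 30B Q4_K_M fits in 24GB
    n_ctx=4096,       # modest context keeps the KV cache well inside VRAM
)

out = llm("Explain quantisation in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```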

Pros & Cons

Pros

  • 24GB VRAM — handles large models
  • High memory bandwidth for fast generation
  • Ampere architecture — good software support
  • Consumer card — easy to install, display output

Cons

  • 350W TDP and high power draw (a common mitigation is sketched below)
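
A common community mitigation (not vendor guidance, and 280W is an illustrative value): cap board power with nvidia-smi. Because decode is bandwidth-bound rather than compute-bound, the 3090 typically loses little inference speed when power-limited:

```python
import subprocess

# Cap the board power limit to 280W (requires administrator rights).
# Check the supported range first with: nvidia-smi -q -d POWER
subprocess.run(["nvidia-smi", "-pl", "280"], check=True)
```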

Community Verdict

Average score: 9.1/10 (1 scored review)

  • TechPowerUp 9.1/10

    Excellent performance across the board with 24GB VRAM making it future-proof for AI workloads.
  • r/LocalLLaMA

    Community favourite for 24GB VRAM at reasonable prices. Widely recommended as the best value for running 30B+ models.